Another possible way to accelerate MB set deep zooming


gerrit
« Reply #75 on: June 15, 2018, 04:21:05 AM »
Great that you are making progress on this. Any chance of a .exe of nanoMB to play with?

claude (mathr.co.uk)
« Reply #76 on: June 15, 2018, 03:20:31 PM »
Here's a 64-bit EXE with (hopefully all) the DLL deps.  Tested briefly in WINE on Linux.  The license of this EXE must be GPL3, as inherited from the libraries used, so please don't redistribute it without the sources (which are included in the zip).

Compiling your own would let you add -march=native instead of being stuck with lowest-common-denominator CPU features, so I recommend that path.
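For example, a build line along these lines (a sketch only: the actual source file name and library list depend on the sources in the zip; MPFR/GMP and pthreads are assumed):

Code: [Select]
g++ -O3 -march=native -pthread -o nanomb nanomb.cpp -lmpfr -lgmp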

claude (mathr.co.uk)
« Reply #77 on: June 15, 2018, 04:29:57 PM »
(using 7 threads for Newton's method)

I benchmarked the overhead vs unthreaded:

Code: [Select]
7 threads
real 9m42.095s
user 67m51.002s
sys 0m0.088s

1 thread
real 14m45.536s
user 14m42.206s
sys 0m0.060s

(3 iterations of Newton's method, period 100000, precision 500000 bits, identical results)

So roughly 34% less wall-clock time, but about 4.6x the total CPU time expended.  Not worth it imo.
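(Checking the arithmetic from the timings above: wall clock \( 885.5\,\mathrm{s} / 582.1\,\mathrm{s} \approx 1.52 \), i.e. about a 34% saving, while CPU time is \( 4071.0\,\mathrm{s} / 882.2\,\mathrm{s} \approx 4.6 \) times higher.)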

gerrit
« Reply #78 on: June 16, 2018, 01:07:51 AM »
Thanks, it works well. With M=6 N=3 I haven't managed to find an image where it doesn't work.

I made an attempt at compiling it myself under Cygwin, but couldn't install GMP due to a "recipe failed ERROR 1" message, whatever that means.

claude (mathr.co.uk)
« Reply #79 on: June 18, 2018, 06:42:20 PM »
Zoom: 1E1000000 (1e(1e6))
Iterations: 1100100100

Period: 830484
Bits: 3321992

Code: [Select]
real   269m18.978s
user   2207m56.913s
sys    1m24.861s

4x4 supersampling with jitter (not enough, but better than nothing).
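For illustration, a minimal sketch of what 4x4 jittered supersampling means here (hypothetical names; not nanoMB's actual code):

Code: [Select]
#include <random>

// Placeholder for the real per-sample fractal computation (hypothetical).
static double shade(double x, double y)
{
    return 0.0;
}

// 4x4 stratified ("jittered") supersampling for one pixel: one random
// offset inside each of the 16 sub-cells, then average the samples.
double renderPixel(int px, int py, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const int n = 4;                      // 4x4 = 16 samples per pixel
    double sum = 0.0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
        {
            double sx = px + (i + u(rng)) / n;  // jittered sub-pixel x
            double sy = py + (j + u(rng)) / n;  // jittered sub-pixel y
            sum += shade(sx, sy);
        }
    return sum / (n * n);                 // average of the samples
}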

gerrit
« Reply #80 on: June 18, 2018, 07:38:29 PM »
Great, where no-one has gone before :)

knighty
« Reply #81 on: Yesterday at 04:48:00 PM »
:o
 :thumbs:
Thanks for the new version!
Some remarks (a sketch of the first two follows this list):
- The computed escape radius is accurate only with a relatively high number of M and N terms (say 8, 8). For the default M and N (4, 4), the computed escape radius needs to be scaled down by a factor of around 0.25 (I use 0.1 because it doesn't affect performance).
- For odd i the coefficients are 0, so it is possible to optimize a little.
- When loading a kfr file, overriding some parameters (for example the zoom factor) doesn't work.
- Using the floatexp type is much slower. Is it possible to perform the rescaling "manually"? I remember Pauldebrot once said that it is possible to predict when rescaling will be necessary when using perturbation. For series approximation it is IMHO not really critical. (See the second sketch at the end of this post.)
- The zoom thresholds for float-type selection assume that the (super-)SA is scaled, which is not implemented yet: those thresholds need to be lowered for now.
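A minimal sketch of the first two remarks (illustrative names, not nanoMB's actual API):

Code: [Select]
#include <complex>
#include <vector>

// Evaluate the series using only even-index coefficients, since the
// odd-index ones are identically zero.
std::complex<double> evaluateSeries(const std::vector<std::complex<double>> &a,
                                    std::complex<double> z)
{
    std::complex<double> z2 = z * z, zpow = 1.0, sum = 0.0;
    for (size_t k = 0; 2 * k < a.size(); ++k)
    {
        sum += a[2 * k] * zpow;  // a[2k] * z^(2k)
        zpow *= z2;
    }
    return sum;
}

// With few terms (the default M = N = 4) the computed escape radius is
// too optimistic; scale it down (0.1 costs nothing performance-wise).
double safeEscapeRadius(double computedRadius) { return 0.1 * computedRadius; }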

I believe it is possible to extend this method by using the parent nucleus(es), but it is not straightforward. That would give huge speed-ups and might make the glitch problem disappear :) . The fact is that when zooming into an embedded Julia set, the shape is the same (up to an almost-linear transformation) as if we had rendered the Julia set at the same location.
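On the floatexp remark above: a minimal sketch of the usual double-plus-wide-exponent representation, just to show where the cost comes from (renormalization on every operation); Kalles Fraktaler's actual floatexp differs in detail:

Code: [Select]
#include <cmath>
#include <cstdint>

// A double mantissa with a separate wide exponent, so magnitudes far
// outside double range stay representable. Renormalizing after every
// operation is what makes this slower than plain doubles that are
// rescaled "manually" only when magnitudes approach the double limits.
struct FloatExp
{
    double  m;  // mantissa in [0.5, 1), or 0
    int64_t e;  // power-of-two exponent

    static FloatExp make(double x, int64_t e0 = 0)
    {
        int ex = 0;
        double mm = std::frexp(x, &ex);  // x = mm * 2^ex
        return { mm, e0 + ex };
    }
    FloatExp operator*(const FloatExp &o) const
    {
        return make(m * o.m, e + o.e);   // renormalize after the multiply
    }
    double toDouble() const
    {
        return std::ldexp(m, (int)e);    // only valid when e fits a double
    }
};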

gerrit
« Reply #82 on: Yesterday at 05:55:19 PM »
I find M,N = (6,3) always works, but (4,4) never does. M = 2N seems logical, since we expand in z^2.

claude (mathr.co.uk)
« Reply #83 on: Yesterday at 09:55:04 PM »
This image needs some imagination. Using KF I went to location \( i \) at zoom \( 10^{500000} \), then located a minibrot of period 1328773 using Newton-Raphson zooming. That took about a month, resulting in the attached kfr. I tried rendering it (on 3.4 GHz CPUs), but after 2 days it had only computed 1% of the reference orbit, and I'm too impatient to wait a year for the result. Maybe someone with a quantum computer wants to take a shot at it?

About 20 hours to compute the reference, single-threaded, then 5 hours wall clock with 16 cores for the pixels.  4x4 supersampling with jitter.  I used M=4 N=4 without any correction to the radius, so whether it is totally accurate is a bit suspect...  Much slower than the needle location; I guess MPFR can optimize away all the operations involving 0.0 (at the needle the reference orbit stays on the real axis, so the imaginary parts are exactly 0.0).

Thanks for the additional info and tips, I may integrate them into my copy.

gerrit
« Reply #84 on: Today at 01:26:39 AM »
Nice. I guess a better algorithm is better than a quantum computer.

