Another possible way to accelerate MB set deep zooming


Offline gerrit

« Reply #75 on: June 15, 2018, 04:21:05 AM »
Great that you are making progress on this. Any chance of a .exe of nanoMB to play with?

Offline claude

« Reply #76 on: June 15, 2018, 03:20:31 PM »
Here's a 64-bit EXE with (hopefully all) the DLL deps.  Tested briefly in WINE on Linux.  The license of this EXE must be GPL3, as inherited from the libraries used, so please don't redistribute it without the sources (which are included in the zip).

Compiling your own would allow you to add -march=native instead of targeting "lowest common denominator" CPU features, so I recommend that path.
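For reference, a compile line along these lines should work (a sketch only: the source file name and exact library list are assumptions, so check what's actually shipped in the zip):

Code: [Select]
g++ -O3 -march=native -o nanomb main.cpp -lmpfr -lgmp -lpthread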

Offline claude

« Reply #77 on: June 15, 2018, 04:29:57 PM »
(using 7 threads for Newton's method)

I benchmarked the overhead vs unthreaded:

Code: [Select]
7 threads
real 9m42.095s
user 67m51.002s
sys 0m0.088s

1 thread
real 14m45.536s
user 14m42.206s
sys 0m0.060s

(3 iterations of Newton's method, period 100000, precision 500000 bits, identical results)

So about 34% less wall-clock time, but about 4.6x the total CPU time expended.  Not worth it, imo.
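Spelling out the arithmetic: wall clock went from \( 885.5\,\mathrm{s} \) down to \( 582.1\,\mathrm{s} \), a speedup of about \( 1.52\times \), while CPU time went from \( 882\,\mathrm{s} \) up to \( 4071\,\mathrm{s} \), so the parallel efficiency is only about \( 1.52/7 \approx 22\% \) on 7 threads.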

Offline gerrit

« Reply #78 on: June 16, 2018, 01:07:51 AM »
Quote
Here's a 64-bit EXE with (hopefully all) the DLL deps.  Tested briefly in WINE on Linux.  The license of this EXE must be GPL3, as inherited from the libraries used, so please don't redistribute it without the sources (which are included in the zip).

Compiling your own would allow you to add -march=native instead of targeting "lowest common denominator" CPU features, so I recommend that path.
Thanks, it works well. With M=6 N=3 I haven't managed to find an image where it doesn't work.

I made an attempt at compiling under Cygwin but couldn't install GMP due to "recipe failed ERROR 1", whatever that means.

Offline claude

« Reply #79 on: June 18, 2018, 06:42:20 PM »
Zoom: 1E1000000 (1e(1e6))
Iterations: 1100100100

Period: 830484
Bits: 3321992

real   269m18.978s
user   2207m56.913s
sys   1m24.861s

4x4 supersampling with jitter (not enough, but better than nothing).
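In case anyone wants to reproduce the anti-aliasing: a minimal sketch of 4x4 jittered supersampling (illustrative code, not nanomb's actual implementation), generating one random offset per grid cell, in pixel units:

Code: [Select]
#include <complex>
#include <random>
#include <vector>

// One jittered offset per cell of a grid x grid pattern, centred on the
// pixel: each offset lies in [-0.5, 0.5)^2 in units of one pixel.
std::vector<std::complex<double>> jittered_offsets(int grid, std::mt19937 &rng)
{
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<std::complex<double>> offsets;
    for (int j = 0; j < grid; ++j)
        for (int i = 0; i < grid; ++i)
            offsets.emplace_back((i + u(rng)) / grid - 0.5,
                                 (j + u(rng)) / grid - 0.5);
    return offsets;
}

Each pixel is then iterated once per offset, at c = pixel centre + offset * pixel spacing, and the resulting colours are averaged.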

Offline gerrit

« Reply #80 on: June 18, 2018, 07:38:29 PM »
Great, where no-one has gone before :)

Offline knighty

« Reply #81 on: June 21, 2018, 04:48:00 PM »
 :o
 :thumbs:
Thanks for the new version!
Some remarks:
- The computed value of the escape radius is accurate only when using a relatively high number of M and N terms (say 8, 8). For the default M and N (4, 4), it is necessary to scale down the computed escape radius by a factor of around 0.25 (I use 0.1 because it doesn't affect performance).
- For odd i, the coefficients are 0, so it is possible to optimize a little.
- When loading a kfr file, overriding some parameters doesn't work (for example the zoom factor).
- Using the floatexp type is much slower. Is it possible to perform the rescaling "manually"? I remember pauldelbrot once said that it is possible to predict when rescaling is necessary when using perturbation. For series approximation it is IMHO not really critical.
- The zoom thresholds for float type selection assume that the (super)SA is scaled, which is not implemented: those thresholds need to be lowered for now.

I believe it is possible to extend this method by using the parent nucleus (or nuclei), but it is not straightforward. That would give huge speed-ups and might make the glitch problem disappear :) . The fact is that when zooming into an embedded Julia set, the shape is the same (up to an almost linear transformation) as if we had rendered the Julia set at the same location.

Offline gerrit

« Reply #82 on: June 21, 2018, 05:55:19 PM »
Quote
:o
:thumbs:
Thanks for the new version!
Some remarks:
- The computed value of the escape radius is accurate only when using a relatively high number of M and N terms (say 8, 8). For the default M and N (4, 4), it is necessary to scale down the computed escape radius by a factor of around 0.25 (I use 0.1 because it doesn't affect performance).
- For odd i, the coefficients are 0, so it is possible to optimize a little.
- When loading a kfr file, overriding some parameters doesn't work (for example the zoom factor).
- Using the floatexp type is much slower. Is it possible to perform the rescaling "manually"? I remember pauldelbrot once said that it is possible to predict when rescaling is necessary when using perturbation. For series approximation it is IMHO not really critical.
- The zoom thresholds for float type selection assume that the (super)SA is scaled, which is not implemented: those thresholds need to be lowered for now.

I believe it is possible to extend this method by using the parent nucleus (or nuclei), but it is not straightforward. That would give huge speed-ups and might make the glitch problem disappear :) . The fact is that when zooming into an embedded Julia set, the shape is the same (up to an almost linear transformation) as if we had rendered the Julia set at the same location.
I find M,N = (6,3) always works, but (4,4) never does. M = 2N seems logical, since we expand in z^2.

Offline claude

« Reply #83 on: June 21, 2018, 09:55:04 PM »
Quote
This image needs some imagination. Using KF I went to location \( i \) at zoom \( 10^{500000} \), then located a minibrot of period 1328773 using NR zooming. This took about a month, resulting in the attached kfr. I tried rendering that (3.4 GHz CPUs), but after 2 days it has only computed 1% of the reference orbit, and I'm too impatient to wait a year for the result. Maybe someone with a quantum computer wants to take a shot at it?

About 20 hours to compute the reference, single-threaded, then 5 hours wall-clock with 16 cores for the pixels.  4x4 supersampling with jitter.  I used M=4 N=4 without any correction to the radius, so whether it is totally accurate is a bit suspect...  It was much slower than the needle location; I guess MPFR can optimize away operations involving 0.0 there.

Thanks for the additional info and tips, I may integrate them into my copy.

Offline gerrit

« Reply #84 on: June 22, 2018, 01:26:39 AM »
Nice. I guess a better algorithm is better than a quantum computer.

Offline gerrit

« Reply #85 on: June 23, 2018, 07:50:16 AM »
This took 2.5 hours at 19200×10800. I started a KF render 4 hours ago but it's still showing 0%, so it will take > 400 hours!

Offline gerrit

« Reply #86 on: June 23, 2018, 08:08:49 AM »
This one does not render correctly at 1920×1080 even up to order (16,16).


Offline pauldelbrot

« Reply #87 on: June 23, 2018, 08:44:00 AM »
Looks like a classic perturbation glitch to me; just needs another reference for those two-arm spiral centers.

Offline claude

« Reply #88 on: June 23, 2018, 01:33:11 PM »
Quote
Looks like a classic perturbation glitch to me; just needs another reference for those two-arm spiral centers.

True, the regular perturbation part of nanomb (after the super iterations have completed) has no glitch detection/correction yet.
I don't know whether there needs to be glitch detection/correction for the super-iteration part as well, or what that would look like...
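For the regular perturbation part it would presumably look like what KF does (pauldelbrot's criterion): flag a pixel as glitched when the full orbit \( Z_n + \delta_n \) gets tiny relative to the reference \( Z_n \), then re-render it with a closer reference. A minimal sketch with illustrative names and an assumed tolerance, not nanomb's actual code:

Code: [Select]
#include <complex>
#include <vector>

// Perturbed iteration delta[n+1] = 2*Z[n]*delta[n] + delta[n]^2 + delta0,
// flagging the pixel as glitched when |Z[n+1] + delta[n+1]| becomes tiny
// relative to |Z[n+1]|.  Returns true if the pixel is glitched.
bool iterate_pixel(const std::vector<std::complex<double>> &Z, // reference orbit, rounded to double
                   std::complex<double> delta0,                // pixel offset from the reference c
                   int maxiter, double escape_radius2, int &iters_out)
{
    const double glitch_tol2 = 1e-6;   // assumed tolerance (on squared magnitudes)
    std::complex<double> delta = 0.0;
    for (int n = 0; n + 1 < maxiter && n + 1 < (int)Z.size(); ++n) {
        delta = 2.0 * Z[n] * delta + delta * delta + delta0;
        const std::complex<double> z = Z[n + 1] + delta;
        if (std::norm(z) < glitch_tol2 * std::norm(Z[n + 1]))
            return true;                              // glitched: needs another reference
        if (std::norm(z) > escape_radius2) {
            iters_out = n + 1;                        // escaped cleanly
            return false;
        }
    }
    iters_out = maxiter;                              // did not escape
    return false;
}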

Offline claude

« Reply #89 on: June 23, 2018, 06:16:30 PM »
Quote
The computed value of the escape radius is accurate only when using a relatively high number of M and N terms (say 8, 8). For the default M and N (4, 4), it is necessary to scale down the computed escape radius by a factor of around 0.25 (I use 0.1 because it doesn't affect performance).
OK, I put a 0.1*r factor in my local copy.

Quote
For odd i, the coefficients are 0, so it is possible to optimize a little.
I'll try this soon. EDIT: It seems nontrivial, as it is initialized like this (see the highlighted line):
Code: [Select]
biPolyClass(N m, N n) : m_m(m), m_n(n) {
  // zero all coefficients of the bivariate polynomial
  for (N l = 0; l <= m_m; l++)
    for (N c = 0; c <= m_n; c++)
      tab[l][c] = C_lo(0);
  tab[1][0] = C_lo(1); // ***************** the only odd-in-z coefficient, set at iteration 0
}
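If it were done, the step itself could look something like this (purely an illustration, assuming an update of the form P <- P^2 + c on the truncated bivariate polynomial; names and storage don't match nanomb's internals). After the first iteration only even powers of z remain, since squaring preserves evenness in z, so both z-degree loops can step by 2; the very first step, starting from P = z, is the one that still has to handle the odd term:

Code: [Select]
#include <vector>

// Sketch: one update step P <- P^2 + c of a truncated bivariate polynomial
// P(z,c), with the coefficient of z^l c^k stored in tab[l][k].  C_lo is
// nanomb's coefficient type.  Valid only once all odd-l coefficients are zero.
void step_even(std::vector<std::vector<C_lo>> &tab, int M, int N)
{
    std::vector<std::vector<C_lo>> out(M + 1, std::vector<C_lo>(N + 1, C_lo(0)));
    for (int l1 = 0; l1 <= M; l1 += 2)
        for (int l2 = 0; l1 + l2 <= M; l2 += 2)
            for (int k1 = 0; k1 <= N; ++k1)
                for (int k2 = 0; k1 + k2 <= N; ++k2)
                    out[l1 + l2][k1 + k2] += tab[l1][k1] * tab[l2][k2];
    if (N >= 1)
        out[0][1] += C_lo(1);  // the "+ c" term
    tab.swap(out);
}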

Quote
When loading a kfr file, overriding some parameters doesn't work (for example the zoom factor).
I think I fixed this locally too.

Quote
Using the floatexp type is much slower. Is it possible to perform the rescaling "manually"? I remember pauldelbrot once said that it is possible to predict when rescaling is necessary when using perturbation. For series approximation it is IMHO not really critical.
I'm not sure how much it would gain versus the developer time needed to implement it correctly; at least in KF, scaled (long) double only doubles the usable exponent range before you have to switch to the next type.
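For the plain perturbation iterations the trick, as I understand it (not implemented here), is to write \( \delta_n = S\, w_n \) with a single extended-range scale \( S \): the step \( \delta_{n+1} = 2 Z_n \delta_n + \delta_n^2 + \delta_0 \) becomes \( w_{n+1} = 2 Z_n w_n + S\, w_n^2 + w_0 \) with \( \delta_0 = S\, w_0 \), so \( w_n \) stays in ordinary double range and only the one factor \( S \) needs extended range; whenever \( |w_n| \) drifts too far from 1, fold its magnitude back into \( S \) (\( w \leftarrow w/k \), \( w_0 \leftarrow w_0/k \), \( S \leftarrow k S \)). Whether the same pays off for the super-SA coefficients is another question.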

Quote
The zoom thresholds for float type selection assume that the (super)SA is scaled, which is not implemented: those thresholds need to be lowered for now.
Hmm, do the coefficients overflow to infinity or something? I just copied what I thought made sense from the number type ranges.  How much do they need lowering? Does it depend on M and N?  Concrete (tested) suggestions for threshold values would help!
« Last Edit: June 23, 2018, 07:07:47 PM by claude, Reason: odd is 0, except at iteration 0 »

