
Author Topic:  Another possible way to accelerate MB set deep zooming


Offline vasyan

  • Strange Attractor
  • ******
  • Posts: 90
    • stone voices
Re: Another possible way to accelerate MB set deep zooming
« Reply #210 on: January 20, 2020, 11:36:22 AM »
After numerous experiments and coding work, he made his own fork of this project, called Nanobrot ;)



Offline claude

  • 3f
  • ******
  • Posts: 1915
    • mathr.co.uk
Re: Another possible way to accelerate MB set deep zooming
« Reply #211 on: January 31, 2020, 09:57:15 AM »
NanoMB2 failure modes

1: all OK deep near the minibrot, it seems
31: but zooming out, a seam is visible at the transition from spirals (outside) into spokes (inside)
62: zooming out, a seam again, plus precision-loss artifacts (the default image-space log-DE colouring means the edges of equal-iteration-count areas are coloured while the regions themselves are white)
15: noticing that the seams seem to occur at the ends of the spirals, I found another seam deeper than the previous two
125: the artifacts seem to get more severe when zooming out
00250_9.48e080.png (not attached) is where the precision-loss grid is worst (and axis-aligned); it's at the transition between zooming "off-center" and starting the dive to the final mini

So there are (at least) three ways NanoMB2 can go wrong:

1. precision loss when zooming too far "off-center" (eg zooming deeply towards a Misiurewicz point before heading down toward the next minibrot)

2. seams when the nested references don't line up properly

3. rendering is much slower in the parts with echoes of deep Misiurewicz spirals than elsewhere.

Ideas for detecting them

Problem 1 can be detected before rendering by comparing the super-escape-radius of the inner mini with the distance between the inner and outer minis' nuclei. Something like "|r| / |c_i - c_o| < 1e-16 * image dimension in pixels" should do the trick.  If none of the adjacent pairs of reference minis meets this criterion, it should be OK to render the video.  Maybe this criterion could also be adjusted to give an estimate of "how much precision is needed in the worst case", something like bits = log2(image dimension) - log2(|r| / |c_i - c_o|)

Problem 2 could maybe be detected automatically after rendering by checking distance estimates for seams (neighbouring pixels whose DEs differ by too much, using triangle inequalities and the 1/4 theorem).  Guessing that the seams occur at the edge of the super-escape-radius and its echoes (r, sqrt(r * size), sqrt(sqrt(r * size) * size), ...) closer towards the mini, you might be able to pre-render just two rings of pixels at the expected seam location(s) and check that all is OK before doing the whole video.
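To make that testable, here is a minimal sketch of computing the expected seam radii; the function name and signature are made up for illustration, following the echo sequence guessed above:

Code: [Select]
#include <cmath>
#include <vector>

// expected seam radii: r, sqrt(r*size), sqrt(sqrt(r*size)*size), ...
// each echo is the geometric mean of the previous radius and the
// mini's size, so the sequence moves inwards towards the mini
std::vector<long double> seamRadii(long double r, long double size, int echoes)
{
    std::vector<long double> radii;
    long double s = r;
    for (int i = 0; i <= echoes; ++i)
    {
        radii.push_back(s);
        s = std::sqrt(s * size);
    }
    return radii;
}

Rings of pixels at these radii around the mini's nucleus would then be the candidate locations for the two-ring pre-render check.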

Problem 3 could maybe be estimated by rendering a ring of pixels just outside each expected seam location and seeing if their super-iteration-count is large.

Untested ideas for fixing them

Problem 1: use more precision (simple, but may be slow). Alternatively: figure out some kind of super-perturbation (probably super-hard)

Problem 2: use the ring-of-pixels seam detection mechanism above iteratively, to auto-tune the best super-escape-radius adjustment factor for each mini.  Making the super-escape-radius much smaller might need more precision (problem 1 again) and also more super-iterations (slower)

Problem 3: Misiurewicz points have repelling dynamics and are pre-periodic; around them the M-set is asymptotically self-similar (i.e. a spiral), and the multiplier (the derivative of one cycle of the periodic part w.r.t. z) gives the (complex) scale factor (including rotation) for the similarity.  So maybe this could be fixed by somehow using Misiurewicz points as additional references, and this might also fix problem 1 if we're lucky?

Recap: near a minibrot of period p (small z, c relative to it), NanoMB2 computes a series for f^p(z, c) and applies it until the orbit escapes beyond a super-escape-radius (sqrt(size(mini))?), then shifts to an outer minibrot and repeats the process.  So we want a series for a Misiurewicz point with preperiod q and period p that computes f^p(z, c) for nearby small z, c.  I think it's just f^p(z, c) = m z, where m is the multiplier.  If we could find an appropriate super-escape-radius r for the Misiurewicz point, then we could use logs to quickly find the smallest n such that |m^n z| > r : n = ceil((log r - log |z|) / log |m|).
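If that linearisation holds, the skip count is a closed form rather than a loop. A minimal sketch, with a made-up function name, using log |m| since m is complex:

Code: [Select]
#include <cmath>
#include <complex>

// smallest n with |m^n z| > r, assuming f^p(z, c) ≈ m z near the
// Misiurewicz point, with |m| > 1 since the dynamics are repelling
long long superItersToEscape(std::complex<long double> z,
                             std::complex<long double> m,
                             long double r)
{
    long double n = (std::log(r) - std::log(std::abs(z)))
                  / std::log(std::abs(m));
    return n < 0 ? 0 : (long long) std::ceil(n);
}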

Offline claude

  • 3f
  • ******
  • Posts: 1915
    • mathr.co.uk
Re: Another possible way to accelerate MB set deep zooming
« Reply #212 on: January 31, 2020, 12:00:41 PM »
Quote from: claude on January 31, 2020, 09:57:15 AM
Maybe this criterion could also be adjusted to give an estimate of "how much precision is needed in the worst case", something like bits = log2(image dimension) - log2(|r| / |c_i - c_o|)

I think so: in the NanoMB2 code, after it prints the info for each SSA, this code can calculate the required precision:

Code: [Select]
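// estimate bits of precision needed for the hand-off from the previous
// reference centre (HCcentre_old) to this one: log2 of (image width *
// centre-to-centre distance / super-escape radius), radius clamped to 2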
int bits = int(ceil(-log2(min(2.0L, SSA.getEscRadius()) / (width * abs(HCcentre - HCcentre_old)))));
bits_required = max(bits_required, bits);
HCcentre_old = HCcentre;
cout << "\t Precision: " << bits << endl;

Then compare bits_required with your data type to see if you can render with it safely.

For example, the location above (with R_lo = long double and width = 512px) gives "precision is sufficient: 59 <= 64", and the precision required is too high for double's 53 bits (as used in floatexp in KF's NanoMB2 implementation).  Another location (the same, but zoomed in even further on the spiral before the final minibrot dive) gives "WARNING: insufficient precision:  172 >  64" and trying to render it with long double gives big blank regions in both emndl's NanoMB2 and KF's NanoMB2.

Offline claude

  • 3f
  • ******
  • Posts: 1915
    • mathr.co.uk
Re: Another possible way to accelerate MB set deep zooming
« Reply #213 on: January 31, 2020, 02:16:32 PM »
It seems precision loss is not the only problem: increasing precision (I tried 2kbits) stops the grungy top edge of the green area in emndl-fail.png, but the big gap is still there...

Offline quaz0r

  • Fractal Feline
  • **
  • Posts: 157
Re: Another possible way to accelerate MB set deep zooming
« Reply #214 on: September 23, 2020, 08:34:44 AM »
after not working on this stuff for quite a while, i then also started a complete rewrite of my mandelbrot program.  i haven't quite gotten to the acceleration techniques yet, but i am getting close, so i started thinking about how i want to approach it: implement the original SA+perturbation technique, or jump right in to what you guys are discussing here.  i went back and read through this whole thread, and i at least have a better idea of what you are describing than when i first saw this discussion.  :embarrass:  after reading through it all again, i have a few thoughts.  i am not a math scientist like some of you, so please forgive my ignorance if my thoughts or questions are stupid or wrong.

my first observation is that it seems like we came up with a number of different ideas, and proceeded to suffer from the pitfall of "let's implement all the new ideas at once, and hope it all works out."  then when things go wrong, we aren't sure what exactly went wrong, why, or how to fix it.  this business of utilizing a "super series" to do "super iterations" sounds awesome of course, but it is more of an additional trick that could be added on to a general-purpose algorithm after first nailing down an implementation of that underlying algorithm.

the algorithm for the general case seems to be "identify the series of roots that leads to this location, then, using these roots as references, calculate the location by successively handing off to the next reference."  in theory, then, the absolute simplest implementation of this algorithm would be simple perturbation alone.  it seems this should also eliminate the phenomenon of perturbation "glitches."

in practice, i suppose you would at least want to start by applying series approximation to this algorithm.  in theory at least, it seems to me that you should be able to simply recycle the techniques we've already developed for SA: perform the SA with the current reference, using whatever stopping condition you've already decided works well, hand off to the next reference, repeat.  once you've worked your way out to the top, you should be pretty close and hopefully able to finish with normal iterations.

if one were then to add super iterations to this algorithm, it seems to me the situation you would find yourself in is doing some multiple of the period's iterations, then being left with some number of iterations (less than the period) to bridge the gap to the next reference.  you would then bridge this gap with normal SA (and/or non-SA iterations if necessary, but hopefully SA always gets you there).  if the period of the current reference is an exact multiple of the next reference's, then perhaps you could do a direct handoff between super series?

indeed, after reading through this thread again, the primary point of confusion, debate, and implementation failure seems to be the handoff between super series.  it seems to my mind that gerrit must have been right when he first insisted that you can't just hand straight off between super series, unless perhaps one period is an exact multiple of the other.  knighty repeatedly insisted that you should be able to immediately hand off to the next reference at any point along the way, because you are still "inside its escape radius."  unless i am missing something, that sounds obviously backwards to me.  you are trying to escape the current reference's influence into the next, not simply stay within the escape radius of the next.  if the criterion were in fact just staying inside the escape radius of an outer mini, then that criterion is always satisfied, and you could zoom to some infinite depth and simply use the outermost mini as reference, which is obviously backward.  bearing in mind also that if we had a nickel for every time knighty said "i did a mistake", we would all be rich.  O:-)

on the matter of identifying a proper stopping condition for a super series, this business of attempting to divine a magic "escape radius" sounds suspect in general.  instead of "did we get there yet," shouldn't it be a "has this failed yet" sort of condition, like what is already used with regular SA?  an error condition seems more intuitively correct and attainable than some ill-conceived success condition that relies on voodoo and hand-tuned magic numbers and the like.  i suspect you will not achieve a truly reliable implementation with the latter.

anyway, it seems that the proper place to start with all this is to first establish a correct implementation of this algorithm with regular SA.  this would eliminate some obvious complexities and points of failure, like finding the periods, sizes, shapes, etc. of roots.  can any of that even be done with near-100% reliability?  once a proper algorithm is identified and implemented for reliably handing off between references using SA (and possibly also non-SA iterations?), adding super iterations to the algorithm should be much easier to correctly implement and debug.
« Last Edit: September 23, 2020, 01:33:21 PM by quaz0r »

Offline unassigned

  • Fractal Fruit Salad
  • *****
  • Posts: 62
Re: Another possible way to accelerate MB set deep zooming
« Reply #215 on: September 26, 2020, 11:59:35 PM »
Quote from: quaz0r on September 23, 2020, 08:34:44 AM
anyway, it seems that the proper place to start with all this is to first establish a correct implementation of this algorithm with regular SA.
A bit off topic, but thinking about this: how computationally expensive is it to actually calculate the roots within the view window? A modified probe method has the potential to be quite computationally efficient. The probe method currently works by iterating some probes using SA and finding the minimum iteration at which the approximation is no longer accurate anywhere within the window. I think that, given we have already calculated some probes, this could be used to do 'tiled' SA skipping.

Most of the time spent calculating the SA skip (in my program at least) comes from evaluating the SA at each iteration to check that it is still valid. Let's say we iterate one probe point fully and find its maximal skip value. If we assume that this value is within 5% of the minimum skip (or at least within 1000 iterations of it), we could start all of the other probes from that iteration and save quite a bit of computation. This could be done in such a fashion that probes are evaluated from the top left through to the bottom right, using only their neighbour probes for this starting value. Once this is done, each of these 'tiles' can be evaluated using the minimum skip of its vertices.
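A rough sketch of that cascade; probeMaxSkip is a hypothetical stand-in for whatever routine iterates one probe and checks the SA's validity at each iteration, and the 1-D loop stands in for the top-left-to-bottom-right sweep over neighbouring probes:

Code: [Select]
#include <algorithm>
#include <vector>

// hypothetical: iterate probe i, checking SA validity from iteration
// `start` onward, and return the last iteration where it still holds
long long probeMaxSkip(int i, long long start);

std::vector<long long> cascadeSkips(int nProbes)
{
    std::vector<long long> skip(nProbes);
    skip[0] = probeMaxSkip(0, 0);  // the first probe pays the full cost
    for (int i = 1; i < nProbes; ++i)
    {
        // start later probes just below a known neighbour's skip:
        // within 5% of it, or at least 1000 iterations below
        long long start = std::max(0LL,
            std::min((long long)(0.95L * skip[i - 1]), skip[i - 1] - 1000));
        skip[i] = probeMaxSkip(i, start);
    }
    return skip;  // each tile then uses the minimum skip of its vertices
}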

Tiles that have a low maximal skip could have additional references added. I know this is another magic-number test, but it could bring significant performance gains. I've done some tests where an array of probes is used to check the SA. In the 'wiggly fungus spores' location, using 4 (2^2) or 8 (3^2 - 1) probes causes overskipping, with 1077986 iterations skipped, but only a small part of the whole image is overskipped. With 224 (15^2 - 1) probes there is no overskipping, with 1069376 iterations skipped. The difference in iteration times shows how much this ~10000-iteration difference matters: 4.8s vs 15.5s for 'iteration', plus additional time for glitch correction. If only the affected tiles used the lower SA skip, I think the image could be rendered much more quickly without overskipping issues.

If finding the root locations and their radii were faster, maybe they could be used to determine where, and how much, skipping is possible within the image. However, I do think that SSA has good potential near minibrots, either in its raw form or as some kind of indicator of how many iterations can be skipped.


Offline unassigned

  • Fractal Fruit Salad
  • *****
  • Posts: 62
Re: Another possible way to accelerate MB set deep zooming
« Reply #216 on: September 29, 2020, 05:09:18 AM »
Just as some additional insight, I made a few plots of the residuals (approximation vs. true values) for several different locations and several different numbers of probes. The x axis is the iteration count while the y axis is the residual on a log scale. Note that the residuals mostly increase, and that the decreases seem to be periodic (possibly with the same period as the central and other minibrots).

For the 'wfs' (wiggly fungus spores) location, it can be seen that this is overskipping: the skip values from the previous post correspond to the periodic increase that is visible on the plot at both of those values.

Edit: added image of skipped iterations of wfs location
« Last Edit: September 29, 2020, 08:09:59 AM by unassigned »

