### Another possible way to accelerate MB set deep zooming


#### knighty #### Another possible way to accelerate MB set deep zooming

« on: September 11, 2017, 12:10:23 AM »
The MB set formula
f(z,c) = z^2 + c
when iterated for a given center c0 of a hyperbolic component with period n obviously yields 0:
f^n(0, c0) = 0

The idea is to approximate f^n(z,c) around (z = 0, c = c0) by a bivariate polynomial:
p(t,u) ≈ f^n(t, c0 + u)
The coefficients of that polynomial can be evaluated the same way as for series approximation.
Then, when near a minibrot, one only has to iterate p(t,u) instead of f(z,c) until t becomes "sufficiently" large. This means that we will do one iteration of p() instead of n iterations of f().
There is more: say we have the list of the minibrots that are "near" the zoomed location. For each minibrot we have the center Ci of its cardioid and its period Ni. They are obtained by checking the atom domains that are related to the location (filtering out the centers of the disk-like hyperbolic components). Of course they are sorted from C0 = (0,0); N0 = 1 to the deepest minibrot Cn; Nn.

For each such minibrot, we compute the polynomial pi() approximating f^Ni(t, Ci + u).

Beginning with i = n, instead of iterating f(z,c) we iterate pi(t,u) until t is too large. At that point we switch to pi-1(). This process continues until i == 0, where we use classic MB iterations.
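The nested hand-off just described might be sketched like this (a minimal sketch, not the author's code: each minibrot is assumed to come packaged with its nucleus C, period N, an approximating polynomial p(t, u), and a validity radius R for t; all of these names are placeholders):

```python
def iterate_point(c, minibrots, max_iters=100000, bailout=4.0):
    """Escape-time for c using polynomial shortcuts near each minibrot.

    minibrots: list of (C, N, p, R) sorted from the deepest minibrot up to
    (0, 1, None, None) for the top level; p(t, u) approximates f^N(t, C + u)
    and R is the radius in t inside which p is trusted.
    """
    t = 0j
    iters = 0
    for C, N, p, R in minibrots:
        if p is None:
            # i == 0: classic Mandelbrot iterations.
            z = t
            while iters < max_iters and abs(z) <= bailout:
                z = z * z + c
                iters += 1
            return iters
        u = c - C
        # One evaluation of p() replaces N iterations of f().
        while iters < max_iters and abs(t) <= R:
            t = p(t, u)
            iters += N
        # t got too large for this p(); fall through to the polynomial
        # of the next shallower minibrot in the list.
    return iters
```

The open question stated in the post, choosing each R, is exactly what this sketch leaves as an input.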

There is an issue that needs to be solved to make this method work: determining for each approximating polynomial the area (in t and u variables) where it is sufficiently accurate...

« Last Edit: July 04, 2018, 03:11:29 PM by knighty, Reason: It actually works that way! »

#### claude #### Re: Another possible way to accelerate MB set deep zooming

« Reply #1 on: September 11, 2017, 12:39:02 AM »
Very interesting idea!

For accuracy measurements, I've been wondering about this: https://en.wikipedia.org/wiki/Taylor%27s_theorem#Taylor.27s_theorem_in_complex_analysis  Estimating $M_r$ should be something like $\sum_j \left| \frac{f^{(j)}(c)}{j!} \right| r^j$, i.e., using the absolute values of the series coefficients to form a sum.  Then the truncation error is less than $M_r \frac{\beta^{k+1}}{1 - \beta}$ where $\beta$ is $\frac{|z-c|}{r}$.
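As a small numerical illustration of that bound (just Taylor's theorem as linked, not code from the thread): given the k+1 kept coefficients, a radius r, and |z-c| < r, the tail can be bounded like this:

```python
def truncation_error_bound(coeffs, r, dist):
    """Bound the Taylor tail after keeping the given coefficients.

    coeffs: the kept Taylor coefficients a_j = f^(j)(c) / j!
    r:      radius used to form M_r (series must converge on |z - c| <= r)
    dist:   |z - c|, required to satisfy dist < r
    """
    # M_r estimated from the absolute values of the series coefficients
    M_r = sum(abs(a) * r**j for j, a in enumerate(coeffs))
    beta = dist / r
    k = len(coeffs) - 1
    return M_r * beta**(k + 1) / (1.0 - beta)
```

For example, for f(z) = 1/(1-z) at c = 0 (all coefficients 1), keeping three terms with r = 0.5 gives a bound of 0.4375 at |z| = 0.25, comfortably above the true tail of about 0.021.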

I think I tried something like your idea once, in an exponential-map (mercator) renderer, but I'm not sure where that code is now.  The big problem I had was seams at the switch-over points.  Probably I didn't use polynomials of high enough degree (this was before perturbation and series approximation were well known), just a quadratic for p (the renormalisation step mentioned here: https://mathr.co.uk/blog/2016-12-24_deriving_the_size_estimate.html ).

I have some code that can generate the list of minibrots, if you need it:  https://code.mathr.co.uk/mandelbrot-numerics/blob/HEAD:/c/bin/m-describe.c

#### knighty #### Re: Another possible way to accelerate MB set deep zooming

« Reply #2 on: September 11, 2017, 11:37:04 AM »
Thanks a lot for the code.

#### quaz0r #### Re: Another possible way to accelerate MB set deep zooming

« Reply #3 on: November 02, 2017, 08:13:26 PM »
In layman's terms... what's happening here? (Anything?)

#### pauldelbrot #### Re: Another possible way to accelerate MB set deep zooming

« Reply #4 on: January 08, 2018, 04:19:44 PM »
I have something similar I've used from time to time. It seems to give good results near deep minibrots. My implementation doesn't use fancy maths, though. It just does this:

1. Start with the center and period of the minibrot at the end of a zoom sequence/video.
2. Calculate exact scale and orientation of the minibrot, using the algorithm described elsewhere.
3. For each point being iterated, first affine transform it to the minibrot's coordinate space (i.e., apply an affine map that sends the minibrot center to 0, the minibrot spike tip to -2, etc.)
4. Iterate this point until a bailout radius is hit, often very large (e.g. 10^73).
5. At this point, transform the point by the inverse of the aforementioned affine transform, and also multiply the iteration count by the minibrot period.
6. Continue iterating until the "normal" bailout radius is hit.
7. Test for the point to be trapped during both phases. If the point is found to be trapped in the first phase, BIG iteration savings.
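The seven steps above might be sketched as follows (a hedged sketch, not the poster's implementation; `center`, `scale`, and `period` are assumed inputs, with `scale` a complex number encoding the minibrot's size and orientation so that the affine map is w = (c - center) / scale):

```python
def two_phase_iterations(c, center, scale, period,
                         inner_bailout=1e73, outer_bailout=1e5,
                         max_iters=10**9):
    """Escape-time near a minibrot via the two-phase scheme above."""
    # Phase 1: affine-transform c into the minibrot's coordinate space
    # and iterate there until the (very large) inner bailout is hit.
    w = (c - center) / scale
    z, n = 0j, 0
    while abs(z) <= inner_bailout:
        if n >= max_iters:
            return None       # trapped in phase 1: big iteration savings
        z = z * z + w
        n += 1
    # Phase 2: apply the inverse affine transform, scale the iteration
    # count by the minibrot's period, and continue with plain iteration.
    z = z * scale + center
    n *= period
    while abs(z) <= outer_bailout:
        if n >= max_iters:
            return None       # interior point
        z = z * z + c
        n += 1
    return n
```

With the degenerate parameters center = 0, scale = 1, period = 1 (the "minibrot" being the whole set), the two phases reduce to ordinary iteration with an unusually large first bailout.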

The tricky bit is the bailout in step 4. If it's too small (much below about 100), the curves of equal smoothed-iteration are not close enough to exactly circular at large distances, and you get seams. If it's too large, you get seams or worse problems because, in essence, it samples points from outside the period-doubling cascade around the minibrot (or from inside it, but where it's not yet sufficiently close to perfectly rotationally symmetric).

The best bailout radius to use in step 4 is the power of ten equal to whichever of the following is the farthest zoomed in:

1. how much bigger than the minibrot an image frame around the last off-center zoom on the way to that minibrot is;
2. the square of the size (relative to the minibrot) of the first exactly two-fold rotationally symmetric image of the "home stretch" after that point;
3. if the minibrot is at the center of an embedded Julia, the largest image that is inside the central "node" of that Julia.

I'd guess that the size of the atom domain for the mini might be a good guide -- in the embedded Julia case that domain tends to correspond closely to said central "node".

If the radius is too large in the off-center/2-fold-symmetric case, you get seams; too large in the embedded Julia case, you get Mandelbrot set dendrites or even minibrots in place of embedded Julia structures, because it renders chunks of the minibrot in whose "field" the Julia was found in lieu of that Julia.

The other failure mode is that shapes can be wrong if they are being shape-stacked, or are embedded Julias. The first case happens if e.g. you take a square shape, zoom repeatedly at one corner of the square, then zoom to a period-doubled field about a mini to find a bow tie shape, then zoom into that deeper. If the bailout disk around the central mini is in the center of the bow tie you're fine, but if it includes the ends of the bow tie, the secondary bow ties in the centers of the ends will be squares instead of bow ties, and recursive similar errors will occur at greater depths in the sequence. This is only a concern if the secondary bow ties in the centers of the ends are big enough to be visible in an image that shows the whole bow tie. With a shallow square this is a concern, but with a deep square it can't happen as the next level of bow ties will be smaller than a pixel at any sane rendering resolution. The analogous failure will happen from stretching and conjoining other shapes than squares; I used square->bow tie as just an example.

Where the "true" minibrot has embedded Julias in the field around it if you zoom deeper into twofold areas near the mini, with this method you will get copies of that minibrot itself in place of the Julias. So a) the technique can't be used at all around a mini shallow enough that the embedded Julias around it are bigger than a pixel in an image showing the full mini; b) when zooming past the mini it can only be used down as far as when an embedded Julia associated to that mini would be close to coming into view (becoming bigger than 1/100 pixel or so). Past that point, deviations in the shapes of the dwell bands around the Julia shape vs. around a Mandelbrot shape can become macroscopic (greater than 1 pixel deviation size).

Needless to say, a mini that is noticeably asymmetrical can't be used as the target. Minis much shallower than the transition between double and arbitrary precision cannot, in general, be used. The deviation between the transformation that maps the mini onto the main set and an affine approximation (first, linear term in your Taylor series, essentially) cannot be sufficient to make distortions bigger than a pixel in size or it won't work (you'll get seams again in that case).

This method has been tested and, with the limitations noted above, works reasonably well to massively speed up rendering about deep minis, especially views where the mini's interior occupies multiple image pixels. There are multiple contributing factors. Let's consider a hypothetical example of a deep mini at 1e200 with the last off-center zoom around 1e100, from deep structure stacking rather than an embedded Julia center; its period is 4000. The image around the last off-center zoom has iterations averaging 20,000.

1. If deeper images about this center are rendered without this technique, the iterations range from 20,000 to 24,000 down to the 3/4 mark; 24,000 to 28,000 down to the 7/8 mark; in the millions once near the minibrot; and you'll need billions to render the minibrot border accurately. With this speedup, if you want the minibrot border rendered as accurately as without it with 4 billion iterations as max, the iterations done in phase 1 range from 1 up to one million, and the iterations done in phase 2 range up to about 20,000. Except right at the end of the sequence when the minibrot itself occupies a positive number of pixels, the number of first-phase iterations tends to be no more than a handful.

2. The first set of iterations can be done with doubles rather than arbitrary precision, and it is potentially the most numerous, unless you zoom far enough past the target minibrot that zooming that far past the main set would itself require arbitrary precision.

3. The second set of iterations can be done with half the bit precision (and thus 1/4 the multiply times) as to do a naive render job on a closeup of the minibrot. The precision of these iterations over the whole zoom sequence rises to this point at the halfway mark, then stops increasing instead of continuing to increase linearly over the "back half".

4. The second set can be done using perturbation theory, so the two giant speedups can be combined.

Note that item 3 alone reduces the time for the deepest images to 1/4 what it would be. Items 1 and 2, though, really help: item 1 is already giving its own fourfold speedup by the time you're at 16-fold symmetry, and the combination makes rendering a very accurate image of the deep minibrot with very crisp edges and no "black mud leaking out" take minutes (perturbation) or weeks (no perturbation) instead of days (perturbation) or years (no perturbation).

One more caveat: zooming past the target mini and then switching this off again to zoom to an even deeper mini runs into a new problem: though the shapes in each individual frame of the sequence will be accurate, there's a "drift" caused by the affine transform of mini to full set being an approximation to a more complex mapping. As a result going deep enough past the mini and then switching this speedup off will result in being "teleported", potentially, some distance through the shapes in the dwell bands around the mini. The best way to avoid this is likely to be to a) switch it off as soon as feasible, e.g. as soon as no more minibrot interior is in the frame, and b) if necessary jump the center coordinates at that same point so as to recenter on the correct structure.
« Last Edit: January 08, 2018, 04:32:14 PM by pauldelbrot »

#### knighty #### Re: Another possible way to accelerate MB set deep zooming

« Reply #5 on: January 09, 2018, 08:58:54 AM »
Thanks a lot Paul.

#### pauldelbrot #### Re: Another possible way to accelerate MB set deep zooming

« Reply #6 on: January 09, 2018, 09:41:21 AM »
Incidentally, I've tried to adapt this method to embedded Julias, without success. I can accurately compute the Julia seed that should be used -- as long as the embedded Julia is tiny compared to its distance from the influencing minibrot so that the seed can be approximated as constant across the width of the embedded Julia without introducing deviations bigger than a pixel. The seed to use is simply the center coordinates of the minibrot below the embedded Julia (at its exact core), affine-transformed into the coordinate space of the minibrot above the embedded Julia (the one without which the embedded Julia would not be there).

Unfortunately I don't know how to exactly calculate the size and orientation of the embedded Julia. The square root of the transform for its core minibrot seems to be an approximation, but not a workable one, as it may be off by several degrees of rotation and a factor up to 1.2 or so in size, for some reason. (Which branch of the square root? Doesn't matter, in theory, since the embedded Julia has 2-fold rotational symmetry.) I think I also tried A*sqrt(B/A) where B is the transform to put the core mini on the main set and A is the transform to put the (much shallower!) influencing mini on the main set, without success.

#### pauldelbrot #### Re: Another possible way to accelerate MB set deep zooming

« Reply #7 on: January 09, 2018, 09:51:30 AM »
I have a much more complex idea of how it might be done, but it would take a lot of time, work, and testing to operationalize it. The idea would be to use my non-working estimate (square root of central minibrot transform) as an initial guess of the size and orientation, and use that to derive initial guesses at two Misiurewicz points: the ones corresponding to the two fixed points in the Julia set when it's just an ordinary non-embedded Julia set. Those have period 1 there, but in the embedded version will have some much larger period. I don't know what, but it's probably some simple function of the period of either the controlling or the core mini (or of both). In any event, armed with initial guesses and the period it would be possible to use Newton-Raphson to locate the two fixed points, and those will give you a choice of exactly two affine transforms for the embedded Julia set, one of which would be sufficiently right. The choice could be determined easily as the core minibrot coordinates would be sent to (approximately) 0 only by the correct one.

In theory, with the two renormalization speedups combined plus sufficiently smart automation to detect when, and over what range of zooms, to apply them, plus layering them atop one another, it would be possible to render "most" deep Mandelbrot locations in minutes, even with magnifications as ludicrous as 10^1,000,000 and such(!) ... the main exceptions being zooms that stay near the border of the big Mandelbrot, or go to a mini and then stay near its border, for a very long time, or that visit rapidly a succession of shallow-relative-to-the-previous-one minibrots.

Of course, perhaps even that limitation would go away by replacing affine transforms with more sophisticated approximations...

#### knighty #### Re: Another possible way to accelerate MB set deep zooming

« Reply #8 on: January 10, 2018, 01:08:45 PM »
> In theory, with the two renormalization speedups combined plus sufficiently smart automation to detect when, and over what range of zooms, to apply them, plus layering them atop one another, it would be possible to render "most" deep Mandelbrot locations in minutes, even with magnifications as ludicrous as 10^1,000,000 and such(!) ...

Yes! Such a method can go way beyond SA + perturbation.

> Of course, perhaps even that limitation would go away by replacing affine transforms with more sophisticated approximations...

Yes, having a polynomial approximation of f^n(z,c) around the nucleus of a minibrot should do the job. One advantage of using the polynomial approximation is that the re-normalisation scaling is not needed. Too bad I haven't yet done any experiment that may confirm or refute that idea...

#### gerrit #### Re: Another possible way to accelerate MB set deep zooming

« Reply #9 on: January 26, 2018, 07:43:01 AM »
> The MB set formula f(z,c) = z^2 + c, when iterated for a given center c0 of a hyperbolic component with period n, obviously yields 0: f^n(0, c0) = 0. The idea is to approximate f^n(z,c) around (z = 0, c = c0) by a bivariate polynomial p(t,u) ≈ f^n(t, c0 + u). [...] There is an issue that needs to be solved to make this method work: determining for each approximating polynomial the area (in t and u variables) where it is sufficiently accurate...

Hmm, would it not make more sense to start with the lowest period p? It doesn't sound very complicated to try out.
First find your center c0 and period p. Then get the coefficients of the bivariate polynomial in z and b = c - c0 (say order K in z and M in b), thus defining fp(z,b).

Next you basically just render a fractal as normal with fractal function fp(z,b) and every iteration of fp should approximate p iterations of z^2+c.
Now if the orbit escapes somewhere at iteration n, we have to wind back to n-1, and from there on continue with normal perturbation theory calculation of z^2+c until it escapes, to determine actual escape iters. Total escape iters will be p*(n-1) + normal iters.
Interior points just don't escape and nothing extra needs to be done.
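This escape/wind-back hand-off might look like the following sketch (hypothetical names throughout; `fp` stands for the evaluated bivariate polynomial, and `fp_bailout` is whatever radius fp is trusted inside, left as a free parameter):

```python
def total_escape_iters(fp, p, c, c0, fp_bailout=0.003,
                       bailout=1e5, max_outer=10000):
    """Escape-time using the period-p polynomial fp, then plain iteration."""
    b = c - c0
    z, n = 0j, 0
    z_prev = z
    while n < max_outer and abs(z) <= fp_bailout:
        z_prev = z
        z = fp(z, b)          # one fp step stands in for p plain iterations
        n += 1
    if abs(z) <= fp_bailout:
        return None           # never left fp's validity disk: interior
    # Wind back to iteration n-1 and finish with normal z^2 + c iterations,
    # so the total is p*(n-1) + normal iters, as described above.
    z, iters = z_prev, p * (n - 1)
    while iters < 10**9 and abs(z) <= bailout:
        z = z * z + c
        iters += 1
    return iters
```

With p = 1 and fp(z, b) = z^2 + b this degenerates to plain iteration, which gives an easy correctness check.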

I guess the first step would be just to try this and see how it looks for a not too complicated mini in a rectangle.

How to control errors though? The starting point $$f^{(n-1)}_p(b)$$ for the normal PT needs to be good enough, and points that don't escape according to fp should really not escape. Maybe some probe method, picking a few points each time a group of points escape under fp iterations?

Another thing I haven't figured out is how to compute the bivariate coefficients in normal precision as the recursion formula contains c0 explicitly.
I think perhaps we have to construct fp from the PT iteration $$w \leftarrow w(w+2z_{ref})+b$$. More messy formulas but the same idea I guess.

Another option would be to just accept whatever the fp polynomial gives you, which will be an interesting fractal. We could call it the (K,M) associated fractal, and we have one for every mini in the M set.

#### gerrit #### Re: Another possible way to accelerate MB set deep zooming

« Reply #10 on: January 26, 2018, 06:29:28 PM »
I played around with it with attached results. Some math:
Nucleus center is c0, reference orbit I call $$x_n$$, period p=53 in this case, so x(p)=0.
$$z \leftarrow z^2+c$$, $$c = c_0 + b$$, $$z_n = w_n+b_n$$, so PT iters are
$$w_{n+1} = w_n(w_n+2 x_n) + b$$

Bivariate expansion in terms of scaled $$\hat{b} = b/b_{max}$$, $$b_{max} = \max(abs(b))$$.

Expansion is
$$f^{(n)}(w,b) = \sum_{i=0}^K \sum_{j=0}^M a_{ij}(n)w^i (b/b_{max})^j$$.

From that, the recursion for the coefficients a is:
Init:
$$a_{01}(1) = b_{max}$$
$$a_{20}(1) = 1$$
$$a_{10}(1) = 2x(0) = 0$$
Recursion:
$$a_{ml}(n+1) = 2x(n)a_{ml}(n)+b_{max}\delta_{m0}\delta_{l1} +\sum_{i=0}^K \sum_{j=0}^M a_{ij}(n)a_{m-i,l-j}(n)$$
with $$\delta_{ij}$$ Kronecker delta, 1 if i=j, 0 otherwise.
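A direct transcription of this recursion in plain Python (a sketch under the stated conventions: x is the reference orbit at the nucleus with x[0] = 0, products are truncated to the (K, M) window, and coefficients outside the window are treated as zero):

```python
def bivariate_coeffs(x, period, K, M, b_max):
    """Coefficients a[m][l] of f^(period)(w, b) = sum a_ml w^m (b/b_max)^l."""
    a = [[0j] * (M + 1) for _ in range(K + 1)]
    a[0][1] = b_max        # the "+ b" term, in scaled b
    a[1][0] = 2 * x[0]     # zero at a nucleus
    if K >= 2:
        a[2][0] = 1.0      # the w^2 term
    for n in range(1, period):
        new = [[0j] * (M + 1) for _ in range(K + 1)]
        for m in range(K + 1):
            for l in range(M + 1):
                # truncated 2-D convolution: coefficient of w^m b^l in (f^(n))^2
                conv = sum(a[i][j] * a[m - i][l - j]
                           for i in range(m + 1) for j in range(l + 1))
                new[m][l] = 2 * x[n] * a[m][l] + conv
        new[0][1] += b_max
        a = new
    return a
```

As a check, the period-2 nucleus c0 = -1 (orbit 0, -1) gives f^(2)(w,b) = w^4 - 2w^2 + 2w^2 b + b^2 - b with b_max = 1, which the recursion reproduces exactly when (K, M) is large enough to hold all terms.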

Using $$f^{(p)}$$ and just iterating this in the normal way gives attached results for a few (K,M) combinations I tried. Doesn't look too encouraging I think, but I could be doing something wrong (like evaluating the polynomial naively instead of some variant of Horner).

#### knighty #### Re: Another possible way to accelerate MB set deep zooming

« Reply #11 on: January 27, 2018, 05:26:03 PM »
Interesting! It is not discouraging at all. The minibrot looks like a "perturbed" MB, that is, the shape that one obtains if one begins the iteration at a z0 different from 0. Are you iterating f(n) until escaping? I think the escape radius for f(n) in this particular case should be around 0.003 (the square root of the size of the first embedded Julia set). After that, PT should finish the job.

I don't really understand your derivation. I think going through PT is not necessary... I have to check my formulas before posting them.

Unfortunately, it seems that this method needs to be used along with PT.
The process described in the OP is correct only for the case where the nuclei's periods are multiples of each other. That is when zooming into a minibrot's minibrot's minibrot... not a minibrot in the decoration of other minibrots. That last case (maybe the most interesting) leads to a lot of complications.
(Edit: noooo! It actually works! )
« Last Edit: July 04, 2018, 03:25:23 PM by knighty »

#### gerrit #### Re: Another possible way to accelerate MB set deep zooming

« Reply #12 on: January 27, 2018, 09:01:05 PM »
> Interesting! It is not discouraging at all. The minibrot looks like a "perturbed" MB, that is, the shape that one obtains if one begins the iteration at a z0 different from 0.

The distorted shape does not change with polynomial order, so most likely I have something off by 1. a is stored as a (K+1)x(M+1) array with indices 0,..,K in my math and 1,..,K+1 in MATLAB. I bet I messed that up. So maybe it's not discouraging.

> Are you iterating f(n) until escaping? I think the escape radius for f(n) in this particular case should be around 0.003 (the square root of the size of the first embedded Julia set). After that, PT should finish the job.

Escape radius is 1e5. This will be same for fp and for normal iterations (as fp is just a fast way to compute p iterations, nothing has been renormalized).

(Attached picture for M-set with z0 = 1/2. For sure I messed up somewhere....)

« Last Edit: January 27, 2018, 09:20:41 PM by gerrit »

#### knighty #### Re: Another possible way to accelerate MB set deep zooming

« Reply #13 on: January 27, 2018, 09:58:44 PM »
To check that the approximating polynomial is correct, verify that the first coefficient is 0 (or a very small value) and that the coefficients of odd powers of b are also very small (ideally 0). The first is obvious because c0 has period n; the second comes from the fact that z0 = 0 (I don't have a formal proof, just a verification on a small example).

> Escape radius is 1e5. This will be same for fp and for normal iterations (as fp is just a fast way to compute p iterations, nothing has been renormalized).

That may explain the results.
Don't forget that fp is just an approximation that is valid only for small w and b. Try to use a bailout of 0.003 (it is just a safe value) for the fp iterations, then the normal bailout for the PT iterations.

#### gerrit #### Re: Another possible way to accelerate MB set deep zooming

« Reply #14 on: January 27, 2018, 10:19:56 PM »
Found the bug, see attached for various (K,M). Iterating till 1e5, except (4,3)b which uses bailout radius = 0.003; (4,3) does as well and then continues with normal PT.
Where did you get the number 0.003 from? It seems pretty good now. But (K,M) = (2,1) (the normal renormalization approximation) seems as good as using higher order.
Let me try a warped mini next.
