
### Author Topic: (Question) Scaling doubles


#### unassigned

• Fractal Phenom
• Posts: 54
##### (Question) Scaling doubles
« on: April 29, 2020, 12:52:13 AM »
Hi everyone, I've been working on my own fractal renderer and am now investigating different methods for deeper zooming. One such method I have seen on these forums is scaling of doubles using an exponent. During my research I found this post from pauldelbrot:

http://www.fractalforums.com/announcements-and-news/superfractalthing-arbitrary-precision-mandelbrot-set-rendering-in-java/105/

It seems this method is similar to the floatexp library that people seem to be using, however looks to be more of a direct implementation. My question is - has this method been implemented successfully, especially with series approximation?

#### claude

• 3f
• Posts: 1830
##### Re: Scaling doubles
« Reply #1 on: April 29, 2020, 01:36:56 AM »
KF uses floatexp with renormalization of the exponent after every arithmetic operation, for zooms deeper than e9800. What Pauldelbrot describes is potentially much more efficient (renormalization only every ~500 iterations, no extra squaring, etc.), but harder to get right.

KF also uses "rescaled (long) double" at medium zooms (e300-e600, e4900-e9800), by considering an unevaluated product of two doubles, which is a lot faster than floatexp due to hardware support.  Probably the method Pauldelbrot describes is faster there too, but the details seem a bit tricky.
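The floatexp idea of keeping a mantissa near 1.0 alongside a separate wide exponent can be sketched like this (a minimal illustration with my own names, not KF's actual code; a real implementation also renormalizes after addition, handles comparisons, etc.):

```cpp
#include <cmath>
#include <cstdint>

// Minimal "floatexp"-style value: mantissa with magnitude kept in [0.5, 1),
// plus a separate wide exponent. Renormalizing after every operation keeps
// the mantissa in range, at the cost of a frexp/ldexp per op -- this is the
// per-operation overhead the scaled-double trick tries to avoid.
struct FloatExp {
    double mantissa;   // magnitude kept in [0.5, 1) by renorm()
    int64_t exponent;  // power-of-two exponent, far beyond double's ~±1023

    static FloatExp make(double x) {
        FloatExp r{x, 0};
        r.renorm();
        return r;
    }
    void renorm() {
        if (mantissa == 0.0) { exponent = 0; return; }
        int e;
        mantissa = std::frexp(mantissa, &e);
        exponent += e;
    }
    double to_double() const {
        // Collapses to 0 / inf outside double range, as expected.
        return std::ldexp(mantissa, (int)exponent);
    }
};

inline FloatExp mul(FloatExp a, FloatExp b) {
    FloatExp r{a.mantissa * b.mantissa, a.exponent + b.exponent};
    r.renorm();
    return r;
}
```

Because the exponent lives in a 64-bit integer, squaring a value like 1e-200 stays exact in the exponent even though the product is far below double's underflow limit.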

I don't know if Pauldelbrot has ever published an implementation.  MandelMachine might use similar techniques, if you can decipher the assembler.

I suppose any of these methods could be combined with series approximation without any major issues.

#### unassigned

• Fractal Phenom
• Posts: 54
##### Re: Scaling doubles
« Reply #2 on: April 29, 2020, 02:03:16 AM »
Thanks for the information. I'll give it a go.

#### unassigned

• Fractal Phenom
• Posts: 54
##### Re: Scaling doubles
« Reply #3 on: April 29, 2020, 09:00:54 AM »
I've implemented this method and have it working for at least some cases (without series approximation). The performance is really good, basically like normal double precision, but there are some bugs. I think this method is very dependent on the reference orbit; as pauldelbrot states, I use regular double precision to record the reference (it is calculated with arbitrary precision, with precision relative to the window radius). Sometimes the values recorded for the reference will hit zero (or near it). I'm not sure if this is a bug on my end, but when a zero is recorded in the double-precision reference the formula breaks down, specifically when the delta_0 value has been scaled such that it rounds to 0. Once this happens, the orbit values zero out and the same value appears across the entire image.

Edit: I think the solution to this is to keep the reference orbit in floatexp form; that should allow it to work well. The benefit of this method over plain floatexp is that rescales only happen every 100-400 iterations, and you don't need to include the squared term in the perturbation iterations until the delta is larger than ~1e-150.
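The scaled inner loop described above might look like this (a sketch under my own names, not the poster's code; the rescale-every-500 cadence and the ~1e-150 cutoff come from the thread):

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Scaled perturbation iteration: delta = S * d, with d kept near magnitude 1
// in plain doubles. Full recurrence: delta' = 2*Z*delta + delta^2 + delta0.
// Substituting delta = S*d gives  d' = 2*Z*d + S*d*d + d0  with d0 = delta0/S.
// While S is tiny (say < ~1e-150) the S*d*d term is below double noise and is
// dropped, so the inner loop is ordinary double arithmetic.
std::complex<double> iterate_scaled(
    const std::vector<std::complex<double>>& twice_ref,  // 2*Z_n in double
    std::complex<double> d0,  // delta0 / S
    double& S,                // running scale factor, updated on rescale
    int iters)
{
    std::complex<double> d = 0.0;
    for (int n = 0, k = 0; n < iters; ++n, ++k) {
        if (k == 500) {            // periodic renormalisation
            k = 0;
            double m = std::abs(d);
            if (m > 0.0) {         // fold |d| back to ~1, push scale into S
                S *= m;
                d /= m;
                d0 /= m;           // delta0/S must shrink by the same factor
            }
        }
        d = twice_ref[n] * d + d0; // squared term dropped while S is tiny
    }
    return d;                      // true delta is S * d
}
```

A real renderer would also bail out of this loop to a full (squared-term) iteration whenever S grows past the cutoff or the reference passes near zero.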
« Last Edit: April 30, 2020, 02:18:37 AM by unassigned, Reason: more investigation »

#### unassigned

• Fractal Phenom
• Posts: 54
##### Re: Scaling doubles
« Reply #4 on: April 30, 2020, 08:35:21 AM »
I've got it working.

By storing the reference orbit in both double and floatexp form, we can check in the inner loop whether the reference value is too small (e.g. within about 1e-300 of zero, near the double underflow limit). We need this check because the assumption that lets us drop the squared term breaks down when the reference orbit is near zero: there, double precision can round to 0 and cause the perturbation iterations to zero out.

If we detect this, we do a regular floatexp iteration; otherwise we continue with the approximation using doubles. There are quite a few optimisations possible in the floatexp iterations (I never actually use a floatexp class, just a tuple of a complex and the exponent). I haven't got series approximation working with it yet, but since I am already using a floatexp-like type for the iterations, it shouldn't be too hard. Glitch detection using pauldelbrot's method still seems to work as well, which is great; image attached below.

#### claude

• 3f
• Posts: 1830
##### Re: Scaling doubles
« Reply #5 on: May 01, 2020, 06:46:29 AM »
Yes, I think the assumptions of the formula break down if the reference is 0.
In those cases you need to do a full iteration with the delta^2 part included.
You can probably analyze the reference orbit up front, so the main per-pixel loop looks more like:
```c
for (int i = 0; ; ++i)
{
  int iterations_until_next_reference_zero = iterations_until_next_reference_zero_array[i];
  for (int j = 0, k = 0; j < iterations_until_next_reference_zero; ++j, ++k)
  {
    if (k == 500)
    {
      k = 0;
      // rescale delta_scaled back down to be near 1.0,
      // and rescale delta_scaled_0 by the same amount
    }
    // pauldelbrot optimization: squared term omitted away from reference zeros
    delta_scaled[n+1] = delta_scaled[n] * twice_reference_orbit[n] + delta_scaled_0;
  }
  // rescale delta_scaled to be near 1.0, and rescale delta_scaled_0 by the same amount
  // do one iteration assuming twice_reference_orbit is 0
  delta_scaled[n+1] = ldexp(delta_scaled[n] * delta_scaled[n], scaling_exponent) + delta_scaled_0;
}
```
You could probably even omit the addition of delta_scaled_0 once delta_scaled gets big enough (equivalently, once delta_scaled_0 gets small enough): if z escapes the minibrot of the reference, it probably escapes everywhere, since the interior of any other non-tuned minibrot nearby becomes vanishingly small once you zoom deep enough.

I'm not sure what happens if the scaling_exponent exceeds the range of double in the last line - probably you have to go full floatexp for one iteration.

And maybe there will be cases where the reference orbit underflows to 0 in double when it's not exactly 0, so perhaps storing only those iterations in a floatexp structure could work out OK; duplicating the whole reference orbit as floatexp seems overkill when only a few points will be problematic.
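One way to organize that sparse storage (my own layout, just an assumption about how it could be arranged; the 1e-300 cutoff is likewise an assumption near double's underflow limit):

```cpp
#include <cmath>
#include <complex>
#include <unordered_map>
#include <vector>

// Sparse floatexp storage for the reference orbit: the orbit is kept as plain
// doubles, plus a small side table holding mantissa+exponent values for just
// the iterations where the double rounds to (or near) zero.
struct MantExp {
    std::complex<double> mant;
    long exp2;  // extra power-of-two exponent beyond double range
};

struct SparseReference {
    std::vector<std::complex<double>> z;        // Z_n rounded to double
    std::unordered_map<size_t, MantExp> exact;  // only the problematic iterations

    // Record one reference value given as mant * 2^exp2 from the
    // arbitrary-precision computation upstream.
    void push(std::complex<double> mant, long exp2, double tiny = 1e-300) {
        double approx = std::abs(mant) * std::ldexp(1.0, (int)exp2);
        if (!(approx >= tiny))  // underflowed (or nearly): keep exact form too
            exact.emplace(z.size(), MantExp{mant, exp2});
        z.push_back(mant * std::ldexp(1.0, (int)exp2));
    }
    bool needs_floatexp(size_t n) const { return exact.count(n) != 0; }
};
```

The per-pixel loop can then branch on `needs_floatexp(n)` and fall back to a full floatexp iteration only at those few indices.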

#### unassigned

• Fractal Phenom
• Posts: 54
##### Re: Scaling doubles
« Reply #6 on: May 01, 2020, 07:22:29 AM »
You are right, claude. In the tests I've done, the reference orbit seems to underflow quite regularly past about 1e800.

In the reference orbit calculation I check the size of the real part: if it is less than about 1e-300, I store it as floatexp; if it is greater, I still store it as floatexp but set the exponent to zero. This assumption might not work for all cases, but with it I just check the value of the exponent to determine which type of loop to run.
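The exponent-as-flag scheme might look like this (a sketch under my own names; I test the magnitude rather than just the real part, and the 1e-300 cutoff is an assumption near double's underflow limit):

```cpp
#include <cmath>
#include <complex>

// Every reference value is held as mantissa + exponent, but values that fit
// comfortably in a double are collapsed to exponent 0, so the inner loop can
// branch on the exponent alone to pick the fast double path.
struct RefValue {
    std::complex<double> mant;
    long exp2;  // 0 => fast double path; nonzero => full floatexp iteration
};

// z = mant * 2^exp2, as produced by the arbitrary-precision reference pass.
RefValue store_reference(std::complex<double> mant, long exp2) {
    double mag = std::abs(mant) * std::ldexp(1.0, (int)exp2);
    if (mag >= 1e-300 && std::isfinite(mag))
        return { mant * std::ldexp(1.0, (int)exp2), 0 };  // collapse to double
    return { mant, exp2 };  // too small: keep the split representation
}
```

As the post notes, overloading exponent 0 as a flag is not fully general (a genuine value with exponent 0 and tiny mantissa would be misclassified as the fast path), but it keeps the hot loop branch to a single integer compare.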

I've been implementing series approximation into the calculations as well, and it's working, though not as well as I'd like. I've done some tests with this formula (up to about e15000) and so far haven't encountered any problems. The performance is alright (it can probably be optimized a lot); at the moment it's about 8 times slower than double precision in my implementation.

#### unassigned

• Fractal Phenom
• Posts: 54
##### Re: Scaling doubles
« Reply #7 on: May 03, 2020, 03:51:26 AM »
I've now optimized my code a little more and done some benchmarking, comparing a double precision version (with series approximation in double precision) to one using floatexp for series approximation and the extended algorithm for the inner loop. The performance of double vs. this method is very comparable (look at the iteration numbers). I haven't got distance estimation working yet, which could add significant overhead as it probably requires floatexp. Do the numerical methods for D.E. look at the escape iteration/radius of the pixels and do a numerical approximation (and how good do these look compared to the analytical inner-loop method)?

#### claude

• 3f
• Posts: 1830
##### Re: Scaling doubles
« Reply #8 on: May 03, 2020, 09:48:34 PM »
Quote
Are the numerical methods for D.E. looking at the escape iteration/radius of the pixels and doing a numerical approximation

Yes, they compare nearby points numerically: DE ~= 1 / (log(p) * |d(mu)/dC|) for z -> z^p + c, with mu being the smooth iteration count.

Quote
(and how good do these look compared to the analytical inner loop method)?
If you compare neighbouring pixels it's a bit coarse (this is what KF does at the moment), but you can compute additional points nearer to the pixel (e.g. 4 points at +/- 1/256 pixel spacing; I have some FragM code for this if you need it). In the limit of points getting closer, it converges to the analytic solution.
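Putting the two posts together, the numerical DE could be sketched as below (my own names; `mu` is a placeholder callback returning the smooth iteration count at a point, and the 1/256-pixel step follows the suggestion above):

```cpp
#include <cmath>

// Numerical distance estimation from the smooth iteration count mu:
// sample mu at small offsets around the pixel, estimate |d(mu)/dC| by central
// differences, and apply  DE ~= 1 / (log(p) * |d(mu)/dC|)  for z -> z^p + c.
template <typename MuFn>
double numerical_de(MuFn mu, double cx, double cy,
                    double pixel_spacing, double p = 2.0)
{
    const double h = pixel_spacing / 256.0;   // sub-pixel step
    double dmu_dx = (mu(cx + h, cy) - mu(cx - h, cy)) / (2.0 * h);
    double dmu_dy = (mu(cx, cy + h) - mu(cx, cy - h)) / (2.0 * h);
    double grad = std::hypot(dmu_dx, dmu_dy);
    if (grad == 0.0) return INFINITY;         // flat region: no boundary nearby
    return 1.0 / (std::log(p) * grad);
}
```

Shrinking `h` moves this toward the analytic answer, at the cost of four extra iterated points per pixel.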
