Perturbation theory


Offline knighty

  • *
  • Fractal Friar
  • *
  • Posts: 140
« Reply #135 on: January 09, 2018, 09:48:12 AM »
Quote
what is the alternative? I did some tests once, and you almost always need the extra exponent, even at shallow depths. Maybe once in a while you can get away with native precision at very shallow depth and very few terms, but then you would need to implement cumbersome checks to make sure.
Even if Claude's correction method doesn't hold, why not still at least take a similar approach to his for the selection of reference points, instead of pure random? i.e., select a point with the smallest |Z|.
And for that matter, could you not also continue from the last known "good" iteration before any glitches triggered, similar to Claude's method, instead of starting over at the SA iteration? In practice, the implementation of this might be more screwing around than it's worth... then again, you can likely always reach a point where any such screwing around becomes worth it...

Claude answered better than I could :D. Yes, you can do a glitch correction that is more involved and faster, at the price of much more complicated code. Anyway, IMHO, it is possible to reduce the per-pixel computation to a minimum by squeezing the SA possibilities to their limits ;) , to the point where the glitch detection and correction overhead becomes negligible (if not removed entirely).

Just a clarification about the scaling in superMB. I'm using long doubles only, which simplifies things a lot. The SA uses a polynomial:
\[ P_{sa}(t) = \sum_i a_i\, t^i \]
When iterating, the coefficients grow (maybe) exponentially. The idea is to use a scaled polynomial instead, in the scaled variable \( T = scl \cdot t \):

\[ SP(T) = SP(scl \cdot t) = P_{sa}(t) = \sum_i a_i\, t^i = \sum_i (a_i\, scl^{-i})(scl^{i}\, t^{i}) \]
\[ SP(T) = \sum_i (a_i\, scl^{-i})\, T^i = \sum_i b_i\, T^i \]
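
A minimal sketch of that rescaling (hypothetical names and layout, not superMB's actual code): the stored coefficients become \( b_i = a_i\, scl^{-i} \), and the per-pixel evaluation then uses \( T = scl \cdot t \) instead of t.

Code:
#include <complex>
#include <vector>
using cld = std::complex<long double>;

// Rescale SA coefficients in place: a_i -> b_i = a_i * scl^(-i), so that
// the polynomial is evaluated in the scaled variable T = scl * t and the
// coefficients stay within long double range.
void rescale_coeffs(std::vector<cld>& a, long double scl) {
    long double s = 1.0L;                  // holds scl^(-i)
    for (std::size_t i = 0; i < a.size(); ++i) {
        a[i] *= s;
        s /= scl;
    }
}

// Evaluate the scaled polynomial at t by Horner's rule in T = scl * t.
cld eval_scaled(const std::vector<cld>& b, cld t, long double scl) {
    cld T = t * scl;
    cld acc(0.0L);
    for (std::size_t i = b.size(); i-- > 0; )
        acc = acc * T + b[i];
    return acc;
}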

Offline gerrit

  • *
  • 3e
  • *****
  • Posts: 1047
« Reply #136 on: January 09, 2018, 06:24:44 PM »
Those dots are normal. I was talking about having some pixels wrongly drawn in a white hue, like in the picture below. It must come from the handling of the iterate when it escapes; it needs some investigation.

Offline knighty

  • *
  • Fractal Friar
  • *
  • Posts: 140
« Reply #137 on: January 09, 2018, 09:48:19 PM »
OK! That means a tolerance of 0.001 is not always good for the gerritB method.  :(
The white dots I was talking about are there just because maxiter was not sufficient. The max iteration number used for glitch-corrected pixels is not reported in the info zone.

Nice. Do you switch from SA to full precision when the SA runs out?

Yes.

Offline gerrit

  • *
  • 3e
  • *****
  • Posts: 1047
« Reply #138 on: January 09, 2018, 10:50:15 PM »
OK! That means a tolerance of 0.001 is not always good for the gerritB method.  :(
No, gerritB and PdB were OK with that tolerance, but none of the other methods were (shown in the annotated picture).
If I knew, even in principle, how to "take all errors into account" as you wrote, I'd try it even though it may not be practical.
I tried a backwards error analysis on the derivative z', leading to a similar condition for it, but in all examples I tried that error was always much smaller (by something like a factor of 1e16) than the error in the perturbed orbit.

Offline knighty

  • *
  • Fractal Friar
  • *
  • Posts: 140
« Reply #139 on: January 10, 2018, 01:15:59 PM »
PdB was OK for me too, but I got some glitched pixels with gerritB at 0.001 tolerance. The other methods need lower tolerances, usually 0.0001, but in that particular case I had to lower it to 0.00001.

If I knew, even in principle, how to "take all errors into account" as you wrote, I'd try it even though it may not be practical.
I tried a backwards error analysis on the derivative z', leading to a similar condition for it, but in all examples I tried that error was always much smaller (by something like a factor of 1e16) than the error in the perturbed orbit.

We can begin by taking into account the rounding/cancellation errors made when computing the initial value from the SA. Taking into account the truncation error is not, IMHO, a good idea, because it is a different kind of error and it may require lowering the SA tolerance to such a small value that it becomes way too conservative.

Offline gerrit

  • *
  • 3e
  • *****
  • Posts: 1047
« Reply #140 on: January 10, 2018, 06:07:44 PM »
PdB was OK for me too, but I got some glitched pixels with gerritB at 0.001 tolerance. The other methods need lower tolerances, usually 0.0001, but in that particular case I had to lower it to 0.00001.

We can begin by taking into account the rounding/cancellation errors made when computing the initial value from the SA. Taking into account the truncation error is not, IMHO, a good idea, because it is a different kind of error and it may require lowering the SA tolerance to such a small value that it becomes way too conservative.
Thanks for that insight.

Am I correct in thinking that smb can now run without SA at all by setting skip=0?

I tried the previous glitchy example with skip=0: now Kn, KnPdb, Gerrit, and GerritB all render glitch-free with the theoretical value tol=1!

The derivations of Kn/Gerrit assume no SA, so maybe they are correct after all without any parameter tweaking, and the missing factor that has not been taken into account is the additional injection of SA errors into the perturbed variables when the actual (non-SA) iterations start. That is what you wrote, if I understand correctly.

I've completely ignored SA till now, but would now like to understand the state of the art. Is part I of http://www.fractalforums.com/announcements-and-news/*continued*-superfractalthing-arbitrary-precision-mandelbrot-set-rendering-in-ja/msg91505/#msg91505 still the state of the art in SA error control? If you have any other pointers, they would be greatly appreciated.

Note added: The oldbaid "hard" one also renders fine with tol=1 for gerritB only (with SA off).
« Last Edit: January 10, 2018, 08:11:45 PM by gerrit »

Offline gerrit

  • *
  • 3e
  • *****
  • Posts: 1047
« Reply #141 on: January 11, 2018, 04:32:16 AM »
Using smb on many locations I've found that GerritB (summing all errors each step) with the theoretical tolerance \( \Delta w/\Delta w' < h \), with \( h \) the pixel radius, resulting from assuming \( |\Delta w|=|w|\epsilon \) with \( \epsilon \) the machine precision, "always" works, provided m, the number of iterations to skip, is sufficiently small (unrealistically so for practical purposes).
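
A minimal sketch of such an accumulated-error test, under my own assumptions about the propagation rule (first order: the error estimate is multiplied by \( |2z| \) each step and a fresh rounding term \( |z|\epsilon \) is added); names are illustrative, this is not smb's actual code:

Code:
#include <complex>
#include <cfloat>
using cld = std::complex<long double>;

// One perturbed Mandelbrot iteration w -> 2*Z*w + w^2 + d, together with a
// first-order running estimate of the accumulated rounding error on w and
// the derivative dw/dc (in pixel units).  A glitch is suspected when the
// error, mapped to parameter space through the derivative, exceeds the
// pixel radius h.
struct PerturbState {
    cld w;           // perturbed orbit (z = Z + w)
    cld dw;          // derivative of w with respect to c
    long double err; // accumulated rounding-error estimate on w
};

bool step_and_check(PerturbState& s, cld Z, cld d, long double h) {
    const long double eps = LDBL_EPSILON;
    cld z = Z + s.w;                         // reconstructed full iterate
    s.err = std::abs(2.0L * z) * s.err       // propagate previous error
          + std::abs(z) * eps;               // add this step's rounding error
    s.dw  = 2.0L * z * s.dw + 1.0L;          // derivative recurrence
    s.w   = 2.0L * Z * s.w + s.w * s.w + d;  // perturbed iteration
    return s.err / std::abs(s.dw) > h;       // true -> glitch suspected
}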

My thinking is that if m is not "sufficiently small", we effectively have \( \epsilon > \text{machine precision} \), and all we have to do is figure out the correct \( \epsilon \) from an SA error analysis. The default value in the latest smb (\( \epsilon = 1000 \cdot \text{machPrec} \)) could then be derived rather than being a user-tweaked parameter.

If that works, and putting everything together, we should then be able to find the optimal \( m \), balancing the cost of glitch correction with the speedup provided by the SA.

Whether that would lead to anything practical is unclear, but my main interest is to put the whole procedure on a solid theoretical foundation. There are many examples in numerical analysis where theoretically "correct" algorithms get tweaked in practice to give something of practical use, though not 100% correct.

Comments would be appreciated.


Offline claude

  • *
  • Fractal Frankfurter
  • *
  • Posts: 589
    • mathr.co.uk
« Reply #142 on: January 11, 2018, 05:34:49 AM »
Quote from: gerrit
put the whole procedure on solid theoretical foundation
this is an admirable goal, good luck with it!

regarding errors from SA:

here's knighty deriving per-pixel test for accuracy of series approximation
here's knighty deriving formula for series approximation truncation error
here's knighty showing that it works (with images)

this stuff could be what you need to initialize the per-pixel error sum?

Offline gerrit

  • *
  • 3e
  • *****
  • Posts: 1047
« Reply #143 on: January 11, 2018, 07:30:27 AM »
this is an admirable goal, good luck with it!
regarding errors from SA:

Thanks for those links, I will have fun digging into the math. I've found out that for holomorphic functions there are more accurate truncation error estimates, involving contour integrals. It may be a dead end; I suspect knighty already looked into that.

Truncation error in the SA is one thing, rounding errors in the SA coefficients another (never considered, AFAIK). Maybe if we sum them we get the right error in the perturbed orbit when we start the actual iteration (after the SA shortcut), and we get a glitch condition without the need to set some arbitrary tolerance, by simply using the current "gerrit" formula (which I'd rather call knightyV2, as all I did is slightly improve on his groundbreaking idea) with the right \( \epsilon \), which could turn out to be as simple as \( \epsilon = {\rm rounding\ error}/{\rm errGlitch} \). If (big if) that works, we then have a sound theoretical way to get it right, and then it's time to cut corners to get something close that is practical and more or less correct.
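
A minimal sketch of what that could look like, reusing the hypothetical PerturbState from the sketch a couple of posts up (illustrative names only, not smb's code): instead of starting the per-pixel error accumulator at zero, seed it with the SA truncation error plus the rounding error made while evaluating the SA polynomial.

Code:
// Seed the per-pixel error accumulator with the errors inherited from the
// SA shortcut, so the glitch test already accounts for them on the first
// regular iteration.  (Illustrative sketch only.)
PerturbState init_from_sa(cld w0, cld dw0,
                          long double sa_truncation_err,
                          long double sa_rounding_err) {
    PerturbState s;
    s.w   = w0;                                   // value from the SA shortcut
    s.dw  = dw0;                                  // its derivative
    s.err = sa_truncation_err + sa_rounding_err;  // instead of 0
    return s;
}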

Not directly related: how does one decide on the value of m = the order of the SA polynomial right now, say in KF? I haven't seen anything on that.



Offline claude

  • *
  • Fractal Frankfurter
  • *
  • Posts: 589
    • mathr.co.uk
« Reply #144 on: January 11, 2018, 09:21:49 AM »
Quote
how does one decide on the value of m=order of SA polynomial

KF bases it on the image size.  Bigger -> more terms.  Not sure if it's the total size or the number of remaining pixels.  IIRC I added a clamp to 64, at Karl's recommendation, to avoid some overskipping, but I don't think it's perfect.

Offline knighty

  • *
  • Fractal Friar
  • *
  • Posts: 140
« Reply #145 on: January 11, 2018, 01:13:49 PM »
In superMB the number of skipped iterations is slightly reduced when using higher resolutions. This is because the tests depend on the pixel size. There is, IMHO, no reason to make the number of SA terms depend on the resolution; the only criteria are the speed of computation and the error. Both are related, as the complexity and the per-iteration error magnitude are \( O(n^2) \), where n is the number of terms. The optimal number of terms is about 32 - 64 in most cases.
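
For context, a minimal sketch of where the quadratic cost comes from: for the Mandelbrot SA each reference iteration updates the coefficients of \( \Delta_{n+1} = 2 Z_n \Delta_n + \Delta_n^2 + \delta \), and the \( \Delta_n^2 \) term is a self-convolution of the series. This is an illustrative sketch of that standard recurrence, not superMB's implementation.

Code:
#include <complex>
#include <vector>
using cld = std::complex<long double>;

// One SA step for the Mandelbrot set: given the coefficients a[1..n] of
// Delta_n(delta) = sum a_i delta^i, compute those of
// Delta_{n+1} = 2*Z_n*Delta_n + Delta_n^2 + delta.
// The self-convolution makes the cost O(n^2) per reference iteration.
// (Index 0 is unused so that a[i] matches a_i; assumes n >= 1.)
void sa_step(std::vector<cld>& a, cld Z) {
    const std::size_t n = a.size() - 1;      // number of terms
    std::vector<cld> next(n + 1, cld(0.0L));
    for (std::size_t i = 1; i <= n; ++i) {
        cld conv(0.0L);
        for (std::size_t j = 1; j < i; ++j)  // coefficient i of Delta_n^2
            conv += a[j] * a[i - j];
        next[i] = 2.0L * Z * a[i] + conv;
    }
    if (n >= 1)
        next[1] += 1.0L;                     // the "+ delta" term
    a.swap(next);
}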

Thanks for those links, I will have fun digging into the math. I've found out that for holomorphic functions there are more accurate truncation error estimates, involving contour integrals. It may be a dead end; I suspect knighty already looked into that.

Truncation error in the SA is one thing, rounding errors in the SA coefficients another (never considered, AFAIK). Maybe if we sum them we get the right error in the perturbed orbit when we start the actual iteration (after the SA shortcut), and we get a glitch condition without the need to set some arbitrary tolerance, by simply using the current "gerrit" formula (which I'd rather call knightyV2, as all I did is slightly improve on his groundbreaking idea) with the right \( \epsilon \), which could turn out to be as simple as \( \epsilon = {\rm rounding\ error}/{\rm errGlitch} \). If (big if) that works, we then have a sound theoretical way to get it right, and then it's time to cut corners to get something close that is practical and more or less correct.

I never considered estimating the truncation error using contour integrals; slightly too complicated for me ;D. It would be cool to see what it gives.
The effects of truncation error are mainly smooth deformations, which are usually not very objectionable, especially when restricted to one pixel radius or less. The rounding errors are more problematic: they give noise and/or blocky glitches, and the more terms and iterations, the more likely they are to happen. There are some renderings by Claude showing that effect in this thread.

There are two kinds of rounding/cancellation errors in the SA: in the computation of the coefficients, and in the evaluation of the SA polynomial for per-pixel processing. I mean that even if there are no rounding errors in the SA coefficients, it is still possible to have them in the evaluation of the SA polynomial. It is of course possible to estimate those rounding/cancellation errors in the same way (OK, it's more complicated) as for glitch detection. An in-depth analysis of all this stuff would be a good subject for a numerical math graduate student. ;o)
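
As an illustration of the second kind (per-pixel evaluation error), a running rounding-error bound can be accumulated alongside a Horner evaluation of the SA polynomial. This is a generic, textbook-style sketch with illustrative names, not superMB's code:

Code:
#include <complex>
#include <vector>
#include <cfloat>
using cld = std::complex<long double>;

// Evaluate p(T) = sum b_i T^i by Horner's rule while keeping a simple
// first-order bound on the rounding error of the result: each multiply-add
// contributes roughly |acc| * eps, and previous error is propagated by |T|.
cld eval_with_error(const std::vector<cld>& b, cld T, long double& err_out) {
    const long double eps = LDBL_EPSILON;
    cld acc(0.0L);
    long double err = 0.0L;
    for (std::size_t i = b.size(); i-- > 0; ) {
        acc = acc * T + b[i];
        err = err * std::abs(T) + std::abs(acc) * eps;
    }
    err_out = err;
    return acc;
}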

Am I correct in thinking that smb can now run without SA at all by setting skip=0?

I tried the previous glitchy example with skip=0: now Kn, KnPdb, Gerrit, and GerritB all render glitch-free with the theoretical value tol=1!

Super hyper cool!  :horsie: :music: :horsie:

When setting skip=0, smb will report 2 iterations. The first iteration is because we actually begin at iteration 1 (z is set equal to c at initialisation). The second is because the step function of the series_approximation class is called at least once.

Using smb on many locations I've found that GerritB (summing all errors each step) with the theoretical tolerance \( \Delta w/\Delta w' < h \), with \( h \) the pixel radius, resulting from assuming \( |\Delta w|=|w|\epsilon \) with \( \epsilon \) the machine precision, "always" works, provided m, the number of iterations to skip, is sufficiently small (unrealistically so for practical purposes).
When using a small number of skipped iterations, most of the coefficients remain very small until a period is reached. I guess that one could use skip=smallest_period and a glitch detection tolerance of 1 without problems. That's just a guess... + I tried it in the "amp4glitch" location: the first period is at 257 iterations. Setting skip=257 gives a correct render with tol.=1. If skip=258, tol. has to be set to 0.01 to get a correct result.

BTW: Now superMB automatically finds the maximal number of iterations that can be skipped for the "light years away" location. (313084 with 64 coefficients)
Here is a picture of Dinkydau's "triangle tiling" location. It took about 25 min to render at 2560x1440 resolution (downsampled to 640x360).

Offline hapf

  • *
  • Fractal Friend
  • **
  • Posts: 17
« Reply #146 on: January 12, 2018, 03:12:28 PM »
BTW: Now superMB automatically finds the maximal number of iterations that can be skipped for the "light years away" location. (313084 with 64 coefficients)
You are missing a digit in 313084, no? I get 3115200 with 64 long double coeffs.

Offline knighty

  • *
  • Fractal Friar
  • *
  • Posts: 140
« Reply #147 on: January 12, 2018, 05:52:11 PM »
No! No digit is missing. The number is for the "light years away" location. See attachment.

Offline gerrit

  • *
  • 3e
  • *****
  • Posts: 1047
« Reply #148 on: January 13, 2018, 03:21:09 AM »
I never considered estimating the truncation error using contour integrals; slightly too complicated for me ;D. It would be cool to see what it gives.
I looked into contour integrals and got a strange result. I can't find my error, so let me state the result, and please let me know if, from your practical experience, it is obviously wrong. If so, I'll keep looking for my error (it's a bit hard to type in the formulas and they may be wrong anyway).

Consider the SA in a ball of radius r around the reference point, using M terms and skipping N iterations. If N is less than the minimum number of iterations to compute a ball of radius \( 2r \), the truncation error R (the error in the perturbed variable) of the SA is bounded by \( R<4/2^M \).

??
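
For what it's worth, one way a bound of this shape can arise is from Cauchy's estimates, assuming the quantity approximated by the SA stays bounded by the escape radius 2 on the larger ball of radius 2r (this is my reading of the claim, not necessarily the actual derivation):

\[ |a_i| \le \frac{2}{(2r)^i} \quad\Longrightarrow\quad |R| \le \sum_{i \ge M} |a_i|\, r^i \le 2 \sum_{i \ge M} 2^{-i} = \frac{4}{2^M}. \]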

Offline knighty

  • *
  • Fractal Friar
  • *
  • Posts: 140
« Reply #149 on: January 13, 2018, 01:57:37 PM »
If N is less than the minimum number of iterations to compute a ball of radius \( 2r \), the truncation error R (the error in the perturbed variable) of the SA is bounded by \( R<4/2^M \).
I'm not sure I understand.
It follows directly from the remainder definition that if you have a truncation error of R for a ball of radius r, the truncation error at r/2 is less than \( R/2^m \) or something like that (m is the number of terms of the TE).
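
Spelled out, with R(r) taken as the tail of the series at radius r:

\[ R\left(\tfrac{r}{2}\right) \le \sum_{i \ge m} |a_i| \left(\tfrac{r}{2}\right)^i \le 2^{-m} \sum_{i \ge m} |a_i|\, r^i = \frac{R(r)}{2^m}. \]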

If I understand correctly, the truncation error can be computed by numerical integration on the boundary of the rendered rectangle. It would indeed be a much better estimate than the one based on ball interval arithmetic. One has to be careful though: the SA polynomial is not exactly a Taylor expansion; the last term doesn't coincide.