Speeding up deep zooming using neural networks / deep learning?


Offline Fraktalist

« Reply #15 on: October 08, 2017, 10:11:14 PM »
I think what could also be a helpful way to train the network, or somehow extract the rules, are those deeply zoomed images of the rim of minibrots, like the attached.
You can see the whole previous zoom path in one image. All the information is there; it's basically a map of the zoom.
I don't remember what this is called, but SeryZone had a way to generate this 'horizontal' map from a minibrot coordinate.



@quaz0r: I guess most here know your... more puristic approach to rendering and the use of the perturbation method.
I guess it's obvious you won't become happy in this thread.

How about actively disproving the assumption that the perturbation method does a good job by rendering some of those deep zooms without perturbation?
Rendering the turtle could be a good shallow start.
If it looks different, let's keep talking:
Re: -1.74981156637974861487845249523959962970089619196416457244216420870576524702575061066516404249466289065618622361783274393794148672499161125792408889706689133996471007335925643684886557644449131973678914
Im: 0.00003095548158298538090929473240490752229223863679911042796515092329813429454327632794234283700950411278655998727684754217164273128650648363409559156330678484864992491733993972793601105664930252390019
Zoom: 5.29999999999E179

Offline claude

« Reply #16 on: October 09, 2017, 06:18:03 PM »
Quote
I don't remember what this is called, but SeryZone had a way to generate this 'horizontal' map from a minibrot coordinate.
This has a few names:
* exponential map
* log transform
* Mercator projection
I think the first two names are more accurate than the third, which means something different in cartography.
See: http://www.mrob.com/pub/muency/exponentialmap.html

Offline greentexas

« Reply #17 on: October 09, 2017, 10:23:23 PM »
I remember it was something like:

z = z² + log(z - [whatever your point is]).

Offline claude

« Reply #18 on: October 09, 2017, 11:07:27 PM »
If you're talking about "inflection mapping" (the name comes from the Kalles Fraktaler feature "show inflection"; my "inflector gadget" also borrows the name from there), the formula is:
\[ z \to u (z - v)^2 + v \]
where u and v are constants. v is the symmetry center point and u is the scaling (hapf said different values may be needed for full generality, perhaps even complex values could work well). inflector-gadget has u = 1; I may lift this restriction in the next version, if there is one. When doing multiple inflections, you have to apply them in reverse order.
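
For illustration, here is a minimal Python sketch of that procedure (the function names and structure are my own assumptions, not code from Kalles Fraktaler or inflector-gadget): each coordinate is passed through the stack of inflections in reverse order before the usual escape-time iteration.

Code:
# minimal sketch: apply a stack of inflections z -> u*(z - v)^2 + v,
# then run a plain Mandelbrot escape-time test on the result
def apply_inflections(z, inflections):
    # inflections are stored in the order they were added,
    # so they are applied in reverse order, as described above
    for u, v in reversed(inflections):
        z = u * (z - v) ** 2 + v
    return z

def escape_time(c, max_iter=1000, radius=2.0):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > radius:
            return n
    return max_iter

# usage: two inflections with u = 1 (as in inflector-gadget),
# v = the chosen symmetry centres (arbitrary example values)
inflections = [(1.0, -0.75 + 0.1j), (1.0, -0.1011 + 0.9563j)]
print(escape_time(apply_inflections(0.25 + 0.5j, inflections)))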

Offline greentexas

« Reply #19 on: October 09, 2017, 11:09:46 PM »
You are correct, but some variation of the formula I gave produces SeryZone's sidescroller effect. I remember testing it myself.

Offline claude

« Reply #20 on: October 09, 2017, 11:19:48 PM »
Oh, the exponential map thing. That is indeed the complex logarithm, whose inverse is the complex exponential.

\[ c \to \log(c - c_0) \]
gives you long rectangular strip coordinates from regular plane coordinates. The output has imaginary part in -pi .. +pi. Smaller concentric circles around c_0 (the center of the zoom) become vertical lines further to the left.
\[ c \to c_0 + \exp(c) \]
gives you regular plane coordinates from long rectangular strip coordinates, for input with imaginary part in -pi .. +pi.
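
A minimal Python sketch of these two maps (the names and the example values are my own assumptions): rendering an exponential-map strip amounts to evaluating the fractal at c_0 + exp(x + iy) with x over some range and y over -pi .. +pi.

Code:
import cmath

def strip_to_plane(c, c0):
    # long rectangular strip coordinate -> regular plane coordinate
    return c0 + cmath.exp(c)

def plane_to_strip(c, c0):
    # regular plane coordinate -> strip coordinate, imaginary part in -pi..pi
    return cmath.log(c - c0)

# a circle of radius r around c0 becomes the vertical line Re = log(r);
# smaller circles sit further to the left
c0 = -0.75 + 0.1j
for r in (1.0, 1e-3, 1e-6):
    print(r, plane_to_strip(c0 + r, c0).real)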

Offline Dinkydau

« Reply #21 on: October 12, 2017, 12:44:34 AM »
In order to teach a machine to zoom, any step in an algorithm that involves "looking at a render", as humans do all the time, is a step that is best eliminated.
Although I don't understand exactly what Claude is doing, this article caught my attention and may be of interest: https://mathr.co.uk/blog/2016-02-25_automated_julia_morphing.html
Claude is using those things called "external rays" to automate choosing a location to zoom in on, so this is something closer to AI than Newton-Raphson zooming.

Ignoring what external rays are (I say something about them in the last paragraph), let me focus on the logic of the implications of this technique. Given is the following:
1. Every minibrot has its own external ray, uniquely characterized by its "angle".
2. There exists an algorithm (known to Claude) to find out where the external ray of a given angle "lands", leading to the location of the associated minibrot.

As a consequence, if you know the angle of the external ray that lands at a minibrot, no visual reference is required to find its location. This also means that if there is a relation between those angles, such as "add them together for the next morphing", zooming can be done automatically. A zoom path can then be determined by a calculation involving the angles of external rays. Claude has discovered such a relationship that works for building a tree.

Borrowing this image from Claude's post as an example of some rays:

...what we need to know is a way to find the angle of any minibrot in this Julia set, and a well-defined language to express where in the Julia set we want to go, so that it's clear to the computer where we want to go. If Claude's algorithm to repeat a Julia morphing works in general, zooming could theoretically be fully automated.

When it comes to such a language, something as simple as this could work:

If we start in the center of the Julia set (let's say the angle of the external ray is known there), we can go further into or further out of the nearby spiral with two arms. Out also means moving towards the spiral with one arm. By "skip" I mean something like jumping over the whole spiral; it's not possible to traverse the spiral any other way because it's infinite. With these 3 types of moves, every location in the Julia set can be reached... except those locations that lie on one of those super thin lines, but those are inherited from the minibrot by shapestacking, so they're not as important. Let's ignore them for a while.

Inspired by regular expressions:
in = further into the large spiral
out = further out of the large spiral
skip = jump over the nearby large spiral
(action1, action2) = either action1 or action2
(action)^n = action n times
(action)* = action as many times as you like (possibly 0 times)
The initial move, whether in or out, is arbitrary; we will denote it INIT:
INIT = (in, out) = either in or out

In this language, the single steps of some of my zoom techniques (often they are repeated) can be written as follows:

Code:
evolution:
skip

tree:
INIT skip

tree (longer arms):
INIT skip^(armLength)

bent tree:
INIT in skip

x-form (like stardust4ever):
(skip)*

5-armed dragon:
skip in^(5-1) skip

for the following julia morphings we define directionCount as the number of directions in the spiral that the julia set is connected to:

the first type of tiling I discovered:
skip in^(directionCount)

triangle tiling:
(in out^(directionCount - 1))*

spider web tiling:
(in^(2*directionCount+1) skip)*

spider WEB (no straight lines) tiling:
INIT (out in^(directionCount+1) skip)*

Something like this may also give rise to a study of zooming techniques through expressions in the language. I'm not sure anything super fancy can be done, because the number of subsequent Julia morphings that can be performed is limited to something like 16 by depth and iteration count (or, simply put, render time). That's a small number. Even with perturbation, the complexity of current rendering algorithms, expressed in the number of Julia morphings performed, is double exponential - the worst common complexity listed on Wikipedia.
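
To make this concrete, here is a tiny Python sketch of the language as plain data (the helper names and the concrete values for armLength and directionCount are arbitrary choices for illustration; nothing here computes external angles, it only expands the shorthand into flat move sequences):

Code:
IN, OUT, SKIP = "in", "out", "skip"

def rep(moves, n):
    # (action)^n : repeat a move sequence n times
    return list(moves) * n

arm_length, direction_count = 3, 4
techniques = {
    "evolution":        [SKIP],
    "tree":             [IN, SKIP],                      # INIT skip, with INIT = in
    "tree (long arms)": [IN] + rep([SKIP], arm_length),  # INIT skip^armLength
    "bent tree":        [IN, IN, SKIP],                  # INIT in skip
    "first tiling":     [SKIP] + rep([IN], direction_count),
    "triangle tiling":  rep([IN] + rep([OUT], direction_count - 1), 2),  # (...)* taken twice
}

for name, moves in techniques.items():
    print(name, "->", " ".join(moves))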

All that is needed to automate at least a single Julia morphing defined by an expression in this language (= a location in the Julia set = a minibrot) is a way to figure out the corresponding angles (of the external rays landing at those minibrots). If that turns out to be impossible, time is wasted, lol. I have read a bit about external rays, so I think I have an idea what they are, though it remains a mystery how they are computed. It involves some serious mathematics. If I could compute them easily, I would try some things and see if I can find a pattern.

Here's what I have so far:
There is a map called a homeomorphism from C\M to C\D, where C is the complex plane, M is the Mandelbrot set and D is the closed unit disk (so C\D, the region strictly outside the unit circle, is an open set). In a sense the homeomorphism undoes the intricate shape of the Mandelbrot set and reduces it to just a disk.
The ray in C\D with angle w is the set of all complex numbers in C\D that have angle (or argument) w. It can be thought of as a straight line that starts perpendicular to the unit circle and extends to infinity in one direction. By definition, the external ray in M associated with the angle w is the inverse image of this ray under the homeomorphism.
An external ray in M is (I guess) a continuous/connected curve. For every rational angle, the external ray in M extends to infinity in one direction and "lands at" (approaches) a point in the other direction, which is always either a point where two bulbs (or a bulb and a cardioid) touch each other, or the cusp of a cardioid. In the latter case I say the external ray belongs to the minibrot whose cardioid it lands at.
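
For reference, the standard way this homeomorphism is written down in the literature is via the Böttcher map (this is textbook material, not something I have computed with):
\[ \Phi_M(c) = \lim_{n \to \infty} z_n^{\,1/2^n}, \qquad z_0 = c, \; z_{n+1} = z_n^2 + c, \]
with the branches of the roots chosen so that \[ \Phi_M(c)/c \to 1 \text{ as } c \to \infty. \]
This maps C\M conformally onto the outside of the closed unit disk, and the external ray of angle w (measured in turns) is
\[ R_w = \Phi_M^{-1}\left( \{ r\,e^{2\pi i w} : r > 1 \} \right). \]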

So that's a summary of my recent thoughts on how zooming could be made even more computer-assisted. I hope it's of some use.

Offline claude

« Reply #22 on: October 12, 2017, 03:11:52 PM »
Nice work on the language stuff - it's the logical conclusion of what I was fiddling around with!  Hopefully I'll have some time next month to investigate this in more depth.  One question I hope to answer then is: how could it be extended to embedded Julia sets with different numbers of spiral arms?

Quote
1. Every minibrot has its own external ray, uniquely characterized by its "angle".

Every hyperbolic component (cardioid-like or disc-like region) has 2 characteristic rays landing on its root point (the period 1 component is special: it has only 1 ray), one on each side of the join (either with the parent component, or with the filament connecting to the cusp).  These two rays and the root point divide the complex plane into two halves, one containing the component in question.  The half containing the component is called its "wake".  Nested wakes are the basis of some high-level descriptions of components, like (angled) internal addresses.  Each ray has an associated angle measured in turns (so between 0 and 1), most often expressed as a binary expansion.  Hyperbolic components' rays' angles have periodic binary expansions; Misiurewicz points' rays' angles have pre-periodic binary expansions.  Some points' rays' angles are irrational (aperiodic); the most well-known is the Feigenbaum point at the end of the period-doubling cascade of discs.

One of the most exciting facts about external angles is the "tuning" algorithm: if you know the angle of something, and the angles of a particular component, you can "translate" the something to be relative to the component.  Tuning means replacing every 0 by the lower angle and every 1 by the upper angle.  For example, the period 2 bulb has lower angle ".(01)", where the brackets mean repetition; the period 3 island has angles ".(011)" and ".(100)", and tuning then gives the lower angle of the period 6 bulb of the period 3 island as ".(011 100)".
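
As a minimal sketch of tuning (representing an angle by its repeating block as a plain string is my own simplification; it ignores preperiodic parts):

Code:
def tune(angle, lower, upper):
    # replace every 0 by the lower angle and every 1 by the upper angle
    return "".join(lower if bit == "0" else upper for bit in angle)

# period 2 bulb, lower angle .(01); period 3 island, angles .(011) and .(100)
print(tune("01", "011", "100"))   # -> "011100", i.e. .(011 100)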

Quote
2. There exists an algorithm (known to Claude) to find out where the external ray of a given angle "lands", leading to the location of the associated minibrot.

I know of 2 algorithms.

The first is "trace the external ray using Newton's method, until it gets close enough to use Newton's method to find the nucleus".  This takes O(period^2) work (possibly more, as this doesn't take into account the higher precision required for the smaller minibrots of higher periods), and it can't be parallelized (it's an inherently sequential algorithm).  See http://www.math.titech.ac.jp/~kawahira/programs/mandel-exray.pdf

The second algorithm is the "spider algorithm".  It's much more complicated, and I don't know its eventual asymptotic cost - I should test whether it is better - but I haven't even implemented it in native precision, let alone arbitrary precision.  However, it might be possible to parallelize it, and given enough processors (i.e. period-many; a GPU could help...) that might reduce the total time to O(period) (with total work O(period^2)).  See http://www.math.cornell.edu/~hubbard/SpidersFinal.pdf
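
The final Newton step of the first algorithm (finding the nucleus of a given period once the traced ray is close enough) is simple to sketch. This is a machine-precision illustration only; real deep locations need arbitrary precision (mpmath, MPFR, ...):

Code:
def nucleus(c_guess, period, steps=32):
    # Newton's method for F^period(0, c) = 0, where F(z, c) = z^2 + c
    c = c_guess
    for _ in range(steps):
        z, dz = 0j, 0j
        for _ in range(period):
            dz = 2.0 * z * dz + 1.0   # derivative of the iterate with respect to c
            z = z * z + c
        if dz == 0:
            break
        step = z / dz
        c = c - step
        if abs(step) < 1e-15:
            break
    return c

# example: the period 3 island on the real axis
print(nucleus(-1.8 + 0j, 3))   # ~ -1.7548776662466927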

Quote
As a consequence, if you know the angle of the external ray that lands at a minibrot, no visual reference is required to find its location. This also means that if there is a relation between those angles, such as "add them together for the next morphing", zooming can be done automatically. A zoom path can then be determined by a calculation involving the angles of external rays. Claude has discovered such a relationship that works for building a tree.

This is true, but the cost of tracing external rays is too high.  As I wrote in the blog post:
Quote
However tracing the external rays to a sufficient depth that Newton's method iterations can find the correct periodic nucleus is asymptotically O(p^2) for period p, and the period is more than doubled each step, so the runtime increases by a factor of more than 4 for each successive location in the sequence. This makes it far too slow to be practical - it's much quicker to do the zooming and point selection by hand/eye (the last in the sequence in the gallery took over 24 hours just to find the location on my machine, dwarfing the time needed to render the actual image).
In a follow-up post I postulated that an O(period) method (involving just Newton's method and the periods, without the precise external angles) might be just as good for some patterns:
https://mathr.co.uk/blog/2016-03-05_julia_morphing_symmetry.html
The post shows that the central pattern is the same, while the rings of decorations are different (those were generated by tracing rays).  I haven't tried implementing the Newton's-method-based-on-periods approach yet; the trouble is finding a good initial guess - maybe rendering a small image with atom-domain colouring could help there?  I suspect the algorithm will end up being O(length of language description * period of final result), which is O(period) if you keep the length fixed.  Again, something to investigate next month when I have more time...

edit: I wrote some more in a new thread, because this has nothing to do with neural networks and deep learning... https://fractalforums.org/fractal-mathematics-and-new-theories/28/towards-a-language-for-julia-morphing/493/
« Last Edit: November 02, 2017, 10:51:12 PM by claude, Reason: followup thread »

Offline hapf

« Reply #23 on: November 02, 2017, 11:55:05 AM »
Quote
@quaz0r: I guess most here know your... more puristic approach to rendering and the use of the perturbation method.
I guess it's obvious you won't become happy in this thread.
How about actively disproving the assumption that the perturbation method does a good job by rendering some of those deep zooms without perturbation?
Perturbation does not create fantasy fractals in my experience. Overskipping can create interesting fantasy fractals, though. Perturbation running out of accuracy first creates noise in the correct image and then a blob-like melting away of the true structure. Defects are rather easy to see. There is a big difference between numerical accuracy and visual accuracy; the latter also depends on the colouring algorithm.
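
For context, a minimal sketch of the perturbation iteration being discussed, with the simple "|z| much smaller than |Z|" glitch test (the tolerance and the names are illustrative, not taken from any particular renderer):

Code:
def perturbed_escape(ref_orbit, dc, max_iter, tol=1e-4, radius=2.0):
    # dz_{n+1} = 2*Z_n*dz_n + dz_n^2 + dc, pixel value z_n = Z_n + dz_n;
    # ref_orbit is one high-precision reference orbit, rounded to doubles
    dz = 0j
    for n in range(min(max_iter, len(ref_orbit))):
        Z = ref_orbit[n]
        z = Z + dz
        if abs(z) > radius:
            return n, False            # escaped normally
        if abs(z) < tol * abs(Z):
            return n, True             # accuracy lost: flag as glitched
        dz = 2.0 * Z * dz + dz * dz + dc
    return max_iter, False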

Offline quaz0r

« Reply #24 on: November 02, 2017, 08:07:45 PM »
Man, you guys need to relax.  If I'm trolling you, you'll know it.  I simply meant that, as I recall, those "animal" images were created by explicitly perturbing the standard Mandelbrot iterations.  I never implemented this myself, but I think it's something like applying a user-set constant to a particular iteration or iteration range early on to "perturb the orbit", or some such?   :yes:

Offline Byte11

« Reply #25 on: December 04, 2017, 01:11:19 AM »
With neural networks, we can't expect the network to make better guesses than we can (at least not yet), so giving the network a zoomed-out version of an image or a coordinate and telling it to calculate the next frame probably isn't going to work, because the Mandelbrot set constantly surprises even humans.

However, giving a neural network a low-resolution version of an image and telling it to interpolate the points around it might be possible. It could detect common structures in the set and approximate values for the points next to it. That would dramatically reduce computation time.
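
A minimal PyTorch sketch of that idea (every architectural choice here is an assumption for illustration, not a tested design; the network only learns whatever structures appear in its training crops):

Code:
import torch
import torch.nn as nn

class FractalUpscaler(nn.Module):
    # takes a 1-channel grid of normalised iteration counts, predicts a 2x grid
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 4, 3, padding=1),   # 4 = 2x2 sub-pixels
            nn.PixelShuffle(2),                     # rearrange into 2x resolution
        )

    def forward(self, x):
        return self.net(x)

# training would pair cheap low-resolution renders with full renders of the
# same region and minimise e.g. an L1 loss
model = FractalUpscaler()
low_res = torch.rand(8, 1, 64, 64)
high_res = torch.rand(8, 1, 128, 128)
loss = nn.functional.l1_loss(model(low_res), high_res)
loss.backward()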

Offline Fraktalist

« Reply #26 on: December 04, 2017, 10:31:10 AM »
Quote
...because the Mandelbrot set constantly surprises even humans.

If you really understand the rules of how the patterns form, the Mandelbrot set might still surprise you with its beauty, but not with the patterns you see.
See my reply here and the video link in it.

Quote
However, giving a neural network a low-resolution version of an image and telling it to interpolate the points around it might be possible. It could detect common structures in the set and approximate values for the points next to it. That would dramatically reduce computation time.
I don't think that will work. Unless the network understands the rules that create the patterns, it would just add details that look like other images it has been trained on, i.e. from a lesser zoom depth, so it would miss all the details that form at deeper zoom depths.

Offline Byte11

« Reply #27 on: December 13, 2017, 10:43:46 PM »
Quote
I don't think that will work. Unless the network understands the rules that create the patterns, it would just add details that look like other images it has been trained on, i.e. from a lesser zoom depth, so it would miss all the details that form at deeper zoom depths.
Yes, I think you'd have to "teach" the neural network about principles like perturbation theory and series approximation, and about patterns in the Mandelbrot set like minibrots. It would be really difficult, but I think it would be possible.

