Speeding up deep zooming using neural networks / deep learning?

Offline greentexas

  • Fractal Phenom
  • Posts: 46
« on: October 03, 2017, 11:32:30 PM »
After you get some experience zooming into the Mandelbrot set, you will be able to predict what will happen if you zoom into a given location. Is it possible for a computer to reconstruct a deeply zoomed location in the Mandelbrot set from the zoomed-out views along the way, or by using Julia sets?

Offline Fraktalist

  • Administrator
  • Strange Attractor
  • Posts: 874
« Reply #1 on: October 03, 2017, 11:41:29 PM »
Funny, I was just wondering today whether it would be feasible to somehow put the rules of how the mset works (shapestacking: zoom here, always get this form) into a new set of formulas, and then calculate the correct patterns by applying those rules.
It could be a dramatic shortcut for deep zooms, like the impressive deep zooms by Dinkydau that are incredibly complex but in the end built from extremely simple patterns, just stacked Julia sets.
Just stacking a few rules would be so much easier than having to start from zero for every pixel.

I didn't think of neural networks, but that of course must be the way:
train it with countless random zooms (which it generates on the fly) to recognize the patterns and where they are found.
Then, when it's well trained, tell it to generate a very deep pattern based on that.
I am quite sure you would come very, very close to what the actual deep zoom image would look like.
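To make that concrete: here is a minimal sketch of the training-data generator such an experiment would need, rendering the same randomly chosen location at two zoom depths (Python/numpy; everything here is a hypothetical sketch, and the network plus training loop are left open):

Code:
import numpy as np

def escape_counts(center, radius, size=128, max_iter=512):
    """Escape-time image of the Mandelbrot set, centered at
    `center` with half-width `radius`."""
    y, x = np.mgrid[-1:1:size*1j, -1:1:size*1j]
    c = center + radius * (x + 1j*y)
    z = np.zeros_like(c)
    n = np.zeros(c.shape, dtype=np.int32)
    alive = np.ones(c.shape, dtype=bool)
    for _ in range(max_iter):
        z[alive] = z[alive]**2 + c[alive]
        alive &= np.abs(z) <= 2.0
        n[alive] += 1
    return n

def random_zoom_pair(rng, max_depth=8):
    """One training example: a shallow view and a 10x deeper view
    of the same randomly chosen location."""
    center = complex(rng.uniform(-2.0, 0.5), rng.uniform(-1.25, 1.25))
    depth = rng.uniform(1.0, max_depth)          # decades of zoom
    shallow = escape_counts(center, 10.0**(1.0 - depth))
    deep = escape_counts(center, 10.0**(-depth))
    return shallow, deep

rng = np.random.default_rng(1)
shallow, deep = random_zoom_pair(rng)

Note that plain double precision limits this toy generator to depths of roughly e13; deeper training data would itself need perturbation or big-number arithmetic.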

wow, someone do this, please?!

Offline hobold

  • Fractal Fanatic
  • Posts: 39
« Reply #2 on: October 04, 2017, 07:58:24 AM »
The results would not actually be zooms of the Mandelbrot set. Neural networks are not intelligent; the name is mostly clever marketing. Neural networks are very advanced statistics. One can think of them as a clever combination of three things:

1. an approximating interpolation and extrapolation of a set of data points in very high dimensional space (the "training set")
2. an extremely highly compressed lookup table of interpolated/extrapolated data in very high dimensional space (the coefficients/weights after training)
3. a reasonably reliable way to initialize said lookup table in bearably short time (the "training")

Neural networks are remarkable, and can do remarkable things. But at the end of the day, they still amount to guessing the future from past data. They can only be (statistically!) trusted for inputs in regions of interpolation ("the past"); in regions of extrapolation ("the future") there is no reason why they should be better than any other statistical guess (and certainly no guarantee).
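A toy demonstration of that divide, as a sketch of my own (scikit-learn's MLPRegressor fitted to a sine wave; any small regressor behaves similarly):

Code:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train on sin(x), sampled only inside [0, 2*pi].
X_train = rng.uniform(0.0, 2*np.pi, size=(400, 1))
y_train = np.sin(X_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

X_in = np.linspace(0.0, 2*np.pi, 50).reshape(-1, 1)       # interpolation
X_out = np.linspace(3*np.pi, 4*np.pi, 50).reshape(-1, 1)  # extrapolation

print("max error inside training range :",
      np.abs(net.predict(X_in) - np.sin(X_in).ravel()).max())
print("max error outside training range:",
      np.abs(net.predict(X_out) - np.sin(X_out).ravel()).max())
# Typically: small error inside, errors near 1 or worse outside.
# The network never learned "sine", only the sampled region.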


In the case of a "deep zooming neural network", what you would get is an automated impressionist painter. That painter would be extremely good at whipping up plausible and credible impressions of deep zoom Mandelbrot images. But that painter would only be guessing if you asked him to paint anything zoomed much more deeply than what he saw during initial training.

In a very real sense, that painter network could never surprise you like the real Mandelbrot set can. It would really only repeat itself, although from a fairly large gamut of trained works.


Don't get me wrong: such a neural network would be very interesting from an artistic point of view! But it would be no help exploring the Mandelbrot set in places "where no one has gone before".

Offline Fraktalist

  • Administrator
  • Strange Attractor
  • Posts: 874
« Reply #3 on: October 04, 2017, 11:53:02 PM »
I am not sure, hobold. I've spent so much time zooming into the mset, and I've gotten quite a good grasp of what is happening.
In a nutshell: the Mandelbrot set is 100% deterministic. The image you find at a deeper zoom depth is 100% the result of your choices along the zoom path. This is known as shapestacking: https://www.youtube.com/watch?v=Ojhgwq6t28Y
If you think this through, it means you can predict how a future image will look.
How deep you go into a valley influences how curved the spiral is; the number of certain features you pass is mirrored later in bifurcation; the number of branching tips you choose at the start will be embedded in all of your future zoom path.

So there obviously are fixed rules that you can learn. If I can do that with experience, then a neural network will be able to do the same (if not now, then in the near future). And unlike me, a neural network will be able to create a proper image.
You are right, it might not be 100% exact pixel for pixel, but it will be very, very close: close enough to "fool" anyone looking at it.
And it will generate the basic patterns and features correctly, even for zoom depths that are currently totally out of reach: e100,000 and far beyond.
You could even tell the neural network to generate a Julia set with certain, correct patterns (and maybe find the matching coordinates?).

If a computer can learn to play jazz properly, it will have an "easy" time with a mathematical object that follows simple rules.



Offline hobold

  • Fractal Fanatic
  • Posts: 39
« Reply #4 on: October 05, 2017, 09:36:52 AM »
The Mandelbrot set isn't just deterministic. It is "deterministic chaos". That may seem contradictory, because chaos is usually understood to imply unpredictability, whereas determinism is understood to imply complete and perfect predictability.

The reason why this is not a contradiction is this: "prediction" is expected to be a shortcut of sorts. The purpose of prediction is to gain knowledge with less effort than actually letting events unfold. In our case, the purpose of prediction would be to obtain a deeply zoomed Mandelbrot image, but with less effort than actually computing iterations for all pixels. Instead, we want to predict the colors of those pixels.

CLARIFICATION: I am not talking about numerical optimizations here. The perturbation method, for example, is not a prediction. Under the right conditions (which are currently not fully understood, but close enough in practice), the results of the perturbation method enjoy a mathematical guarantee to be the same as brute force calculation. Speed is gained because the perturbation method allows using the computer's highly optimized circuits for floating point arithmetic, while brute force calculation of deep zooms requires software subroutines for a less efficient "BigNum" data format.
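To illustrate that remark with a sketch (my own simplification in Python/mpmath, not anyone's production code; real implementations add glitch detection and reference rebasing, which are omitted here): only the single reference orbit needs high precision, while every other pixel iterates its small difference d in ordinary doubles.

Code:
from mpmath import mp, mpc

mp.prec = 256  # bits of precision, needed only for the reference orbit

def reference_orbit(C, max_iter):
    """High-precision orbit Z_n of the reference point C,
    stored rounded down to machine doubles."""
    Z, orbit = mpc(0), []
    for _ in range(max_iter):
        orbit.append(complex(Z))
        Z = Z*Z + C
        if abs(Z) > 2:
            break
    return orbit

def perturbed_escape_time(orbit, d0, max_iter):
    """Iterate the difference d_n = z_n - Z_n in plain doubles:
    d_{n+1} = 2*Z_n*d_n + d_n**2 + d0, where d0 = c - C."""
    d = 0j
    for n, Zn in enumerate(orbit):
        if abs(Zn + d) > 2:
            return n       # this pixel escapes at iteration n
        d = 2*Zn*d + d*d + d0
    return max_iter        # no escape within the stored orbit

C = mpc("-0.743643887037151", "0.131825904205330")  # a popular zoom location
orbit = reference_orbit(C, 2000)
print(perturbed_escape_time(orbit, 1e-12 + 0j, 2000))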

This is the heart of "deterministic chaos". Everything is fully determined, but there are no shortcuts. If you want the true result, you have to do all the math. And "all the math" is not merely the formula z*z + c; it is a potentially infinite number of iterations of that formula, which may never settle into a periodic cycle. It is determined, yes. But a table of all values would still be infinitely big, with no repetitions to exploit.


Alternatively, I have another argument for you which is a little less formal, but still abstract.

That 2nd argument is based on an informal idea of "information". The amount of information you gain does not strictly correlate with the amount of data you receive. For example, if instead of the previous sentence I had written a long string "aaaaa...." with the same number of letters, that would not have been as informative (at least I should hope so :) ).

In an informal sense, the more you are surprised by the incoming data stream, the more information you gain. When things are too predictable, they quickly become boring (and not just in the scripts of Hollywood movies).

That sets up my punchline: the only reason you are still interested in seeing ever more ever deeper zooms into the Mandelbrot set is because you are not bored. You continue being informed by those images. There are yet more surprises for you there. And that is despite your claim that you already know your way down there in the infinite details of the set's boundary. That is despite you using the most complex and most capable neural network known to us: a human brain.

Is it possible to build a specialized neural network, one that can beat humans at learning deep zoom Mandelbrot shapes? Sure, eventually, or perhaps even today. But such networks will be finite, too. They will eventually zoom deeper than their own "experience", and then the surprises will continue.

Chaos is not so easily defeated by puny mortals. Or by unnecessarily luxurious computing machinery, for that matter. ;)

Offline Fraktalist

  • Administrator
  • Strange Attractor
  • Posts: 874
« Reply #5 on: October 05, 2017, 03:13:40 PM »
hehe :)
good discussion.
I'm aware of deterministic chaos, but I do find it a hard concept to grasp, not very intuitive.

OK, I agree: prediction becomes impossible for any deep point when you're just given its two coordinates.

But what if you don't actually use coordinates as the system to 'zoom', but instead the rules that are inherent to the Mandelbrot set's patterns? Those are what we call shapestacking.

I am no longer surprised by what I see in the mset. (I am surprised by the incredible amount of dedication and time, though, that people spend creating 'new' images with stacked shapes, never seen before.)
There are strict limits on what can happen in the Mandelbrot set. Even though no one has ever zoomed infinitely deep, the basic rules stay the same no matter how deep you zoom.


So if you use these rules to 'branch-swing' from one actually possible result to the next, building on just the previous pattern instead of having to grind through billions of iterations for each pixel starting from zero, you would get the same image.

These are rules that can be learned. And I am sure that if you put them together in the right way, you could create an image from them alone.
And you could do this without needing coordinates at all.
It's kind of like using waypoints and landmarks to describe the way to a destination, instead of just giving the Google Maps coordinates: a different system that has little to do with the other.
I could write a lot to try to explain this, but I think this image from stardust4ever says it best:

The rule to add one more X is simple, but the actual zooming you would have to do would double the calculation time each time.

I am convinced there is a shortcut. I loosely grasp it, and so do most deepzoomers and everyone who deliberately uses shapestacking. If our intuition can learn this, deep learning can too.
It is not yet clearly described, but I think that deep learning is probably the best way to get there.


« Last Edit: October 05, 2017, 03:26:36 PM by Frank Fraktalist »

Offline greentexas

  • Fractal Phenom
  • Posts: 46
« Reply #6 on: October 05, 2017, 06:25:56 PM »
Maybe the computer could use automatic shapestacking, so that if you zoom towards a minibrot, the calculation time could be decreased. The computer would know how many Julias to put there.

Offline claude

  • Fractal Frankfurter
  • Posts: 613
    • mathr.co.uk
« Reply #7 on: October 05, 2017, 06:44:12 PM »
You can prototype some kinds of Julia morphing with my Inflector Gadget:
https://mathr.co.uk/blog/2017-03-21_inflector_gadget_v0.2.html
https://mathr.co.uk/blog/2017-02-13_inflector_gadget.html
Definitely not the same as deep zooming, of course, but much, much quicker.
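For anyone who wants to experiment without the gadget, here is a rough numpy sketch of the core idea as I read it from those posts: each inflection pre-composes the pixel coordinate with z -> z*z + c before the ordinary escape-time iteration. (A hedged reading on my part; the gadget's exact conventions, including the order of composition, may differ.)

Code:
import numpy as np
import matplotlib.pyplot as plt

def escape_counts(c, max_iter=256):
    """Ordinary escape-time iteration over a complex array."""
    z = np.zeros_like(c)
    n = np.zeros(c.shape, dtype=np.int32)
    alive = np.ones(c.shape, dtype=bool)
    for _ in range(max_iter):
        z[alive] = z[alive]**2 + c[alive]
        alive &= np.abs(z) <= 2.0
        n[alive] += 1
    return n

def apply_inflections(z, points):
    """Pre-compose the pixel coordinate with z -> z*z + c for each
    inflection point c (the order of composition is a guess)."""
    for c in reversed(points):
        z = z*z + c
    return z

y, x = np.mgrid[-1.5:1.5:512j, -2.0:1.0:512j]
grid = x + 1j*y
# One hypothetical inflection point, chosen only for illustration.
morphed = escape_counts(apply_inflections(grid, [-0.77 + 0.10j]))
plt.imshow(morphed, cmap="magma", origin="lower")
plt.savefig("inflection.png", dpi=150)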

Offline hobold

  • Fractal Fanatic
  • Posts: 39
« Reply #8 on: October 06, 2017, 03:35:15 AM »
Quote from: Fraktalist
I am convinced there is a shortcut. I loosely grasp it, and so do most deepzoomers and everyone who deliberately uses shapestacking. If our intuition can learn this, deep learning can too.
You are making the assumption that all deep zoom shapes can be reached by shapestacking.
You are making the assumption that shapestacking as a concept can be defined much more precisely than our intuitive notion of it.

I would not know where to start if I had to prove either assumption. But that only says something about my lack of ingenuity, not about the impossibility of such a task. :)

Still, where do the shapes come from that we are shapestacking? Is there a starting point? Are there infinitely many starting points? Do "new" starting shapes keep appearing at deeper and deeper zoom levels?

Offline v

  • Fractal Fanatic
  • Posts: 25
« Reply #9 on: October 08, 2017, 09:01:00 AM »
Quote from: Fraktalist
I am convinced there is a shortcut. I loosely grasp it, and so do most deepzoomers and everyone who deliberately uses shapestacking.

I believe it.  Having experimented with logarithm fractals, I've seen similar patterns of the same sorts of shapes multiplying/dividing.  It is also obvious in the evolution of the multibrot to higher orders, where you see a multiplication of the initial shape, a form of exponentiation (the inverse of the logarithm); intuitively, zooming in can be seen as taking a logarithm of sorts.  Super deep zooms that are orders of magnitude beyond their previous zoom levels manifest as mere integer multiples of shapes, just like the logarithm and exponentiation functions.

If anyone could put the cart before the horse, so to speak, and formalize these seemingly obvious observations, the implications could be groundbreaking.

Offline Fraktalist

  • Administrator
  • Strange Attractor
  • Posts: 874
« Reply #10 on: October 08, 2017, 01:40:54 PM »
Quote from: hobold
You are making the assumption that all deep zoom shapes can be reached by shapestacking.
You are making the assumption that shapestacking as a concept can be defined much more precisely than our intuitive notion of it.

I would not know where to start if I had to prove either assumption. But that only says something about my lack of ingenuity, not about the impossibility of such a task. :)

Still, where do the shapes come from that we are shapestacking? Is there a starting point? Are there infinitely many starting points? Do "new" starting shapes keep appearing at deeper and deeper zoom levels?

Well, basically, zooming into the Mandelbrot set is nothing but shapestacking.
It doesn't matter if you zoom randomly or deliberately go for a form: every tiny decision about where you zoom will be embedded in the future zoom path. In fact, it creates the future zoom path. You have seen this video, I guess?
Can I prove that in a scientific way? No, probably not. But I think it is possible.
Is it true? Definitely. Ask anyone who understands how shapestacking works and they will confirm it. Or even better, go try it for yourself.
There are no really new shapes deeper in that are not deliberately created by zooming in a particular way.
http://www.fractalforums.com/kalles-fraktaler-gallery/horse/
http://www.fractalforums.com/kalles-fraktaler-gallery/goat/
http://www.fractalforums.com/kalles-fraktaler-gallery/turtle/

You can make these as complex as you want, or rather: as complex as your CPU time allows.


greentexas: I don't see how automatic shapestacking would change anything; you still have to iterate each pixel completely. Once you zoom towards a minibrot, shapestacking doesn't really happen anymore; once you decide to go for the bifurcation, it's only a doubling of existing shapes.
Or do you mean one could use those extracted rules to find the coordinates of a desired pattern, so that you only have to calculate the final image instead of every step along the zoom path?


I think claude's Inflector Gadget is a first step, or one part of the ruleset: the bifurcation level.

Let me think, what are the basic rules I know...?
1st level: number of branches (matches the period of the bulbs on the main cardioid)
2nd level: repetition, lengthening a branch (by zooming towards points further away from the main bulb)
2nd level: curvature of branches/spirals (the deeper into the valley, the more curvature)
3rd level: bifurcation (a shortcut to shapestacking complex zooms; where is that thread? I think Dinkydau came up with a method): mirroring around the symmetry axis that you choose as your new midpoint in each bifurcation "area"

And then repeat the same three levels at each next minibrot level.

That's basically it. I just notice that I've never seen them written out like this before, condensed to the most basic rules. The wording can be improved, though.
Anything I missed?

Edit: this seems important enough to get a dedicated thread:
https://fractalforums.org/fractal-mathematics-and-new-theories/28/the-basic-rules-of-the-mandelbrot-set/412
« Last Edit: October 08, 2017, 02:49:05 PM by Frank Fraktalist »

Offline hobold

  • Fractal Fanatic
  • Posts: 39
« Reply #12 on: October 08, 2017, 07:38:36 PM »
Quote from: v
If anyone could put the cart before the horse, so to speak, and formalize these seemingly obvious observations, the implications could be groundbreaking.
That is the reason why I am asking my impertinent questions. An understanding of the Mandelbrot set that is detached from the low level iterations, but still formally correct and complete, would indeed break new ground.

Certainly a goal worthy of pursuing. But if we are serious about reaching that lofty destination, we should stop dreaming about it. Instead, let's identify possible roadblocks and showstoppers. Then we tackle those one by one.

Offline hobold

  • Fractal Fanatic
  • Posts: 39
« Reply #13 on: October 08, 2017, 07:52:24 PM »
Quote from: Fraktalist
Well, basically, zooming into the Mandelbrot set is nothing but shapestacking.
Let's see which basic starting shapes we can identify.

1. The overall shape, i.e. a cardioid with attached disks. This means the main 'brot and all minibrots.
2. Spiral centers. Sure, there may be infinitely many, with different integer numbers N of "arms", but all those spirals are essentially a single family with parameter N.

Together these form a countable infinity of starting shapes.

3. Embedded Julias. Some are dust, some are made of spirals, some are made of solid lumps which are not cardioids.

There are uncountably many different Julia sets, but they are usually closely related when their respective constant parameters are close.

Now here is the big question:

If we set aside the origin in the simple z*z + c, can we describe the whole gamut of 1, 2, and 3 (and 4: everything I forgot) with a finite amount of information? Because that is what we would need for an alternate generator algorithm (whether it is a neural network or something else is an irrelevant implementation detail).

Offline quaz0r

  • Fractal Phenom
  • Posts: 53
« Reply #14 on: October 08, 2017, 09:38:12 PM »
Is it worth pointing out here that Kalle's "animal" patterns are not, in fact, plain Mandelbrot images but perturbed Mandelbrot images, as I recall?

