So... you create "outline" polygons of the shape to a certain resolution and then use a bounds check against the polygons instead of evaluating the actual formula. Quite ingenious, even if a bit error-prone. Congrats on having some success with it!
Thanks!
I'm currently working on speeding the construction up. It takes about one whole day of computation to derive the polygons, and the outer one quite often forms small loops that I have to remove manually. I hope I can get rid of that step and completely automate the process.
...even if a bit error-prone.
What exactly do you mean by "error-prone"?
The algorithm's answers "inside" and "outside" come with a mathematical guarantee because of the underlying calculation process, cell mapping and interval arithmetic: a point in the image is actually a whole square, and its color applies to the fate of every number therein (provided my implementation is correct, of course). So there is no error in the sense that a user-entered number is falsely classified as interior or exterior.
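To make that guarantee concrete, here is a minimal sketch (my own illustration, not the implementation discussed in this thread) of how interval arithmetic evaluates z^2 + c over a whole pixel-square at once. A rigorous version would additionally need outward (directed) rounding on every operation, which is omitted here.

```cpp
// Sketch only: evaluate f(z) = z^2 + c over a complex interval box.
// The resulting box encloses the images of ALL points in the square,
// so a verified "escapes" / "stays bounded" statement applies to every
// complex number inside the pixel, not just its center.
#include <algorithm>
#include <cstdio>

struct Interval { double lo, hi; };            // closed real interval [lo, hi]

Interval add(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }
Interval sub(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }

Interval mul(Interval a, Interval b) {         // standard interval product: extremes of the corner products
    double p1 = a.lo * b.lo, p2 = a.lo * b.hi, p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    return {std::min({p1, p2, p3, p4}), std::max({p1, p2, p3, p4})};
}

struct Box { Interval re, im; };               // complex interval (one pixel-square)

// Enclosure of z^2 + c over the whole box (c given as a point for brevity).
// Note: mul(x, x) is a valid enclosure of x^2, though not the tightest one.
Box iterate(Box z, double cre, double cim) {
    Interval re = add(sub(mul(z.re, z.re), mul(z.im, z.im)), {cre, cre});
    Interval im = add(mul({2, 2}, mul(z.re, z.im)), {cim, cim});
    return {re, im};
}

int main() {
    Box pixel = {{0.70, 0.75}, {0.70, 0.75}};  // one square of the image
    Box img = iterate(pixel, -1.0, 0.0);       // encloses z^2 - 1 for every z in the square
    std::printf("re in [%g, %g], im in [%g, %g]\n",
                img.re.lo, img.re.hi, img.im.lo, img.im.hi);
}
```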
Or do you mean that there is always a (not so tiny) set of complex numbers that cannot be judged, for which the algorithm's output is "unknown"? That is unfortunately the price one has to pay for mathematical certainty in the other two cases.
Only two verdicts are certified: "outside the outer polygon" (currently I work only with connected shapes, so a single outer polygon suffices) and "inside one of the inner polygons"; every other point-check result leads to the answer "unknown", as sketched below.
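A minimal sketch of that decision rule (my own construction; the polygon data and names are assumptions, and the point-in-polygon routine is the standard even-odd test, not necessarily the one used here):

```cpp
// Three-way verdict: strictly outside the outer polygon => certified exterior,
// strictly inside one inner polygon => certified interior, otherwise unknown.
#include <vector>
#include <complex>

using Pt = std::complex<double>;
using Polygon = std::vector<Pt>;

// Standard even-odd (ray casting) point-in-polygon test.
bool pointInPolygon(const Polygon& poly, Pt p) {
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        bool crosses = (poly[i].imag() > p.imag()) != (poly[j].imag() > p.imag());
        if (crosses) {
            double x = poly[j].real() + (p.imag() - poly[j].imag()) *
                       (poly[i].real() - poly[j].real()) /
                       (poly[i].imag() - poly[j].imag());
            if (p.real() < x) inside = !inside;
        }
    }
    return inside;
}

enum class Verdict { Outside, Inside, Unknown };

Verdict classify(const Polygon& outer, const std::vector<Polygon>& inners, Pt z) {
    if (!pointInPolygon(outer, z)) return Verdict::Outside;    // certified exterior
    for (const auto& inner : inners)
        if (pointInPolygon(inner, z)) return Verdict::Inside;  // certified interior
    return Verdict::Unknown;                                   // between the polygons
}
```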
Have you checked the speed of this method vs just performing the math calculations outright?
I guess you are referring to distance estimation? Since that method uses a somewhat arbitrary parameter (a large enough escape radius) and is mathematically exact only in the limit at infinity, its result would not be certain in the Figueiredo sense: falsely classified numbers will come up - but could not be detected (and I haven't found a reference stating how large the error of distance estimation actually is).
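For reference, here is a sketch of the usual exterior distance estimate being discussed (the common textbook form, not code from this thread; constant factors vary across references). Note the user-chosen escape radius and iteration cap, which is exactly why its answer carries no per-point guarantee:

```cpp
// Heuristic exterior distance estimate for a quadratic Julia set z -> z^2 + c.
// Iterates z and its derivative until |z| exceeds an escape radius R, then
// returns |z| * ln|z| / |z'|. Both R and maxIter are arbitrary parameters.
#include <complex>
#include <cmath>

double distanceEstimate(std::complex<double> z, std::complex<double> c,
                        double R = 1e6, int maxIter = 1000) {
    std::complex<double> dz = 1.0;                  // derivative of the n-th iterate w.r.t. z
    for (int n = 0; n < maxIter; ++n) {
        if (std::abs(z) > R) {
            double r = std::abs(z);
            return r * std::log(r) / std::abs(dz);  // asymptotic estimate, no rigorous bound
        }
        dz = 2.0 * z * dz;                          // d/dz of z^2 + c, chain rule
        z = z * z + c;
    }
    return 0.0;                                     // did not escape within the cap: treated as "close/inside"
}
```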
The point test itself is quite slow when checking a large set of numbers (it took 15 minutes to check about a quarter of a million points). But that could be sped up with a better data structure - at the moment I simply use an array of polygon vertices and walk through it again and again. A tree sorted by x-coordinate or a hash table would greatly reduce that time (a bucketing sketch follows below). And, of course, parallelization is possible, since checking different numbers is independent of one another.
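As one illustration of that data-structure idea (my own sketch, not the poster's code): bucketing the polygon edges by their y-extent means a query only walks the few edges whose span the horizontal ray through the query point can actually cross, instead of the whole vertex array every time.

```cpp
// Bucketed even-odd point-in-polygon test: edges are grouped once by their
// y-range; a query inspects only the bucket containing the point's y value.
#include <vector>
#include <complex>
#include <algorithm>

using Pt = std::complex<double>;

struct Edge { Pt a, b; };

class BucketedPolygon {
    std::vector<std::vector<Edge>> buckets_;
    double ymin_, ymax_, step_;

    int index(double y) const {
        int k = static_cast<int>((y - ymin_) / step_);
        return std::clamp(k, 0, static_cast<int>(buckets_.size()) - 1);
    }

public:
    explicit BucketedPolygon(const std::vector<Pt>& poly, int nBuckets = 1024) {
        ymin_ = ymax_ = poly.front().imag();
        for (const Pt& p : poly) {
            ymin_ = std::min(ymin_, p.imag());
            ymax_ = std::max(ymax_, p.imag());
        }
        step_ = (ymax_ - ymin_) / nBuckets;
        if (step_ <= 0) step_ = 1;                   // guard against degenerate polygons
        buckets_.resize(nBuckets);
        for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
            Edge e{poly[j], poly[i]};
            int lo = index(std::min(e.a.imag(), e.b.imag()));
            int hi = index(std::max(e.a.imag(), e.b.imag()));
            for (int k = lo; k <= hi; ++k) buckets_[k].push_back(e);
        }
    }

    // Even-odd test restricted to the edges in the query point's bucket.
    bool contains(Pt p) const {
        if (p.imag() < ymin_ || p.imag() > ymax_) return false;
        bool inside = false;
        for (const Edge& e : buckets_[index(p.imag())]) {
            bool crosses = (e.b.imag() > p.imag()) != (e.a.imag() > p.imag());
            if (!crosses) continue;
            double x = e.a.real() + (p.imag() - e.a.imag()) *
                       (e.b.real() - e.a.real()) / (e.b.imag() - e.a.imag());
            if (p.real() < x) inside = !inside;
        }
        return inside;
    }
};
```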
From a practical point of view one could always use the trustworthily computed Julia set image itself: simply calculate in which pixel the complex number of interest lies and check its color (direct array access - nothing beats that!). For the images I can currently compute polygons for (2^14 x 2^14 pixels) that would only need about 64 MBytes (assuming 4 pixels per byte, i.e. 2 bits each), so not a problem at all.
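A sketch of that direct lookup (the complex-plane window, bit packing and color codes are my assumptions, not taken from this thread):

```cpp
// Map a complex number to a pixel of the verified image and read its 2-bit
// verdict, with 4 pixels packed per byte.
#include <vector>
#include <cstdint>

enum class Color : uint8_t { Unknown = 0, Outside = 1, Inside = 2 };

struct VerifiedImage {
    int width = 1 << 14, height = 1 << 14;      // 2^14 x 2^14 image
    double x0 = -2.0, y0 = -2.0, span = 4.0;    // assumed complex-plane window
    std::vector<uint8_t> packed;                // width*height/4 bytes, 2 bits per pixel

    Color lookup(double re, double im) const {
        int px = static_cast<int>((re - x0) / span * width);
        int py = static_cast<int>((im - y0) / span * height);
        if (px < 0 || px >= width || py < 0 || py >= height)
            return Color::Outside;              // outside the stored window (assumed to enclose the set)
        size_t idx = static_cast<size_t>(py) * width + px;
        uint8_t byte = packed[idx / 4];
        return static_cast<Color>((byte >> (2 * (idx % 4))) & 0x3);
    }
};
```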
But from a theoretical point of view I like the polygons better - and why take the easy road if there's an interesting detour?
