Hello everyone.

For a long time I have been developing algorithms for realtime viewing and exploration of 3D fractals, and have finally come up with a solution that lets me fully explore various 3D fractals all the way from the outside down to the limits of double precision (from a distance of 5 units down to 5x10^-15 units, which would be the equivalent of seeing the nuclei of atoms if the whole fractal were the size of a beach ball).

Essentially, the heart of the algorithm is splitting the rendering process into two separate parts: raymarching (as normal) and point-cloud rendering.

Instead of trying to raymarch the whole image at once for each frame, the renderer raymarches a random spread of screen pixels each frame and caches the resulting color data as 3D points, which are then rendered to the screen in place of a static image. Over a short amount of time, enough points accumulate to produce a fully detailed image of the area of the fractal in front of the camera. But unlike a dynamic-resolution algorithm, you don't lose any of the calculated detail when you start moving the camera: the previously calculated points are simply rotated and translated with the camera, so they remain valid and never need to be recalculated by the slower raymarching process. To maintain a more solid infill, points are rendered larger than a single pixel but only update the depth buffer at their center, which allows for quick blocky infill while still letting the image resolve to pixel quality given enough points.
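To make the idea concrete, here is a minimal, hypothetical sketch of the caching half of the technique: raymarch a random subset of pixels per frame and append each hit as a world-space point to a growing cache. The camera model, sample counts, and the distance estimator (a plain unit sphere standing in for a real fractal DE) are all placeholder assumptions, not the author's actual implementation.

```python
import numpy as np

def distance_estimate(p):
    """Placeholder DE: distance to a unit sphere at the origin.
    A real renderer would use a fractal distance estimator here."""
    return np.linalg.norm(p) - 1.0

def raymarch(origin, direction, max_steps=128, hit_eps=1e-4, max_dist=100.0):
    """Standard sphere tracing: step by the DE until we hit or give up."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = distance_estimate(p)
        if d < hit_eps:
            return p          # a hit: this becomes a cached world-space point
        t += d
        if t > max_dist:
            break
    return None

def sample_frame(cloud, width, height, cam_pos, n_samples=256, rng=None):
    """Raymarch a random spread of pixels and append hits to the point cache."""
    rng = rng if rng is not None else np.random.default_rng()
    for _ in range(n_samples):
        # pick a random pixel and build its view ray (simple pinhole camera
        # looking down +z; a real renderer would use the full camera transform)
        px = rng.integers(0, width)
        py = rng.integers(0, height)
        d = np.array([px / width - 0.5, py / height - 0.5, 1.0])
        d /= np.linalg.norm(d)
        hit = raymarch(cam_pos, d)
        if hit is not None:
            cloud.append(hit)  # world-space, so it stays valid as the camera moves
    return cloud
```

Because the cached points are stored in world space, a camera move only changes how they are projected; none of them need to be re-raymarched, which is the key saving described above.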

The overall result is that you will see initial artifacting that makes the image look out of focus, but it quickly resolves to a reasonable quality. Then, as you move the camera, parts of the fractal you have been viewing continually gain more and more detail, while parts that were outside the viewing frustum or occluded by other parts of the fractal again show momentary artifacting as they are resolved.

The whole process is fast enough, though, that it can run CPU-only and maintain reasonable framerates with several hundred thousand points. However, I was able to get a significant improvement in framerate (or maintain a similar framerate with around a million points) and in infill speed (the speed at which the image fills in with points) by offloading the point-cloud rendering to the GPU, which freed up more of the CPU for raymarching.
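For illustration, here is a hypothetical CPU-side sketch of the splat rasterization step that a GPU point renderer would perform in a shader: each cached point (already projected to integer pixel coordinates plus a depth, which is an assumption of this sketch) is drawn as a small block for fast coverage, but the depth buffer is only tested and written at the block's center pixel.

```python
import numpy as np

def splat_points(points, colors, width, height, splat=3):
    """Rasterize projected points into color/depth buffers.
    points: iterable of (px, py, z) with integer pixel coords and depth z.
    colors: matching iterable of RGB tuples."""
    color_buf = np.zeros((height, width, 3))
    depth_buf = np.full((height, width), np.inf)
    r = splat // 2
    for (px, py, z), col in zip(points, colors):
        if not (0 <= px < width and 0 <= py < height):
            continue
        if z >= depth_buf[py, px]:
            continue                  # depth test at the center pixel only
        depth_buf[py, px] = z         # depth write at the center pixel only
        # blocky color fill around the center for quick visual infill;
        # later, nearer points can still overwrite these off-center pixels
        y0, y1 = max(py - r, 0), min(py + r + 1, height)
        x0, x1 = max(px - r, 0), min(px + r + 1, width)
        color_buf[y0:y1, x0:x1] = col
    return color_buf, depth_buf
```

Because the off-center pixels of a splat never claim the depth buffer, any subsequently cached point that projects onto them wins the depth test there, which is how the blocky infill can still converge to pixel-accurate detail.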

I don't know for sure if this is a technique that has not been done before, but I haven't been able to find references to similar algorithms.

But please let me know if this has already been done, as I don't want to just repeat something that has been seen before.

Also - the algorithm was not designed specifically to generate high-quality images. Rather, it was designed to allow full freedom to explore 3D fractals in realtime. I will still attach some screenshots and videos (if I can) of the program in action, but please note they will not be the most beautiful things on this forum.

This is more for people interested in the idea of being pioneers, diving into the nearly infinite unknown depths of the fractal universe.

Anyway, for anyone who has read this far down, thank you so much for your time. I really appreciate any comments or advice for continued improvements that can be made to the algorithm.

Sincerely,

- Luke

Linkback: https://fractalforums.org/fractal-mathematics-and-new-theories/28/algorithm-for-full-exploration-of-3d-fractal-in-realtime/2771/