Neural Fractal Zoom

TGower
« on: January 11, 2019, 06:20:34 PM »
This topic started as a reply to the following:
Funny, I just wondered today whether it is feasible to somehow put the rules of how the mset works (shape stacking, zoom here, always get this form) into a new set of formulas and then calculate the correct patterns by applying those rules.
It could be a dramatic shortcut for deep zooms, like the impressive deep zooms of Dinkydau that are incredibly complex but in the end extremely simple patterns, just stacked Julia sets.
Just stacking a few rules would be so much easier than having to start from 0 for every pixel.

I didn't think of neural networks, but that of course must be the way.
Train it with countless random zooms (that it generates on the fly) to recognize the patterns and where they are found.
Then, when it's well trained, tell it to generate a very deep pattern based on them.
I am very sure that you will come very, very close to what the actual deep zoom image would look like.

wow, someone do this, please?!

I've just finished training a neural network in an attempt to do exactly this. Regretfully, the results were quite disappointing. The network converged to a point where regions of smooth color transition are preserved and zoomed in slightly, but areas of fractal detail are replaced by an odd repeating hourglass shape in many colors. This could be due to flaws in my dataset, the network architecture I chose, or simply the chaotic, hard-to-predict nature of fractal zooms. You can find some results at

The dataset I generated was formed by using XaoS to zoom in on 1000 locations on the main cardioid and 1000 locations on the period-2 disk. I used NVIDIA's open-source pix2pixHD architecture with no modifications. The code I created is here:
Code:
# Create XaoS configuration files for zoom locations

import numpy as np
import math as m

# Number of different zoom locations to use
num_points = 2000
# Zoom level defined by area of picture to render
min_area = 0.0025
starting_area = 4
zoom_steps = 2000

# Zoom locations will be generated from angles (0 to 2*pi radians) by both of the
# following functions. Note that math.cos and math.sin expect radians, not degrees.
# Thanks to Ikmitch for the boundary point algorithms
def cardioid_point(angle):
    # Main cardioid boundary in polar form, centered at 0.25
    r = (1.0 - m.cos(angle)) / 2.0
    x = r * m.cos(angle) + 0.25
    y = r * m.sin(angle)
    return x, y

def disk_point(angle):
    # The period-2 disk is a circle of radius 0.25 centered at -1
    # (reusing the cardioid radius formula here puts points off the boundary)
    r = 0.25
    x = r * m.cos(angle) - 1.0
    y = r * m.sin(angle)
    return x, y

# We will add all zoom locations to a list
locations = []
# Each angle produces two points (one per function), so we need half as many angles as desired points
for angle in np.linspace(0.0, 2.0 * m.pi, num=num_points // 2):
    locations.append(cardioid_point(angle))
    locations.append(disk_point(angle))

command_file_dir = "/home/tgower/xaos/xaf/"
pic_file_dir = "/media/tgower/5TB/frac/"
base_name = "mand"
final_zoom_width = "1.31118202E-05" # just ripped from example final window width

max_iter = 1000
images_per_zoom = 2000
frames_per_second = 30
# XaoS usleep is in microseconds: total clip length = images / fps, converted to microseconds
microsecs = int((1000000 / frames_per_second) * images_per_zoom)
location_num = 0 # Location counter for naming
command_files = [] # list to hold all generated command file names
for location in locations:

# XaoS commands:
# (angle <theta>) rotates the view by <theta> degrees
# (view <x_coord> <y_coord> <x_width> <y_width>) sets the initial view centered on (x_coord, y_coord) with plane dimensions (x_width, y_width)
# (morphview <x_coord> <y_coord> <x_width> <y_width>) sets the target view in the same format
# (usleep <microseconds>) when rendering, the "video" (sequence of images) lasts this long
# ^ This parameter, together with the framerate (set by the render command), determines the number of images generated

    command_file_path = command_file_dir + base_name + str(location_num) + ".xaf"
    command_files.append(command_file_path)
    command_file = open(command_file_path, "w")
    command_file.write("\n(formula 'mandel)")
    command_file.write("\n(maxiter " + str(max_iter) + ")")
    command_file.write("\n(outcoloring 10)")
    command_file.write("\n(incoloring 10)")
    command_file.write("\n(intcoloring 2)")
    command_file.write("\n(outtcoloring 5)")
    command_file.write("\n(view " + str(location[0]) + " " + str(location[1]) + " 1.0 1.0)")
    command_file.write("\n(morphview " + str(location[0]) + " " + str(location[1]) + " " + final_zoom_width + " " + final_zoom_width + ")")
    command_file.write("\n(usleep " + str(microsecs) + ")")
    command_file.close()
    location_num += 1

# Takes the result of XaoS batch render located at pic_file_dir and puts it in the form required for pix2pixHD

pic_file_dir = "/media/tgower/5TB/frac/"
test_A_dir = "/media/tgower/5TB/frac/test_A/"
test_B_dir = "/media/tgower/5TB/frac/test_B/"
train_A_dir = "/media/tgower/5TB/frac/train_A/"
train_B_dir = "/media/tgower/5TB/frac/train_B/"
command_file_dir = "/home/tgower/xaos/"

# lazy parallelism of firing off multiple script files
num_threads = 16

command_files = []
for thread in range(num_threads):
    command_files += [open(command_file_dir + "makeDataset" + str(thread) + ".sh", "w")]

for location in range(2, 2000):
    location_index = str(location)
    for picture in range(2, 1998):
        pic_index = str(picture)
        src_pic_path = pic_file_dir + "mand" + str(location) + pic_index.zfill(4) + ".png "
        pic_name = "mandel_" + location_index.zfill(6) + "_" + pic_index.zfill(6) + "_leftImg8bit.png"
        prev_pic_name = "mandel_" + location_index.zfill(6) + "_" + str(picture - 1).zfill(6) + "_leftImg8bit.png"
        script = command_files[picture % num_threads]
        # Copy each frame into the A (input) directory: first 1800 frames train, the rest test
        script.write("\ncp " + src_pic_path)
        if picture < 1800:
            script.write(train_A_dir + pic_name)
        else:
            script.write(test_A_dir + pic_name)
        # Each frame also serves as the B (target) image for the previous frame's pair,
        # so the network learns to predict the next, deeper frame
        script.write("\nmv " + src_pic_path)
        if picture < 1800:
            script.write(train_B_dir + prev_pic_name)
        else:
            script.write(test_B_dir + prev_pic_name)

# Close all generated shell scripts
for thread in range(num_threads):
    command_files[thread].close()
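As a quick diagnostic for the dataset issue I ran into, here is a small self-contained sketch (the helpers `escapes` and `boundary_score` are my own illustrative names, not part of the scripts above): a zoom center only yields endless detail if points arbitrarily close to it both escape and stay bounded, so sampling a small circle around a candidate center with the standard escape-time iteration gives a rough boundary test.

```python
import math

def escapes(cx, cy, max_iter=1000):
    """Standard escape-time iteration: True if c = cx + i*cy escapes |z| > 2."""
    zx, zy = 0.0, 0.0
    for _ in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return True
    return False

def boundary_score(cx, cy, radius=1e-3, samples=16, max_iter=1000):
    """Fraction of points on a small circle around (cx, cy) that escape.
    Exactly 0.0 or 1.0 means the center is well away from the set's
    boundary; anything in between means the circle straddles it."""
    escaped = 0
    for k in range(samples):
        a = 2.0 * math.pi * k / samples
        if escapes(cx + radius * math.cos(a), cy + radius * math.sin(a), max_iter):
            escaped += 1
    return escaped / samples

print(boundary_score(0.0, 0.0))  # deep inside the set -> 0.0
print(boundary_score(1.0, 0.0))  # far outside the set -> 1.0
```

A zoom that goes monochrome partway through corresponds to a center whose score saturates at 0 or 1 once the view width shrinks below its distance to the boundary.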

It is entirely possible that a better coloring scheme than the one I used, or more interesting zoom locations, would let the network produce much more impressive results. (The cardioid and disk location algorithms I used do not always give a point on the "border", so some zooms collapse to a single solid color after 1300 or so frames.) If anyone has recommendations for zoom locations, or a sizeable collection of zoom videos they'd be willing to share, I'd love to give this another shot.
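For anyone who wants to try: one simple way to get zoom centers that are guaranteed to sit on the border is to bisect along a ray between a point known to be inside the set and one known to be outside, using the standard escape-time test. This is just a sketch (`boundary_point` is a made-up helper, not from any library), and with a finite iteration cap it converges to within roughly the distance where escape times exceed that cap.

```python
import math

def escapes(cx, cy, max_iter=5000):
    """Standard escape-time iteration: True if c = cx + i*cy escapes |z| > 2."""
    zx, zy = 0.0, 0.0
    for _ in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return True
    return False

def boundary_point(angle, steps=60):
    """Bisect along the ray at `angle` radians from the origin (inside the
    set) out to radius 2 (outside it), keeping `lo` non-escaping and `hi`
    escaping. Converges to a point where escape behavior changes, i.e. a
    point on a boundary of the set as seen from this ray."""
    lo, hi = 0.0, 2.0
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        if escapes(mid * math.cos(angle), mid * math.sin(angle)):
            hi = mid
        else:
            lo = mid
    return lo * math.cos(angle), lo * math.sin(angle)

# Along the positive real axis the boundary crossing is at c = 0.25
x, y = boundary_point(0.0)
print(x, y)
```

Sweeping `angle` over [0, 2*pi) would give a whole family of genuine border centers, at the cost of one bisection (a few hundred thousand iterations at most) per location.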
