"Optimization pipeline: The 0-step-approach"

(just text, result: non-improvement)

The squares that the complex plane is tiled into for the TSA are closed, so adjacent squares share an edge or at least a corner. If a gray cell lies next to a white square (including diagonally), at least the shared corner lies in the exterior, so the gray cell definitely has a path to white and can be set to "potentially white" before a single bbx is ever calculated. And since the TSA passes over (part of) the image repeatedly, setting a cell at the earliest possible stage spares it from being checked in as many subsequent loops as possible.
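The neighbor test can be sketched as a single pass over the grid. A minimal sketch in C, where the cell-color constants and the flat `grid` layout are my own assumptions for illustration, not the original code:

```c
#include <stdint.h>

/* Hypothetical cell colors; the names are illustrative only. */
enum { CELL_WHITE = 0, CELL_GRAY = 1, CELL_BLACK = 2, CELL_POT_WHITE = 3 };

/* 0-step presetting: mark every gray cell that touches a white cell
   via a shared edge or corner (8-neighborhood) as potentially white,
   before any bounding box is computed. */
void preset_potentially_white(uint8_t *grid, int w, int h)
{
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            if (grid[y * w + x] != CELL_GRAY) continue;
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (grid[ny * w + nx] == CELL_WHITE) {
                        grid[y * w + x] = CELL_POT_WHITE;
                        goto next_cell;  /* leave both neighbor loops */
                    }
                }
            }
        next_cell: ;
        }
    }
}
```

Because the scan only tests for `CELL_WHITE`, marking a cell in place cannot cascade within the same pass; potentially-white cells spread further only in the subsequent TSA loops.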

I chose 4 sets for a test: the basilica as one of the easiest sets to compute, a parabolic one (slow dynamics), the rabbit as an intermediate case, and a disconnected one, to represent the range of possible sets.

I computed all sets to level 15 in the standard way. The refinement step to level 16 was carried out once with ("preset") and once without ("plain") the 0-step-approach. Total computation time was recorded, as was the total number of bbx's calculated.

The idea of presetting sounded to me like a possible improvement, but it turned out to be none; time-wise, even the contrary:

```
                              time to compute (sec)     bbx's computed
                              plain  preset  rel%       plain          preset         rel%
basilica   c=-1+0*i             31     39    +25%       86 773 694     84 668 326     -2.4%
rabbit     c=-0.25+0.75*i       47     55    +14%       303 547 648    296 845 861    -2.2%
parabolic  c=-0.75+0*i         237    255     +7%       3 080 708 208  3 078 280 433  -0.1%
exterior   c=0.25+0.75*i        29     37    +27%       68 340 679     66 539 010     -2.6%

rel% := (preset - plain) / plain, in percent
```

All sets took considerably longer to finish, even though the number of bbx computations went down as expected.

I figure that since the number of presettable cells is proportional to the circumference of the set, whereas the number of gray cells is proportional to its area, the relative abundance of presettable squares decreases at higher refinement levels. On top of that there is the cost of one extra pass over the image: even if no bbx is computed, there is still a lot of memory transfer going on, and since I pass over the image row by row, there will be continuous cache misses.
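The circumference-versus-area argument can be made quantitative under the simplifying assumption that the boundary length grows about linearly with the resolution; the function name below is mine, purely for illustration:

```c
/* Back-of-the-envelope: at refinement level L the grid holds
   2^L x 2^L = 4^L cells (area), but only on the order of 2^L of them
   sit on the set's circumference. The presettable fraction therefore
   shrinks roughly like 2^-L (assuming a boundary whose cell count
   grows about linearly with the resolution). */
double presettable_fraction(int level)
{
    double side = (double)(1ULL << level); /* 2^L cells per row */
    return side / (side * side);           /* 2^L / 4^L = 2^-L */
}
```

At level 15 that is roughly 1 cell in 32 768, and the fraction halves with each further refinement level, which is consistent with the shrinking benefit observed above.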

Nevertheless, I find this presetting conceptually quite appealing: point-sampling computes the whole orbit ("all-step"), the TSA by Figueiredo computes a bbx ("1-step", plus backpropagation), and now there is a "0-step" that judges whole squares in the complex plane (though not to a definite color). The logical next step would then be a quantum computer that knows the answer before I have decided what to compute.

I may still follow this idea and interweave the presetting with loading the raw data, since every row is in memory at some point during loading anyway, so there would be no additional memory-transfer cost. Maybe that would lead to a (slim) speed improvement.
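A minimal sketch of that interleaving in C, assuming rows are loaded top to bottom and three rows are kept resident (the row to preset plus its two neighbors); the interface and names are hypothetical, not the actual loader:

```c
#include <stdint.h>

/* Hypothetical cell colors, as before. */
enum { CELL_WHITE = 0, CELL_GRAY = 1, CELL_BLACK = 2, CELL_POT_WHITE = 3 };

/* Presets row `cur` using its already-loaded neighbor rows `prev` and
   `next` (either may be 0 at the image border). Called right after
   `next` is read from disk, so all three rows are still cache-hot. */
void preset_row(uint8_t *cur, const uint8_t *prev, const uint8_t *next, int w)
{
    for (int x = 0; x < w; x++) {
        if (cur[x] != CELL_GRAY) continue;
        for (int dx = -1; dx <= 1; dx++) {
            int nx = x + dx;
            if (nx < 0 || nx >= w) continue;
            if ((prev && prev[nx] == CELL_WHITE) ||
                (next && next[nx] == CELL_WHITE) ||
                (dx != 0 && cur[nx] == CELL_WHITE)) {
                cur[x] = CELL_POT_WHITE;
                break;
            }
        }
    }
}
```

Presetting row y can only happen once row y+1 has been loaded, so the loader would lag one row behind the read position; the extra state is just two row pointers.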

(Note: I use the terms square/cell/tile interchangeably, sometimes also pixel, as well as the terms raw data/image/2-square.)