So while the world turns into the zombie apocalypse, I'm trying to distract myself by working on my Mandelbrot program some more.

I was reading a bit about the Nyquist sampling theorem and the sinc function for interpolating samples and such things. Usually with something like a Mandelbrot renderer, people program it so that one sample = one pixel. If you want to do better than that, you're often forced to render a large image that way and then downsample it in another program to effectively get more samples per pixel. I always thought that was a major shortcoming, so one of the first features I implemented in my own program was controls for supersampling settings: internally render the larger image, downsample it, and output the desired final image. I use a typical image processing library for this, which itself I guess implements the usual convolution kernels for such things.
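Just to make concrete what I mean by supersample-then-downsample, here's a rough Python sketch (not my actual code; `sample` is a placeholder for the real fractal computation, and the downsample here is just a plain box-filter average rather than whatever kernel the library uses):

```python
def sample(x, y):
    # hypothetical stand-in for the actual per-point Mandelbrot evaluation
    return (x * x + y * y) % 1.0

def render(width, height, factor):
    # render a factor-times-larger grid of samples
    big_w, big_h = width * factor, height * factor
    big = [[sample(px / big_w, py / big_h) for px in range(big_w)]
           for py in range(big_h)]
    # downsample: average each factor-by-factor block into one output pixel
    out = []
    for py in range(height):
        row = []
        for px in range(width):
            block = [big[py * factor + j][px * factor + i]
                     for j in range(factor) for i in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

img = render(4, 4, 3)  # a 4x4 image averaged down from 12x12 samples
```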

I was thinking, though, that it would tickle my fancy even more to not rely on image processing libraries for this, and instead implement something more directly. This is where I started reading about the Nyquist sampling theorem, but it immediately gets super technical and over my head, and it seems to mostly be discussed in terms of audio. One thing I was able to somewhat relate to was the sinc function, which I kind of recognize from material about image convolution kernels.

So if I understand this right: take, let's say, the location of the center of a pixel, around which you have a set of samples at varying distances from that location. Is the idea that you plug the (normalized, I guess) distance of each sample into the sinc function, and then use the result to scale the "value" of that sample? Then what? Take the average of all the scaled values? Anything else? I can't seem to find anything clearly showing an example like this...
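To make the question concrete, here's the scheme I'm imagining, in one dimension for simplicity (a sketch of my guess, not something I've confirmed is right): weight each sample by the sinc of its distance in pixel units, then divide the weighted sum by the sum of the weights, which I gather is one common way to normalize.

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1
    if x == 0.0:
        return 1.0
    return math.sin(math.pi * x) / (math.pi * x)

def reconstruct(center, samples, pixel_size):
    # samples: list of (position, value) pairs along one axis
    # weight each sample by sinc of its distance from the pixel center,
    # measured in units of pixel_size
    num = 0.0
    den = 0.0
    for pos, val in samples:
        w = sinc((pos - center) / pixel_size)
        num += w * val
        den += w
    return num / den  # normalize by the weight sum (one common choice?)
```

With samples on a uniform grid this at least behaves sensibly: sinc is zero at every nonzero integer, so reconstructing exactly at a sample position just returns that sample's value.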

Another thing I'm not sure about is how exactly one would scale the sinc function. For instance, the central "lobe" should correspond to what, exactly? The width of a pixel? Something more specific than that?

I gather that actually doing all that is considered computationally impractical, though I'm still interested in the theory at least. On the theory side, I think I read that an infinite sinc function would give a "perfect" reconstruction. So, if one were to render the most perfect result theoretically possible from a given set of samples, does that mean that for each pixel in the output image you would extend the sinc function out from that pixel's center to include every sample computed for the whole image, so that they all (sort of) contribute to each pixel?
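For what it's worth, here's what I picture that "every sample contributes to every pixel" version looking like in 2D, assuming samples on a uniform grid and a separable sinc (again just my sketch of the idea; the names are made up). The nested loops make the impracticality obvious: it's O(pixels × samples).

```python
import math

def sinc(x):
    # normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def ideal_reconstruct(samples, out_w, out_h, spacing):
    # samples: list of (x, y, value); every sample contributes to every
    # output pixel, weighted by a separable 2D sinc of its distance
    # (in units of the sample spacing)
    img = []
    for py in range(out_h):
        row = []
        for px in range(out_w):
            total = 0.0
            for sx, sy, v in samples:
                total += v * sinc((sx - px) / spacing) * sinc((sy - py) / spacing)
            row.append(total)
        img.append(row)
    return img
```

On a uniform grid this reproduces the sample values exactly at the sample positions, since sinc vanishes at all other integer offsets.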

Linkback: https://fractalforums.org/index.php?topic=3402.0