OpenCL Netrender: GPU/CPU

  • 6 Replies


Offline greytery

  • Fractal Freshman
  • Posts: 3
« on: May 31, 2019, 07:47:40 PM »
Coming to this a bit late, so perhaps heterogeneous OpenCL GPU/CPU netrender has already been implemented...? This is discussed/promised in GitHub issue "Netrender support for gpu rendering #529".
Agree with @zebastian: whole images rendered by each machine, but that only applies to animation jobs. For a single image there's no option but to farm out the work as per the current netrender, just using tiles instead. The current line-based work allocation is inefficient: from observation, the client gets a chunk of work, sweats its CPUs for about a second, then snoozes for about a second while the server (presumably) updates the working image and allocates the next line. That's a lot of idle time on the client(s).

There are trade-offs to be made between tile size and efficiency, which add further complexity to the UI/settings options. AKA: "Tuning Opportunities" (TO).
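As a toy illustration of that trade-off (a sketch only, nothing to do with the actual MB2 code; all names here are made up), splitting an image into tiles trades scheduling round-trips against load balance:

```python
def make_tiles(width, height, tile):
    """Split a width x height image into (x, y, w, h) tiles.

    Edge tiles are clipped, so every pixel is covered exactly once.
    Bigger tiles mean fewer scheduling round-trips but coarser load
    balancing; smaller tiles are the opposite.
    """
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

tiles = make_tiles(1920, 1080, 256)
print(len(tiles))  # 8 columns x 5 rows = 40 work units
```

At 256-pixel tiles a Full HD frame becomes 40 work units; at 64 pixels it becomes 510, which balances better across mismatched cards but costs more round-trips.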

Comparing images produced by the CPU (without OpenCL) and the GPU (with OpenCL), there are visible differences. For animations, combining such sequential frames can currently be messy. Hopefully heterogeneous CPU/GPU OpenCL netrender will produce homogeneous images! (A compromise on precision, perhaps?)

BTW, it makes sense to send <batches> of frames to an OpenCL GPU PC in order to minimise OS/network overheads. The OpenCL GPU boost is so fantastic that it's a crying shame to hobble such a thoroughbred by feeding it one frame at a time.

I have a small farm of one Win10 (server) and 9 Mint (client) PCs (i.e. Linux Mint, not 'mint condition'). However, only 6 have lowish-spec cards (GTX 750 Ti, 960, 1060). Pending the release of OpenCL GPU netrender, I had great fun hacking an old Python-based server/client job manager wot-I-wrote to farm out an example animation job (Robert Pancoast/menger-coastn) to the MB2 CLI on the 6 GPU PCs.
- Total Frames: 35,280
- Total Elapsed: 14:38 hrs***
- Actual Elapsed: 02:31 hrs
- Average frame render times <2 seconds, depending on card type.
- Batch sizes from 10 to 20 depending on card-type.
Wahey! Lots of TO fun still to be had.
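For the curious, the shape of that batch scheduling is roughly this (a hypothetical sketch with invented names, not the actual script): the server splits the frame range into batches, clients pull a batch at a time, and batching amortises the per-job network/OS overheads.

```python
from queue import Queue, Empty

def make_batches(total_frames, batch_size):
    """Split frames 1..total_frames into (first, last) batch tuples."""
    return [(s, min(s + batch_size - 1, total_frames))
            for s in range(1, total_frames + 1, batch_size)]

def client_loop(q, render_one):
    """Pull batches until the queue is empty; render frame by frame.

    render_one would invoke the MB2 CLI for a single frame; here it is
    just a callback so the scheduling logic stands alone.
    """
    while True:
        try:
            first, last = q.get_nowait()
        except Empty:
            return
        for frame in range(first, last + 1):
            render_one(frame)

q = Queue()
for batch in make_batches(35280, 20):
    q.put(batch)
print(q.qsize())  # 1764 batches of 20 frames
```

Faster cards simply come back for the next batch sooner, so mismatched GPUs balance themselves without any per-card tuning beyond the batch size.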

So - bring on OpenCL Netrender!!

***No idea how long that would take with a single PC without OpenCL - it's too horrible to think about....
Well, actually, using the 9 PCs in non-OpenCL CPU netrender mode for the same example, at 1% of frames the completion estimate shown at one point was as low(!) as 2d 2h 40m. Times 9? Ugh!


Offline buddhi

  • Fractal Friar
  • Posts: 144
    • Mandelbulber GitHub repository
« Reply #1 on: May 31, 2019, 08:45:12 PM »
NetRender for OpenCL is still not implemented. It will probably come this year.
We have a plan for two modes of OpenCL NetRender: one for animations, where complete frames will be scheduled, and one for still images, where different machines will render sets of tiles.
We are not planning to allow CPU + GPU combinations, because of differences in results. OpenCL is very inefficient on the CPU (the Portable Computing Language - pocl - project), so the only way to render on the CPU is to use the existing algorithms implemented for the CPU. For the GPU, because of strong optimizations, the algorithms cannot be exactly the same. This is the reason for the minor differences in images.
I already have an idea for how to make NetRender clients smart enough not to wait for commands from the main node. It was already done for the existing NetRender: every node is mostly independent and only synchronizes scheduler data from time to time. The client decides which image line will be rendered next, without overlapping results. Data synchronization is a background process.
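One conflict-free scheme of that kind (a sketch only, not necessarily what Mandelbulber's NetRender actually does) is a simple stride: client k of N takes every N-th line, so no per-line negotiation with the main node is needed and clients can never overlap.

```python
def lines_for_client(client_index, num_clients, height):
    """Strided line assignment: client k renders lines k, k+N, k+2N...

    The partition is fixed by (index, count) alone, so each client can
    pick its next line locally and synchronize results in the
    background, rather than waiting on the server per line.
    """
    return list(range(client_index, height, num_clients))

# Three clients covering a 9-line image, no overlaps, no gaps:
parts = [lines_for_client(k, 3, 9) for k in range(3)]
print(parts)  # [[0, 3, 6], [1, 4, 7], [2, 5, 8]]
```

The trade-off versus a central queue is load balance: a fixed stride assumes roughly equal clients, whereas pull-based scheduling adapts to slow nodes.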

Offline Tas_mania

  • Fractal Friar
  • Posts: 131
    • West Tamar Talk
« Reply #2 on: June 01, 2019, 01:23:34 AM »
Hi greytery. That's an interesting render farm you have built. Good explanation from buddhi on why there's no OpenCL CPU+GPU (heterogeneous) rendering.

Have you seen LuxCoreRender? It's an open-source Python project that can do net rendering.

To me it looks like there's a lot of duplication with netrendering - every machine must have its own OpenCL implementation. It would be great if all available OpenCL devices appeared as a single OpenCL device, with the network and clients transparent.

I'm rendering 3060 frames on a single machine with 2 GPUs at the moment, and frame render time is directly proportional to the amount of alpha (transparency) in the image.
It varies from about 30 seconds to over 8 minutes per frame.
Alpha is an easy way to speed up rendering, but it then needs other layers as backgrounds.

Offline greytery

  • Fractal Freshman
  • Posts: 3
« Reply #3 on: June 02, 2019, 11:46:11 AM »
Thanks for the heads-up @buddhi. Algorithmic differences make sense. To ensure homogeneous images on a heterogeneous network (e.g. see below), netrender will need to be aware of the client type and police work allocation to <only> GPU-based <or> CPU-based clients.
Also, it may be necessary to use the CPU option for some images because of a qualitative difference in the algorithms (I don't know), and to be able to 'force' CPU processing on an OpenCL GPU client. So, another option on the netrender UI panel to control 'Only OpenCL'..?
However, having used GPU-based rendering, I probably won't be going back to CPU-based!
Looking forward to testing/using the GNetrender.

@Tas_mania, my farm is not so much 'built' as 'evolved', and so more heterogeneous than some. It comprises Intel and AMD CPUs, a historical range of Nvidia cards, and at one time there were some old/slow ATI cards. @buddhi is better placed to take on your comment about 'a single OCL device', but to me it would appear that each client needs to have its own local set of OpenCL-compiled programs, depending on the graphics drivers on that box. Add Win/Linux/MacOS to the heterogeneous mix of drivers, and 'One OCL to Rule Them All' sounds like a stretch.

Mmm, will look at Alpha ...

Yes, I've played with LuxCoreRender but not got very far with it (yet). The farm has also been used - amongst others - for Blender, Bryce, Carrara, DAZ3D (using older Lux) and POVRay. They all have quirks and have evolved different approaches to network rendering, which is what makes it interesting. For me, MB2 is winning ATM, because of the ease with which an infinite amount of beautiful image material can be generated.

Offline hobold

  • Fractal Furball
  • Posts: 287
« Reply #4 on: June 05, 2019, 05:11:04 PM »
Quote from greytery: "The farm has also been used - amongst others - for Blender, Bryce, Carrara, DAZ3D (using older Lux) and POVRay."
This is strictly off-topic. I am a long-time PoV-Ray user and fan. I am curious, though, why someone would use it for "serious" projects. To me, PoV-Ray is a 3D sketch tool that I grew up with (and with PoV as my hammer, I turn many of my graphics problems into nails). But isn't it too slow and unwieldy for large-scale, professional productions?

Basically what I am asking is: "hey, that pov project - was it something cool that you can show around?" :)

Offline greytery

  • Fractal Freshman
  • Posts: 3
« Reply #5 on: June 06, 2019, 01:56:18 PM »
Yes, off-topic, but in a compare-and-contrast sort of way. I was/still am attracted to POVRay because of the way that shapes and images can be generated programmatically (I'm an old coder and I don't always get these new-fangled Gooey toys). Yes, pov is slow - which is why you need a farm to produce any animations of length. And all good things come to those ... But then, the same can be said of the MB2 animation workflow - without GPU, that is. It seems that POVRay is unlikely to be able to use GPU acceleration, while MB2 can be made to fly with even 'cheap' graphics cards. One thing, though: you don't seem to get as much 'noise' when producing a series of POVRay images.
POVRay has 2D Mandelbrot and Julia fractal generation but I've not explored that - and now, I'm a bit spoilt with 3D MB2.

When generating MB2 images with Primitives, the choice seems a bit limited - compared with the open-ended POVRay definition language. We are talking about completely different engines of course.  But I intend to look at combining MB2 and POVRay images in some way - the hint about Alpha above set me thinking. No idea where that will go yet.

And since you ask, for an old project see:

Offline Tas_mania

  • Fractal Friar
  • Posts: 131
    • West Tamar Talk
« Reply #6 on: June 08, 2019, 04:16:40 AM »
Transparent backgrounds in MB2 are not hard. First make a transparent PNG image the same size as your final render. Then use this image in 'Textured background' with the map type set to 'flat'.

In this way any other video stream can be composited behind an MB2 fractal series.
I usually keyframe in low-res with a 640 x 360 transparent background, then go up for the final render. Currently 1600 x 900 is the limit of my hardware.

Using this method you could have a POVRay series behind an MB2 series, with another POVRay series on top.   :)
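Under the hood, stacking those layers is just the standard alpha "over" operation. A stdlib-only sketch of the per-pixel math (any real pipeline would of course use an image library or a video editor for this):

```python
def over(fg, bg):
    """Porter-Duff 'over': composite an RGBA foreground pixel onto an
    RGBA background pixel (all components 0-255).

    This is what a compositor does per pixel when stacking a
    transparent MB2 frame over a background layer.
    """
    fr, fgreen, fb, fa = [c / 255 for c in fg]
    br, bgreen, bb, ba = [c / 255 for c in bg]
    a = fa + ba * (1 - fa)          # resulting alpha
    if a == 0:
        return (0, 0, 0, 0)         # fully transparent stays empty

    def blend(f, b):
        # premultiplied blend, then un-premultiply by the result alpha
        return (f * fa + b * ba * (1 - fa)) / a

    return tuple(round(c * 255) for c in
                 (blend(fr, br), blend(fgreen, bgreen), blend(fb, bb), a))

# Fully opaque red over opaque blue stays red; transparency lets blue through:
print(over((255, 0, 0, 255), (0, 0, 255, 255)))  # (255, 0, 0, 255)
```

With a half-transparent foreground the background shows through proportionally, which is why frames with lots of alpha render fast but then need those extra background layers.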

BTW - very cool chess game.
