(Solved) OpenGL Core Profile


Offline claude

  • Posts: 1347
    • mathr.co.uk
« on: May 22, 2018, 06:45:39 AM »
EDIT: Mesa, now upgraded, supports the compatibility profile up to 4.5, so this is not a problem any more...

Excerpt from glxinfo with my AMD RX 580 GPU:

Code: [Select]
OpenGL core profile version string: 4.5 (Core Profile) Mesa 18.0.3
OpenGL core profile shading language version string: 4.50
OpenGL version string: 3.0 Mesa 18.0.3
OpenGL shading language version string: 1.30

What this means: I can only get OpenGL 3.0 / GLSL 130 in Fragmentarium, as higher OpenGL versions in the Mesa amdgpu driver require a "Core Profile" context, something FragM doesn't yet create (the default is a compatibility context, which Mesa only supports up to version 3.0).  And without core profile / OpenGL 4 I can't use double...

https://wiki.qt.io/How_to_use_OpenGL_Core_Profile_with_Qt has some notes it seems.  I don't know anything about coding Qt so I doubt I could fix all of this without lots of help...


Linkback: https://fractalforums.org/fragmentarium/17/opengl-core-profile/1358/
« Last Edit: August 25, 2019, 06:42:53 PM by claude, Reason: Mesa supports compatibility profile now... »

Offline claude

« Reply #1 on: May 22, 2018, 07:57:20 AM »
So I hacked on it a bit, got it creating an OpenGL 4.1 Core Profile context (this is the latest version supported on OS X, in case cross platform is needed).

But the shaders don't work as they use removed features, and the replacement (uniforms) would require invasive changes in the main FragM code...

Example:

Code: [Select]
Parse: /home/claude/opt/fragmentarium/Fragmentarium-2.0.0/Examples/Tutorials/00 - Simple 2D system.frag
Including file: /home/claude/opt/fragmentarium/Fragmentarium-2.0.0/Examples/Include/2D.frag
Camera: Click on 2D window for key focus. See Help Menu for more.
Created front and back buffers as RGBA32F
Maximum texture size: 16384x16384
Could not create vertex shader: 0:21(16): error: `gl_Vertex' undeclared
0:22(15): error: `gl_ProjectionMatrix' undeclared
0:22(35): error: `gl_Vertex' undeclared
0:22(15): error: operands to arithmetic operators must be numeric
0:22(14): error: type mismatch
0:23(13): error: `gl_ProjectionMatrix' undeclared
0:23(33): error: `gl_Vertex' undeclared
0:23(13): error: operands to arithmetic operators must be numeric
0:23(12): error: type mismatch
0:23(12): error: operands to arithmetic operators must be numeric
0:23(11): error: operands to arithmetic operators must be numeric
0:23(11): error: operands to arithmetic operators must be numeric
0:24(17): error: `gl_ProjectionMatrix' undeclared
0:24(12): error: cannot construct `vec2' from a non-numeric data type
0:24(12): error: operands to arithmetic operators must be numeric
0:24(12): error: operands to arithmetic operators must be numeric
0:24(12): error: operands to arithmetic operators must be numeric

Failed to compile script (2 ms).
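The errors above come from builtins the core profile removed: gl_Vertex becomes a user-declared vertex attribute and gl_ProjectionMatrix a user-supplied uniform. A minimal sketch of what a core-profile replacement could look like (the names position, projectionMatrix and coord are illustrative, not FragM's actual interface):

```glsl
#version 330 core

// Legacy gl_Vertex -> user-declared attribute
in vec4 position;
// Legacy gl_ProjectionMatrix -> user-supplied uniform (uploaded with glUniformMatrix4fv)
uniform mat4 projectionMatrix;
out vec2 coord;

void main() {
    gl_Position = projectionMatrix * position;
    coord = gl_Position.xy;
}
```

The host side then has to compute and upload the projection matrix itself each frame, which is the invasive part mentioned above.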

Offline 3DickUlus

  • Posts: 1488
    • Digilantism
« Reply #2 on: May 22, 2018, 05:55:54 PM »
Hmm... seems ok on nvidia... v4+. I recall setting core profile to the highest available and falling back to compat, but it's been a while since I've looked at the init code...

The nvidia shader compiler is more forgiving than AMD's: it will patch a frag and run it anyway with a warning, where AMD will fail with an error. I will look at this asap... after ipv6?

Edit: double type works fine on nV.
Fragmentarium is not a toy, it is a very versatile tool that can be used to make toys ;)

https://en.wikibooks.org/wiki/Fractals/fragmentarium

Offline 3DickUlus

« Reply #3 on: May 23, 2018, 06:02:16 AM »
FragM informs me...
Code: [Select]
NVIDIA Corporation GeForce GTX 760/PCIe/SSE2
This video card supports: OpenGL , 1.1, 1.2, 1.3, 1.4, 1.5, 2.0, 2.1, 3.0, 3.2, 3.3, 4.0, 4.1, 4.2, 4.3
Available output formats: bmp, bw, cur, dds, eps, epsf, epsi, icns, ico, jp2, jpeg, jpg, pbm, pcx, pgm, pic, png, ppm, rgb, rgba, sgi, tga, tif, tiff, wbmp, webp, xbm, xpm, exr
...not AMD I know, but FragM is capable; it's just that the frags haven't transitioned from legacy to modern, so most of them will require some editing.

The parser is compatible with dvec types and double types (note the 14 decimal places in all sliders except int).
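double in GLSL comes with version 4.00 (or the GL_ARB_gpu_shader_fp64 extension); a tiny sketch of the kind of thing it buys you for fractals (the Center/Zoom names mirror the sliders dumped later in this thread, but the function itself is illustrative):

```glsl
#version 400 compatibility

// Double-precision coordinates survive deep zooms where float runs out
// of precision (~7 significant digits vs ~16).
uniform dvec2 Center;
uniform double Zoom;

dvec2 pixelToPlane(dvec2 pixel) {
    return Center + pixel * Zoom;
}
```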

the file Experimental/DoubleTest.frag begins with...
Code: [Select]
#version 400 compatibility

#include "Complex.frag"
#include "Progressive2D-4.frag"
...and seems to work quite well here. "compatibility" just ensures that the legacy bits still work as expected. My installed version of Mesa is 11.2.2-166.1, but with the nVidia drivers installed I don't think I'm using Mesa at all.

Progressive2D-4.frag and BufferShader-4.frag have been reworked for double types and you are familiar with Complex.frag.

From glxinfo...
Code: [Select]
OpenGL core profile version string: 4.5.0 NVIDIA 384.111
OpenGL core profile shading language version string: 4.50 NVIDIA
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 384.111
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20

I think Mesa is the culprit; glxinfo does not mention Mesa anywhere in its output on my machine.
« Last Edit: May 23, 2018, 06:27:53 AM by 3DickUlus »

Offline 3DickUlus

« Reply #4 on: May 23, 2018, 06:39:17 AM »
Oh, when you run FragM from a console it will output all of the widgets it creates... some internal program vars and GL-specific vars (mislabeled):
Code: [Select]
shaderProgram
"DOUBLE"         "AAExp" "2"
"DOUBLE"         "AARange" "1"
"DOUBLE"         "B" "0.75"
"DOUBLE"         "Bailout" "45"
"DOUBLE_VEC2"    "Center" "-0.241051197052,0.38701269030571"
"DOUBLE"         "ColDiv" "10"
"INT  "          "Formula" "0"
"DOUBLE"         "G" "0.5"
"DOUBLE"         "Gamma" "2"
"BOOL "          "GaussianAA" "true"
"BOOL "          "Invert" "false"
"DOUBLE_VEC2"    "InvertC" "0,0"
"INT  "          "Iterations" "1000"
"BOOL "          "Julia" "false"
"DOUBLE_VEC2"    "JuliaXY" "-0.22222219407558,0"
"BOOL "          "Meta" "false"
"DOUBLE"         "Power" "2"
"INT  "          "PreIter" "0"
"DOUBLE"         "R" "0.25"
"DOUBLE"         "RetryBias" "0"
"DOUBLE"         "RetryEntry" "0"
"INT  "          "RetryMax" "0"
"BOOL "          "Sin" "false"
"BOOL "          "StalksInside" "false"
"BOOL "          "StalksOutside" "false"
"INT  "          "TrigIter" "5"
"DOUBLE"         "TrigLimit" "1.1"
"DOUBLE"         "Zoom" "0.64841693546845"
"SAMPLER_2D"     "backbuffer" "internal fragment variable"
"FLOAT_VEC2"     "pixelSize" "internal fragment variable"
"INT  "          "subframe" "internal fragment variable"
"FLOAT_MAT4"     "gl_ProjectionMatrix" "internal fragment variable"
 
bufferShaderProgram
"DOUBLE"         "Brightness" "1"
"DOUBLE"         "Contrast" "1"
"DOUBLE"         "Exposure" "1"
"DOUBLE"         "Gamma" "2"
"DOUBLE"         "Saturation" "1"
"INT  "          "ToneMapping" "1"
"SAMPLER_2D"     "frontbuffer" "internal fragment variable"
"FLOAT_MAT4"     "gl_ProjectionMatrix" "internal fragment variable"

Offline claude

« Reply #5 on: May 23, 2018, 02:16:50 PM »
Mesa amdgpu is the driver for my hardware (currently inaccessible because I need to RMA the CPU for an unrelated issue).
The Mesa driver supports GL 4.5 but only in core profile.  In compatibility profile the highest supported is GL 3.0.
Core profile removes obsolete features from GL, like glBegin() glVertex3f() etc, so the FragM C++ code needs updating.
The last issue I ran into was non-working BufferShader (and the fixed-function pipeline is gone in core profile), afaict.  The Qt GL wrapper is a pain because it doesn't seem to crash, so finding where the obsolete gl functions are called from is harder than necessary...
I started trying to do that in a fork: https://github.com/claudeha/FragM/tree/core-profile but I didn't get it working yet.
In addition to this, Qt has deprecated the GL widget used in FragM; it should be updated to QOpenGLWidget or similar, but that can wait for another time I think...
When I get a fresh CPU back (may take a couple of weeks) I can resume work on this.

Offline 3DickUlus

« Reply #6 on: May 23, 2018, 06:26:47 PM »
I have been hesitant to do that because once FragM is updated to new GL all of the frags will also need to be updated.
As of now we have compatibility mode and are still supporting all the legacy stuff; not the best, but a reasonable compromise until the engine can be overhauled... on my todo list  ;)

Offline claude

« Reply #7 on: May 23, 2018, 07:11:20 PM »
So far I've made the changes behind a #define, rather than deleting the old code.  Better would be to make it runtime selectable via the GUI (choose core profile or compatibility profile, defaulting to compatibility for backwards compatibility).  But I probably won't be able to work on it any more until mid June.

Offline 3DickUlus

« Reply #8 on: May 23, 2018, 11:41:29 PM »
Core vs compatibility is selectable via the #version statement afaik.
I don't think any adjustments are needed in the C++ code.

The NV version uses some nVidia-specific features while the non-NV version doesn't; there is a #define NVIDIAGL4PLUS that sets it. Try commenting it out; that should make it a bit more compatible with other cards.

edit: oops, it's been far too long since I've last played around with it  :embarrass:
« Last Edit: May 24, 2018, 03:45:18 AM by 3DickUlus »

Offline claude

« Reply #9 on: May 23, 2018, 11:48:38 PM »
Quote: "Core vs compatibility is selectable via the #version statement afaik. I don't think any adjustments are needed in the C++ code."

The core profile removes some features from the C API, like glBegin etc.  So the C++ code definitely needs changing to work with a core profile OpenGL context. See the first few lines of https://wiki.qt.io/How_to_use_OpenGL_Core_Profile_with_Qt
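For the fixed-function quad FragM draws, the core-profile replacement is a one-time vertex buffer upload plus a glDrawArrays call. A sketch, with the actual GL calls as comments since they need a live context (the array layout is illustrative, not FragM's):

```cpp
#include <array>

// Two triangles covering the viewport in clip space; replaces the legacy
// glBegin(GL_QUADS)/glVertex2f(...)/glEnd() sequence, which core profile removed.
constexpr std::array<float, 12> fullscreenQuad = {
    -1.f, -1.f,   1.f, -1.f,   1.f,  1.f,  // first triangle
    -1.f, -1.f,   1.f,  1.f,  -1.f,  1.f,  // second triangle
};

// With a core context current, roughly:
//   glGenVertexArrays(1, &vao); glBindVertexArray(vao);
//   glGenBuffers(1, &vbo); glBindBuffer(GL_ARRAY_BUFFER, vbo);
//   glBufferData(GL_ARRAY_BUFFER, sizeof(fullscreenQuad),
//                fullscreenQuad.data(), GL_STATIC_DRAW);
//   glEnableVertexAttribArray(0);
//   glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, nullptr);
//   glDrawArrays(GL_TRIANGLES, 0, 6);
```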

The #version in the GLSL is another thing; I'm not sure how it interacts with the GL context version.  In my code I've usually kept them matching (e.g. OpenGL 3.3 core profile with "#version 330 core" in the shader, or OpenGL 4.1 core profile with "#version 410 core" in the shader).
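From GLSL 3.30 onward the #version number tracks the GL version directly (330 pairs with 3.3, 410 with 4.1), so keeping them matched is mechanical. A hypothetical helper, not FragM code:

```cpp
#include <cassert>
#include <string>

// Build a GLSL #version directive matching a GL context version,
// e.g. (4, 1, core) -> "#version 410 core". Only valid from GL 3.3 up,
// where GLSL and GL version numbers align (older versions differ:
// GL 3.0 pairs with GLSL 130).
std::string glslVersionDirective(int major, int minor, bool core) {
    assert(major > 3 || (major == 3 && minor >= 3));
    return "#version " + std::to_string(major) + std::to_string(minor) + "0"
         + (core ? " core" : " compatibility");
}
```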

Offline 3DickUlus

« Reply #10 on: May 24, 2018, 02:32:04 AM »
Here are lines 550 - 580 of MainWindow.cpp...
Code: [Select]
...
    /// Default QGLFormat settings
    //    Double buffer: Enabled.
    //    Depth buffer: Enabled.
    //    RGBA: Enabled (i.e., color index disabled).
    //    Alpha channel: Disabled.
    //    Accumulator buffer: Disabled.
    //    Stencil buffer: Enabled.
    //    Stereo: Disabled.
    //    Direct rendering: Enabled.
    //    Overlay: Disabled.
    //    Plane: 0 (i.e., normal plane).
    //    Multisample buffers: Disabled.

    QGLFormat fmt;
    fmt.setDoubleBuffer(false);
    fmt.setStencil(false);
    fmt.setDepthBufferSize(32);
    QSettings settings;
    int i = settings.value("refreshRate", 20).toInt();
    fmt.setSwapInterval(i);

    engine = new DisplayWidget(fmt, this, splitter);
    engine->makeCurrent();
    engine->show();
    if(!engine->init()) CRITICAL(tr("Engine failed to start!"));
    engine->updateRefreshRate();
...

The commented lines are the default settings for when it's created like  engine = new DisplayWidget();
We take care of buffer swap for rendering and accumulation, so...  fmt.setDoubleBuffer(false);
We don't use a stencil, so...   fmt.setStencil(false);
And we want a full 32-bit depth buffer, so...  fmt.setDepthBufferSize(32);  (the default is a combined 8-bit stencil + 24-bit depth = 32 bits)

This sets up our initial context to get things going, and it gets modified at fragment compile time as per the #version settings. In core profile, v1.4 and up, none of the shaders will compile; the only thing that lets them work in their current state is the compatibility profile.

The "new" way of declaring uniforms is in blocks...

Code: [Select]
uniform blockname
{
 float someuni;
 float oneuni;
 vec2 twouni;
};

This format lends itself well to describing a tab full of widgets :D With some adjustments to the parser, this should work for both single uniforms and blocks of uniforms.
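Mapping such a block to widgets means knowing each member's byte offset in the uniform buffer, which under layout(std140) follows fixed alignment rules (float aligns to 4 bytes, vec2 to 8, vec3/vec4 to 16). A small sketch of the offset computation, assuming std140 layout (names hypothetical):

```cpp
#include <cstddef>

// std140 rule of thumb: each member starts at the next multiple of its
// base alignment; the running offset then advances by the member's size.
struct Std140Cursor {
    std::size_t offset = 0;
    std::size_t place(std::size_t size, std::size_t align) {
        offset = (offset + align - 1) / align * align; // round up to alignment
        std::size_t at = offset;
        offset += size;
        return at;
    }
};

// For the block above: someuni lands at offset 0, oneuni at 4, twouni at 8.
```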

Offline claude

« Reply #11 on: May 24, 2018, 03:00:28 AM »
got it compiling and not crashing on my laptop with
Code: [Select]
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/opt/FragM -DNVIDIAGL4PLUS=OFF -DOPENGLCORE33=ON

still not working though (blank image)

Offline 3DickUlus

« Reply #12 on: May 24, 2018, 04:38:12 AM »
Rebuilding the engine for modern compliance is the next step in my plans for FragM. I've been wanting to do it for quite a while now, but just haven't had the time to dedicate to the task. Something else that makes me hesitant: once I start reworking the engine, I think I will find that the entire thing needs a lot of re-coding to be compliant with the latest Qt and GL libs, not to mention tidying up the coding style.

Hardware sensing and a user-selectable context profile for nVidia, AMD and Intel GPUs at startup might be a good thing to do first.

Perhaps starting with the simplest Qt GL example and building from that, following recommended practice for Qt GL, would make for an overall better program in the end. My main goal would be to preserve Syntopia's original intent: an experimental GLSL development environment that can also be used to render fractals.

Another angle might be to go back to his original code, bring that up to date re: GL (a smaller codebase is easier to work with), and add all of the features/changes I've made, but with a good deal of hindsight that I never had when I started work on this, many years ago now.

Your input on this would be most valuable. :beer:

Offline 3DickUlus

« Reply #13 on: May 24, 2018, 06:51:21 AM »
try this...

in lines 550 - 580 of MainWindow.cpp, after fmt.setSwapInterval(i);
try adding the line...
fmt.setVersion( 4, 5 );

The version major/minor could be set via "Preferences"... it could be set automatically on first run and be available for the user to change, with just a very few lines of code :D This could be tested and used to determine the engine's GL calls, so legacy frags will still be useful and run... just a thought.

Currently we're not setting a particular version, just letting Qt set it to whatever it needs/wants, and going with the #version setting in the frag code.
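Persisting the choice as a version string in Preferences would only need a couple of lines plus some defensive parsing. A hypothetical sketch (the "glVersion" key name and fallback are made up; the QSettings/QGLFormat calls are left as comments since they need Qt at runtime):

```cpp
#include <string>
#include <utility>

// Parse a stored preference like "4.5" into (major, minor), falling back
// to (3, 3) on malformed input. The result would feed fmt.setVersion().
std::pair<int, int> parseGlVersion(const std::string& s) {
    std::size_t dot = s.find('.');
    if (dot == std::string::npos || dot == 0 || dot + 1 >= s.size())
        return {3, 3};
    try {
        return {std::stoi(s.substr(0, dot)), std::stoi(s.substr(dot + 1))};
    } catch (...) {
        return {3, 3};
    }
}

// e.g.  QSettings settings;
//       auto v = parseGlVersion(settings.value("glVersion", "4.5").toString().toStdString());
//       fmt.setVersion(v.first, v.second);
```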


Offline claude

« Reply #14 on: May 24, 2018, 08:57:38 PM »
Quote: "fmt.setVersion( 4, 5 ); ... the version major/minor could be set via 'Preferences'"

I hoped so, but sadly it's not that simple: the inheritance of DisplayWidget in the C++ depends on the version:
https://github.com/claudeha/FragM/blob/core-profile/Fragmentarium-Source/Fragmentarium/GUI/DisplayWidget.h#L81
(I just added an extra case in this fork)

