3 Replies Latest reply on Jun 24, 2017 9:06 AM by Stefan3D

    GLSL - Shaders issues


      Hi, I am here to report potential bugs that seem to affect all Intel HD/Iris GPUs.


      Test platform

      CPU: i7-6700HQ

      GPU: Intel HD 530 (drivers:

      OS: Windows 10

      Software: CEMU


      Shaders use more RAM than needed

      It seems like Intel's driver can commit up to several megabytes of RAM (CPU RAM, not VRAM) per linked GLSL shader program. As one would suspect, the amount of consumed RAM is somewhat proportional to the complexity of the shader program. However, it still seems much higher than it needs to be. For example, some of our more complex shader programs easily exceed 2MB. When dealing with large numbers of shaders, this becomes a huge problem.


      Longer description:

      In our application (CEMU) we generate shaders dynamically and they often end up being quite complex (Example Vertex and Fragment shader). Furthermore, we deal with large numbers of shaders, in the range of 5k to 20k. The problem we are facing is that the graphics driver allocates gigabytes of RAM just for compiled shaders. The question is: is this intended behavior or a bug? We have already double- and triple-checked to make sure this is not a mistake on our end.


      Here is a test application to demonstrate the issue. Source is available here (VS2015). It links one set of vertex + fragment shaders 1000 times and then prints the amount of RAM committed by the application. The application itself does not allocate any extra memory. Additionally, the .zip comes with multiple sets of example shaders taken from our application to show the difference in RAM usage. For more details, see main.cpp.
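The core of the test is roughly the following (a minimal sketch, not the actual main.cpp; it assumes a live OpenGL context, a loader such as GLEW, and the Windows PSAPI to read the process commit size):

```c
#include <windows.h>
#include <psapi.h>
#include <stdio.h>
#include <GL/glew.h>

/* Commit charge (private bytes) of the current process. */
static size_t committed_bytes(void)
{
    PROCESS_MEMORY_COUNTERS_EX pmc;
    GetProcessMemoryInfo(GetCurrentProcess(),
                         (PROCESS_MEMORY_COUNTERS *)&pmc, sizeof pmc);
    return pmc.PrivateUsage;
}

/* Link the same vertex+fragment pair 1000 times and report how much
 * RAM the driver keeps committed. Error checking omitted for brevity. */
void run_link_test(const char *vsSrc, const char *fsSrc)
{
    size_t before = committed_bytes();
    for (int i = 0; i < 1000; i++) {
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(vs, 1, &vsSrc, NULL);
        glShaderSource(fs, 1, &fsSrc, NULL);
        glCompileShader(vs);
        glCompileShader(fs);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, vs);
        glAttachShader(prog, fs);
        glLinkProgram(prog);
        /* Detaching + deleting the shader objects helps only a bit;
         * the linked program itself keeps the memory committed. */
        glDetachShader(prog, vs);
        glDetachShader(prog, fs);
        glDeleteShader(vs);
        glDeleteShader(fs);
        /* Deliberately NOT calling glDeleteProgram(prog): doing so
         * releases all memory, which rules out a plain leak. */
    }
    printf("commit delta: %.1f MB\n",
           (committed_bytes() - before) / (1024.0 * 1024.0));
}
```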


      Shaders get corrupted when stored and reloaded

      In the same application we save compiled shaders so they can be reloaded at the next launch without recompiling. It appears the OpenGL implementation in Intel's drivers is incorrect here: shaders get corrupted when stored and reloaded via OpenGL's glGetProgramBinary() & glProgramBinary().


      Some other observations that have been made:

      Occurs on all driver versions and all Windows versions

      RAM usage is proportional to complexity of shader (no surprise here)

      Conditionals (if statements and the '?:' operator) seem to massively increase RAM usage and compile times

      The size of uniform buffer arrays only slightly affects RAM usage

      Detaching and deleting shaders (glDetachShader+glDeleteShader) after glLinkProgram helps only a bit

      Calling glDeleteProgram() correctly releases all memory, indicating there is no leak

      Same problem occurs when the shader programs are loaded via glProgramBinary


      Thanks in advance!