
    OGL shaders 64bit vs 32bit

    dr_duban

      Don't really know where to ask, so I'm asking here.

      I have this strange issue with OpenGL: the output of the fragment shader is different when the application is built for 64-bit Windows than when it is built for 32-bit.

       

      I have a framebuffer backed by textures, with two color attachments.

      In the fragment shader I do something like this:

      ==========================================================
      #version 330 core

      layout(location = 0) out vec4 p_color;
      layout(location = 1) out vec4 x_info;
      ...
      uniform int i_id;

      void main() {
          p_color = vec4(1.0);
          // Pack the raw bits of i_id into the alpha channel.
          x_info = vec4(0.0, 0.0, 0.0, intBitsToFloat(i_id));
      }
      ==========================================================

      i_id is chosen such that reinterpreting its bits as a float yields a NaN.
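
      To illustrate what I mean, here is a minimal CPU-side sketch in C (the i_id value below is just a made-up example, not one from my application):

      ==========================================================
      #include <math.h>
      #include <stdio.h>
      #include <string.h>

      /* CPU-side equivalent of GLSL's intBitsToFloat():
       * reinterpret the bits of a 32-bit int as an IEEE 754 float. */
      static float int_bits_to_float(int i) {
          float f;
          memcpy(&f, &i, sizeof f);
          return f;
      }

      int main(void) {
          int i_id = 0x7FC00123;            /* exponent all ones, mantissa non-zero -> NaN bit pattern */
          float f = int_bits_to_float(i_id);
          printf("isnan = %d\n", isnan(f)); /* prints 1 */
          return 0;
      }
      ==========================================================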

       

      In the application I read the second color attachment back with OGL.GetTexImage(OGL.TEXTURE_2D, 0, OGL.RGBA, OGL._FLOAT, buf_ptr);
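
      (I'm assuming OGL is just a thin wrapper over the C API; the equivalent raw calls would be roughly this sketch, where x_info_tex and buf are placeholder names:)

      ==========================================================
      #include <GL/gl.h>

      /* Bind the texture backing color attachment 1 and read it back
       * as RGBA floats; buf must hold width * height * 4 floats. */
      static void read_x_info(GLuint x_info_tex, float *buf) {
          glBindTexture(GL_TEXTURE_2D, x_info_tex);
          glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_FLOAT, buf);
      }
      ==========================================================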

      The problem is that in the 64-bit build the output is correct (as I wanted it to be), while in the 32-bit build the values come back as 0xFFC00000 (so they are converted to the floating-point indefinite).
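
      (For reference, a minimal sketch of how the raw bits can be inspected, assuming buf is the float array filled by the readback above:)

      ==========================================================
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Dump the raw bit pattern of the first pixel's alpha channel;
       * in the 32-bit build this prints 0xFFC00000 instead of the
       * original i_id bits. */
      static void dump_alpha_bits(const float *buf) {
          uint32_t bits;
          memcpy(&bits, &buf[3], sizeof bits);   /* RGBA layout, so index 3 is alpha */
          printf("alpha bits = 0x%08X\n", (unsigned)bits);
      }
      ==========================================================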

       

      So my question: does anyone else have this problem? How can I fix it so that the 32-bit output matches the 64-bit output? Is it a driver bug?

       

      I'm running it on an HP ProBook with

      Intel(R) HD Graphics 620

      OpenGL 4.4.0 - Build 21.20.16.4541