
    Loading a texture with an integer internal format results in all zeroes

    jtoma

      Hi all,

       

      I'm using OpenGL to convert video frames from 10-bit YUV420p to 8-bit RGB. The YUV frame data is loaded as a texture with:

      glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, m_frameWidth, m_frameHeight + m_frameHeight / 2, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, videoFrame.data());
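
      For reference, here is a minimal sketch of how I understand an integer texture like this needs to be set up around that upload; the texture object name and parameter calls below are illustrative, not copied from my real code. As far as I know, integer formats such as GL_R16UI have to be sampled with GL_NEAREST filtering, and the unpack alignment has to match the tightly packed 16-bit rows:

      GLuint frameTex = 0;                    // illustrative name
      glGenTextures(1, &frameTex);
      glBindTexture(GL_TEXTURE_2D, frameTex);
      
      // rows of 16-bit samples are tightly packed
      glPixelStorei(GL_UNPACK_ALIGNMENT, 2);
      
      // integer textures require NEAREST filtering; the default
      // GL_NEAREST_MIPMAP_LINEAR min filter also leaves the texture
      // incomplete when no mipmaps are uploaded
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      
      // ...followed by the glTexImage2D upload shown above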

       

      In the fragment shader it's accessed with:

      #version 130
      
      // irrelevant variables definitions here
      
      uniform usampler2D frameTex;
      
      void main()
      {
          // the component value is stored in the 10 least significant bits,
          // so normalize it by dividing by the maximum value that fits in 10 bits (2^10 - 1 = 1023)
          float Y = float(texture(frameTex, vec2(gl_TexCoord[0].s, gl_TexCoord[0].t * YHeight)).r) / 1023.0;
          float U = float(texture(frameTex, vec2(gl_TexCoord[0].s / 2, UOffset + gl_TexCoord[0].t * UHeight)).r) / 1023.0;
          float V = float(texture(frameTex, vec2(gl_TexCoord[0].s / 2, VOffset + gl_TexCoord[0].t * VHeight)).r) / 1023.0;
      
          gl_FragColor = vec4(HDTV * vec3(Y, U, V), 1.0);
      }
      

       

      Now, every texel I fetch with texture() has the value (0, 0, 0, 1).

      The very same code works when I switch the application to the discrete NVIDIA card.
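
      To narrow this down on the Intel GPU, one thing I can still try is reading the texture back right after the upload and comparing it with the source buffer, to see whether the data ever reaches the texture or whether only the sampling in the shader goes wrong. A rough sketch, assuming a current GL context, the texture from above bound to GL_TEXTURE_2D, and an illustrative readback vector:

      std::vector<GLushort> readback(static_cast<size_t>(m_frameWidth) * (m_frameHeight + m_frameHeight / 2));
      
      // read level 0 back in the same layout it was uploaded in
      glPixelStorei(GL_PACK_ALIGNMENT, 2);
      glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, readback.data());
      
      // the first few values should match the start of videoFrame instead of being all zero
      for (int i = 0; i < 8; ++i)
          printf("texel %d = %u\n", i, static_cast<unsigned>(readback[i]));

      Checking glGetError() right after the glTexImage2D call should also show whether the upload itself is rejected outright.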

       

      What could be the problem here?

       

      My system configuration:

      Windows 8 64-bit, i7-3740QM with HD Graphics 4000, driver version 9.17.10.2843 (the newest available for my Lenovo laptop), and a discrete NVIDIA Quadro K1000M.