5 Replies Latest reply on Mar 15, 2016 11:19 AM by Bryce@Intel

    Possible bug(s) in GPU Driver ?




      With a Haswell i7-4770R, Windows 8.1 64-bit up to date, and the Intel GPU drivers,
      using an OpenGL 3.3 context created with

        static const int context_attribs[] = {
          WGL_CONTEXT_MAJOR_VERSION_ARB, 3,   // request the OpenGL 3.3 context
          WGL_CONTEXT_MINOR_VERSION_ARB, 3,   // mentioned above
          0 };

        GLWin.hRC = wglCreateContextAttribsARB (GLWin.hDC, 0, context_attribs);


      the following code loops endlessly without raising any OpenGL error:


        GLsync g = 0;
        GLint result = GL_SIGNALED;

        glFinish ();  // To be sure the OpenGL server is clear and waiting
        g = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        glGetSynciv (g, GL_SYNC_STATUS, sizeof (GLint), NULL, &result);
        // glFlush (); /* Needed to avoid the endless loop ??? */
        while (result == GL_UNSIGNALED) {
          printf ("I'm always in the loop\n");
          glGetSynciv (g, GL_SYNC_STATUS, sizeof (GLint), NULL, &result);
        }
        glDeleteSync (g);


      To avoid the endless loop, glFlush must be called after the first glGetSynciv, but the
      OpenGL API does not require this.


      Without the glFlush, the same code runs fine in Linux (Fedora 21 64-bit, up to date).
      I have also tested an Nvidia card in both Linux and Windows: it works without glFlush.



      The only workaround I found for glGetSynciv is to replace it with glClientWaitSync, but
      then I have a performance problem.


        GLsync g = 0;
        GLenum result;

        g = glFenceSync (GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        result = glClientWaitSync (g, GL_SYNC_FLUSH_COMMANDS_BIT, 0);
        while (result == GL_TIMEOUT_EXPIRED) {
          result = glClientWaitSync (g, 0, 0);
        }
        glDeleteSync (g);



      The result is correct, but the performance difference between Linux and Windows is really
      odd. When I remove this synchronisation, my off-screen rendering runs at 3200 +/- 100 FPS
      on both Linux and Windows. With the synchronisation enabled, there is no difference on
      Linux, but on Windows the FPS drops to ~1550.
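      As a sanity check, the FPS figures above can be converted into a fixed per-frame cost
      (a quick calculation, using my measured numbers):

      ```c
      #include <stdio.h>

      int main (void) {
          /* Frame times derived from the measured FPS figures above */
          double with_sync = 1.0 / 1550.0;   /* seconds per frame with synchronisation */
          double no_sync   = 1.0 / 3200.0;   /* seconds per frame without it */

          /* The slowdown corresponds to a constant cost added to every frame */
          printf ("overhead per frame: %.0f us\n", (with_sync - no_sync) * 1e6);
          return 0;
      }
      ```

      That is roughly a third of a millisecond added to every frame: a constant,
      scheduling-sized cost rather than one proportional to the rendering work.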


      The only explanation I can see for this performance difference is that the Windows
      driver's glClientWaitSync does the equivalent of a Sleep (0), even with a timeout of 0
      everywhere?
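      Sleep (0) on Windows gives up the rest of the thread's time slice. I cannot measure
      that inside the driver, but the order of magnitude of such a yield can be ballparked
      with the POSIX analogue sched_yield (a minimal sketch; that the driver actually yields
      is my hypothesis, not something I can confirm):

      ```c
      #include <stdio.h>
      #include <time.h>
      #include <sched.h>

      int main (void) {
          const int N = 100000;
          struct timespec t0, t1;

          clock_gettime (CLOCK_MONOTONIC, &t0);
          for (int i = 0; i < N; i++)
              sched_yield ();               /* POSIX analogue of Windows Sleep (0) */
          clock_gettime (CLOCK_MONOTONIC, &t1);

          double total_us = (t1.tv_sec - t0.tv_sec) * 1e6
                          + (t1.tv_nsec - t0.tv_nsec) / 1e3;
          printf ("%.3f us per sched_yield\n", total_us / N);
          return 0;
      }
      ```

      One yield per glClientWaitSync call, a few hundred polls per frame, would be enough to
      account for an overhead in the hundreds of microseconds.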


      To confirm my hypothesis, I tried

        result = glClientWaitSync (g, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000);

      without any loop. Windows is not a real-time system, so a real sleep with such a timeout
      should be really expensive in terms of performance. Yet the FPS is ~1500 here too, the
      same as in the previous test.


      Of course, on Linux the FPS stays at 3200.


      Thanks for reading and for any help!