Dear Support Team,
I have found a serious bug in the current graphics driver for the HD 4000. Our application produces badly corrupted OpenGL rendering for large datasets.
A long debugging session showed that glDrawArrays reads the wrong vertices from the bound vertex buffer object. This happens only when glVertexAttribPointer is called with a stride or offset larger than 2048 bytes; the effective stride is then wrong, but constant. For strides up to 2048 bytes our application runs fine.
My first guess at the cause is that the stride and offset values are accidentally cast to 16-bit floating-point values, which have a 10-bit mantissa and therefore lose precision for integers that need more than 11 bits, i.e. anything above 2048.
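To illustrate that guess, here is a small self-contained sketch (not part of our application; roundToHalfPrecision is a hypothetical helper written only for this report) of how an 11-bit significand rounds integers above 2048:

#include <cassert>
#include <cmath>

// Hypothetical helper: round an integer to the nearest value representable
// in a 16-bit float (10 mantissa bits + implicit leading bit = 11 bits of
// integer precision). Integers above 2048 fall on a coarser grid.
int roundToHalfPrecision(int value) {
    if (value <= 0) return value;
    int exponent = static_cast<int>(std::floor(std::log2(static_cast<double>(value))));
    int shift = exponent - 10;                  // significand bits that are dropped
    if (shift <= 0) return value;               // exact up to 2048
    int step = 1 << shift;                      // spacing of representable values
    return ((value + step / 2) / step) * step;  // round to nearest multiple
}

int main() {
    assert(roundToHalfPrecision(2048) == 2048);  // still exact
    assert(roundToHalfPrecision(2049) != 2049);  // first precision loss
    return 0;
}

This is only a guess at the failure mechanism, of course; the constant-but-wrong stride we observe would be consistent with some such truncation in the driver.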
Please fix this issue with high priority, as there is no easy workaround. We have bought hundreds of new Ivy Bridge laptops for our users, and all of them are affected. I am also sure that many other applications will fail, since vertex buffer strides and offsets larger than 2048 bytes are very common, and I assume most developers will not even know why their applications fail ...
Thanks in advance,
OpenGL version: 3.3 ("3.3.0 - Build 18.104.22.16861")
OpenGL shading language version: 3.3 ("3.30 - Intel Build 22.214.171.12461")
OpenGL renderer: "Intel(R) HD Graphics 4000"
OpenGL context: core profile, forward compatible
Here is a minimal GLUT example, which only draws one white triangle. It takes the stride in bytes as a parameter on the command line and rounds it down to a multiple of 4.
Up to a stride of 2048 the program runs fine; with 2052 or more nothing is rendered.
By the way, the example uses the legacy glVertexPointer, which shows the same bug as glVertexAttribPointer.
#include <cstdlib>
#include <GL/glut.h>

void display() {
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 3);  // reads the wrong vertices for strides > 2048
    glutSwapBuffers();
}

void reshape(int w, int h) {
    glViewport(0, 0, w, h);
}

void init(int stride) {
    stride = (stride < 8 ? 2 : stride / 4);  // bytes -> floats, minimum of 2
    static float *vertices = new float[stride * 2 + 2];
    vertices[stride * 0 + 0] = 0.0f;
    vertices[stride * 0 + 1] = 0.0f;
    vertices[stride * 1 + 0] = 1.0f;
    vertices[stride * 1 + 1] = 0.0f;
    vertices[stride * 2 + 0] = 0.0f;
    vertices[stride * 2 + 1] = 1.0f;
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, (stride != 2 ? stride * 4 : 0), vertices);
}

int main(int argc, char *argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutCreateWindow("stride test");
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    init(argc > 1 ? std::atoi(argv[1]) : 0);
    glutMainLoop();
    return 0;
}