
HD 4000 OpenGL bug in glVertexAttribPointer

idata (Employee)

Dear Support Team,

I have found a serious bug in the current graphics driver for the HD 4000. Our application shows severely incorrect OpenGL rendering for large datasets.

A long debugging session showed that glDrawArrays accesses the wrong vertices in the bound vertex buffer object. This happens only if glVertexAttribPointer is called with a stride or offset larger than 2048 bytes; the effective stride is then corrupted, but constant. For values smaller than 2048 bytes our application runs fine.
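
For illustration, a call pattern along the following lines is enough to hit the problem (the attribute layout and the 2080-byte stride are made up for this sketch; any interleaved layout with a stride above 2048 bytes behaves the same, assuming the GL 2.0 entry points are available, e.g. via GLEW):

#include <GL/glew.h>

// Hypothetical interleaved layout: each vertex record is 2080 bytes wide,
// so the stride passed to glVertexAttribPointer exceeds 2048 bytes.
void drawWithLargeStride(GLuint vbo, GLsizei vertexCount)
{
    const GLsizei stride = 2080;                 // bytes per vertex, > 2048
    glBindBuffer(GL_ARRAY_BUFFER, vbo);          // vbo created and filled elsewhere
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (const void*)0);
    glEnableVertexAttribArray(0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);  // fetches vertices from wrong offsets on HD 4000
}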

My first guess at the cause is that the stride and offset values are accidentally cast to 16-bit floating-point values, which have a 10-bit mantissa and therefore lose precision for integers wider than 11 bits.
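
As a rough illustration of that hypothesis (not of the driver's actual code), the sketch below rounds integers the way a 16-bit float with a 10-bit mantissa would: every integer up to 2^11 = 2048 survives unchanged, while many larger values get altered.

#include <cmath>
#include <cstdio>

// Round a value to the nearest number representable with an 11-bit significand
// (10 mantissa bits plus the implicit leading bit), as in IEEE 754 half precision.
// Ties are rounded away from zero, which is close enough for this illustration.
double roundToHalf(double v)
{
    if (v == 0.0) return 0.0;
    int exponent = (int)std::floor(std::log2(std::fabs(v)));
    double ulp = std::ldexp(1.0, exponent - 10);  // spacing of representable values at this magnitude
    return std::round(v / ulp) * ulp;
}

int main()
{
    const int strides[] = { 2047, 2048, 2049, 2051, 4097 };
    for (int s : strides)
        std::printf("%d -> %.0f\n", s, roundToHalf(s));  // 2049 -> 2050, 2051 -> 2052, 4097 -> 4096
    return 0;
}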

Please fix this issue with high priority, as there is no easy workaround. We bought hundreds of new Ivy Bridge laptops for our users, and all of them are affected. I am also sure that many other applications will fail, since vertex buffer strides and offsets larger than 2048 bytes are very common. I assume most developers will not even know why their applications fail ...

Thanks in advance,

Woodstock

OpenGL version: 3.3 ("3.3.0 - Build 8.15.10.2761")

OpenGL shading language version: 3.3 ("3.30 - Intel Build 8.15.10.2761")

OpenGL renderer: "Intel(R) HD Graphics 4000"

OpenGL context: core profile, forward compatible

idata (Employee)

Update:

The same bug is also present in driver version 9.17.10.2792, and it occurs in both the core and the compatibility profile.

ROBERT_U_Intel (Employee)

Could you post steps to reproduce the issue?

idata (Employee)

Sure, only one step is necessary: call glVertexAttribPointer with a stride or offset larger than 2048 bytes ...

I will post a minimal GLUT example tomorrow.

idata (Employee)

Here is the minimal GLUT example, which draws just one white triangle. It takes the stride (in bytes) as a command-line parameter and rounds it down to a multiple of 4.

Up to 2048 the program runs fine; with 2052 and more it does not render anything.

By the way, in the example I used glVertexPointer, which shows the same bug as glVertexAttribPointer.

#include <GL/glut.h>   // GLUT plus the OpenGL 1.x entry points used below
#include <cstdlib>     // std::atoi

void init(int stride)
{
    // Convert the byte stride into a float count (integer division rounds the
    // byte stride down to a multiple of 4); anything below 8 bytes falls back
    // to a tightly packed layout of two floats per vertex.
    stride = (stride < 8 ? 2 : stride / 4);

    // One array holding three 2D vertices spaced "stride" floats apart.
    static float *vertices = new float[stride * 2 + 2];
    vertices[stride * 0 + 0] = 0.0f;
    vertices[stride * 0 + 1] = 0.0f;
    vertices[stride * 1 + 0] = 1.0f;
    vertices[stride * 1 + 1] = 0.0f;
    vertices[stride * 2 + 0] = 0.0f;
    vertices[stride * 2 + 1] = 1.0f;

    // glVertexPointer takes the stride in bytes; 0 means tightly packed.
    glVertexPointer(2, GL_FLOAT, (stride != 2 ? stride * 4 : 0), vertices);
    glEnableClientState(GL_VERTEX_ARRAY);
}

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glutSwapBuffers();
}

void reshape(int w, int h)
{
    glViewport(0, 0, w, h);
}

int main(int argc, char* argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA);
    glutInitWindowSize(600, 600);
    glutCreateWindow("Intel bug");
    init(argc > 1 ? std::atoi(argv[1]) : 0);
    glutDisplayFunc(display);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}
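
For reference, on Linux the example can be built and run with something like the following (the file name stride_test.cpp and the freeglut link flags are assumptions; adjust for your platform):

g++ stride_test.cpp -o stride_test -lglut -lGL
./stride_test 2048   # triangle is drawn
./stride_test 2052   # nothing is rendered on the affected driver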
