
    Intel HD 5000: glEnableVertexAttribArrayARB via GLEW still does not work

    Maksw

      I'm recreating this thread, having received no acceptable answer to the original, and having had no responses despite providing an example as requested by Intel support.

       

      The original thread: https://communities.intel.com/thread/42590

      Perhaps this time, we might get some Christmas magic...

       

      This GLSL tutorial on Normal Mapping...

       

      BumpMapping with GLSL

       

      The Windows download: http://fabiensanglard.net/bumpMapping/win32.zip

       

      Produces this:

       

      [Screenshot: zenFrag 2013-11-11 21-36-05-14.png]

       

      I originally reported this against the 9.18.10.3165 driver set. I reproduced the above with 9.18.10.3165, then installed the latest 15.31.17.64.3257, which still reproduces it.

       

      This is infuriating, I'm afraid to say. I purchased this HD-based machine to do advanced graphics development whilst on the go, and it's effectively useless for that. I've noticed on the forums that plenty of users have reported issue after issue with the OpenGL / GLSL support on the HD range of hardware.

       

      I find it absolutely astounding that Intel can release several major versions of the HD driver and still fail to address fundamental flaws in its OpenGL support.

       

      How can a company like Intel produce such fundamentally broken software?

       

      Are there any plans to fix these issues? Or have I made a grave mistake in presuming that Intel HD hardware had finally come of age, and should I sell my laptop in favour of an ATI or nVidia solution?

       

      It seems the latest drivers improve matters: the geometry with the extra vertex attribute now renders correctly, instead of as a ball of radius 1.0. However, none of the per-vertex tangent information is coming through to the shaders (a minimal sketch of the failing path follows below).

      Also, the shaders involved no longer compile, because they use 'uint' types, which compile perfectly fine on NVidia and ATI based machines. Removing those types and hard-coding the shaders to generate tangent information results in normal mapping working 'as expected', but that is not an acceptable solution.
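
      For reference, the failing path boils down to the calls below. This is a minimal sketch of my understanding, not the tutorial's exact code: the attribute name "tangent", the function name, and the client-side arrays are all illustrative.

          #include <GL/glew.h>

          // Bind a per-vertex tangent attribute through the GLEW ARB entry
          // points and draw. The shader should receive the tangent data;
          // on the HD 5000 driver the attribute reads back as zeros.
          void drawWithTangents(GLhandleARB program,
                                const float* positions, // 3 floats per vertex
                                const float* tangents,  // 3 floats per vertex
                                int vertexCount)
          {
              glUseProgramObjectARB(program);

              // Ask the linker which generic slot it assigned to "tangent".
              GLint tangentLoc = glGetAttribLocationARB(program, "tangent");

              glEnableClientState(GL_VERTEX_ARRAY);
              glVertexPointer(3, GL_FLOAT, 0, positions);

              if (tangentLoc >= 0)
              {
                  // This is where the data appears to be lost on the HD 5000:
                  // the array is enabled and points at valid client memory,
                  // yet the vertex shader sees zero tangents.
                  glEnableVertexAttribArrayARB(tangentLoc);
                  glVertexAttribPointerARB(tangentLoc, 3, GL_FLOAT,
                                           GL_FALSE, 0, tangents);
              }

              glDrawArrays(GL_TRIANGLES, 0, vertexCount);

              if (tangentLoc >= 0)
                  glDisableVertexAttribArrayARB(tangentLoc);
              glDisableClientState(GL_VERTEX_ARRAY);
          }

      The equivalent calls deliver the tangents to the vertex shader on the NVidia and ATI machines I have access to; only the Intel driver loses them.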
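
      As for the 'uint' failures, I suspect (this is an assumption on my part; I haven't reproduced the shader sources or the driver's error log here) that the shaders omit a #version directive, so they compile as GLSL 1.10, which has no uint type at all; it only arrived in GLSL 1.30. NVidia and ATI compilers are lenient about this, whereas a strict compiler is entitled to reject it. Illustrative fragments, not the tutorial's shaders:

          #include <GL/glew.h>
          #include <cstdio>

          // With no #version line, GLSL defaults to 1.10, where 'uint' does
          // not exist, so a strict compiler may reject the first source even
          // though lenient compilers let it through.
          static const char* failingSrc =
              "uniform uint mode;\n"
              "void main() { gl_FragColor = vec4(float(mode)); }\n";

          // The workaround described above: fall back to plain 'int' (or
          // declare "#version 130", if the driver honours it).
          static const char* workaroundSrc =
              "uniform int mode;\n"
              "void main() { gl_FragColor = vec4(float(mode)); }\n";

          // Compile one fragment shader and print the driver's log on failure.
          static GLhandleARB compileFragment(const char* src)
          {
              GLhandleARB shader =
                  glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
              glShaderSourceARB(shader, 1, &src, 0);
              glCompileShaderARB(shader);

              GLint ok = 0;
              glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
              if (!ok)
              {
                  char log[1024] = { 0 };
                  glGetInfoLogARB(shader, sizeof(log), 0, log);
                  fprintf(stderr, "compile failed:\n%s\n", log);
              }
              return shader;
          }

      Swapping the declarations like this is exactly the "remove the types" workaround mentioned above, which is why I don't consider it a fix; the attribute path is what actually needs addressing.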