I am developing applications to run on Intel HD graphics.
I have a NUC with HD 5000 graphics and a notebook with HD 5300.
At very high resolutions approaching 4K, such as 3200x1800, I am seeing a significant performance decrease.
At lower resolutions such as 1920x1080, the difference between the DX11 swap effects and OpenGL is insignificant. I am concerned that either I've misconfigured something, or the DirectX 11 runtime or drivers are performing unnecessary copies or work.
When configuring the swap chain in DirectX 11, I can select a swap effect of DXGI_SWAP_EFFECT_DISCARD or DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL. DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL requires vsync and triple buffering but can maintain 60 Hz, while DXGI_SWAP_EFFECT_DISCARD runs at about 40 Hz for the same display content. Another way to gain some performance is to request fullscreen exclusive mode via the swap chain description.
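For reference, this is roughly how I fill out the swap chain description for the flip case (a sketch only: the window handle, back buffer size, and format here are placeholders for my actual values):

```cpp
#include <dxgi.h>
#include <d3d11.h>

// Sketch of the swap chain description, assuming an existing HWND (hwnd)
// and 1920x1080 back buffers; adjust to your window and display mode.
DXGI_SWAP_CHAIN_DESC desc = {};
desc.BufferDesc.Width  = 1920;
desc.BufferDesc.Height = 1080;
desc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count  = 1;    // flip-model swap chains cannot use MSAA back buffers
desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
desc.BufferCount       = 3;    // triple buffering for FLIP_SEQUENTIAL
desc.OutputWindow      = hwnd;
desc.Windowed          = TRUE; // FALSE requests fullscreen exclusive at creation
desc.SwapEffect        = DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL;
// For the slow case, substitute DXGI_SWAP_EFFECT_DISCARD (BufferCount can be 1).
```

The description is then passed to D3D11CreateDeviceAndSwapChain (or IDXGIFactory::CreateSwapChain) in the usual way.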
I don't currently have an exact comparison, but an older version of the application, running OpenGL, also maintains 60 Hz whether running in a window or a fullscreen window.
My question is: why is flipping or blitting so slow at very high resolutions with DirectX 11, particularly compared to OpenGL?
OpenGL does not appear to suffer the same performance impact. Changing the swap effect to DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL helps a lot, as does fullscreen exclusive mode; however, I'd like flexible display configuration without compromising performance.
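For completeness, the fullscreen exclusive switch I mentioned can also be done at runtime rather than at swap chain creation (sketch; swapChain is an already-created IDXGISwapChain*):

```cpp
// Enter fullscreen exclusive on the default output:
swapChain->SetFullscreenState(TRUE, nullptr);

// ... and later return to windowed mode:
swapChain->SetFullscreenState(FALSE, nullptr);
```

Switching this way shows the same performance difference for me as requesting it in the swap chain description.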