Quick test: DirectX 12 API Overhead Benchmark results


That difference is just crazy. And strange how DX11 single-threaded is faster than multi, I wonder why that is.
I seriously hope DX12 takes off running and finally puts an end to DX9 being used.
I seriously hope DX12 takes off running and finally puts an end to DX9 being used.
Amen! This should've happened earlier but hey...
Someone please correct me if I'm wrong...but these results (and those posted by Valerys) seem to be saying that developers should actually move towards MANTLE and not DX12. Can someone explain why DX12 is better than Mantle (and for the sake of argument, just ignore nvidia, I'm aware that their cards don't support Mantle...yet...)
I don't think Mantle will beat DX12 once the drivers and the new API mature. Plus, remember Mantle is AMD-only. I believe DX12 will improve a lot more over time as well. I'm also pretty sure people will adopt Win 10 very quickly, especially gamers.
If it improves the "draw calls", think about this... Climbing a huge snowy mountain in Skyrim, and the snowflakes that fall from the sky are thick and are actual objects as opposed to some cheap sprite trick. Each small snowflake amongst thousands of snowflakes falling and gathering on the side of the mountain, each with its own physics. Can you say "SSX"? 🙂 Brand new snowboarding games, and a realistic winter season. Rainfall... actual raindrops that gather and pool into a huge flood. Not the cheap tricks in COD or BF4 where the water fills from a trap door in the ground. A dense jungle where every leaf is a separate object, building a stem, into branches, into trees. Tiny bugs on each little leaf. Tiny bugs on the tiny bugs.
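Each of those objects drawn individually costs at least one draw call per frame, so this is really a budget question. A minimal, self-contained C++ sketch of that budget follows; the 60 fps target, the 50,000-flake count and the rounded calls-per-second rates are illustrative assumptions loosely based on numbers reported later in this thread, not measurements.

#include <cstdio>

int main() {
    // Illustrative assumptions only: a 60 fps target, 50,000 individually
    // drawn snowflakes, and rounded per-second rates in the ballpark of
    // figures posted in this thread.
    const double fps = 60.0;
    const double dx11_calls_per_sec = 1.0e6;   // ~1M calls/s, DX11 single-threaded
    const double dx12_calls_per_sec = 13.0e6;  // ~13M calls/s, DX12
    const double snowflakes = 50000.0;         // one draw call per flake, no instancing

    const double dx11_budget = dx11_calls_per_sec / fps;  // calls available per frame
    const double dx12_budget = dx12_calls_per_sec / fps;

    std::printf("DX11 budget: ~%.0f calls/frame, DX12 budget: ~%.0f calls/frame\n",
                dx11_budget, dx12_budget);
    std::printf("%.0f per-flake draws: %s the DX11 budget, %s the DX12 budget\n",
                snowflakes,
                snowflakes <= dx11_budget ? "within" : "over",
                snowflakes <= dx12_budget ? "within" : "over");
    return 0;
}

In practice engines batch or instance this kind of geometry precisely so that thousands of snowflakes do not turn into thousands of separate draw calls.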
Because DirectX 12 can be used at a mainstream level while Mantle will soon become obsolete: it has been integrated into the OpenGL ecosystem and became Vulkan, which will also be a mainstream option available to all graphics card vendors. Choosing one over the other is a matter of preference, the engineering team's ability to work with the API and, of course, the 'bonuses' that the graphics card vendors offer to developers. And don't forget that DirectX 12 itself, the required drivers and the OS that supports it are all still a work in progress; performance may improve before it is released.
Someone please correct me if I'm wrong...but these results (and those posted by Valerys) seem to be saying that developers should actually move towards MANTLE and not DX12. Can someone explain why DX12 is better than Mantle (and for the sake of argument, just ignore nvidia, I'm aware that their cards don't support Mantle...yet...)
Adoption rate, ease of development, ... many reasons to prefer DX12 aside from performance.
My results: DX11 single: 1,010,597; DX11 multi: 985,836; Mantle: 12,108,110; DX12: 12,958,613. Interesting to note that DX11 multi is actually slower, and for me DX12 is faster than Mantle (AMD probably hasn't optimized Mantle for the 7xxx series as much as the 290 series).
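Another way to read those figures is as CPU time per draw call. The short C++ snippet below simply inverts the rates quoted above; nothing in it is measured beyond those four numbers.

#include <cstdio>

int main() {
    // Draw-call rates copied from the post above.
    struct Result { const char* api; double calls_per_sec; };
    const Result results[] = {
        {"DX11 single-threaded",  1010597.0},
        {"DX11 multi-threaded",    985836.0},
        {"Mantle",               12108110.0},
        {"DX12",                 12958613.0},
    };
    for (const Result& r : results) {
        // Rough CPU time implied per call: one second divided by the rate.
        const double ns_per_call = 1.0e9 / r.calls_per_sec;
        std::printf("%-22s %12.0f calls/s -> ~%4.0f ns per call\n",
                    r.api, r.calls_per_sec, ns_per_call);
    }
    return 0;
}

That works out to roughly 1 µs of CPU time per call under DX11 versus well under 100 ns under Mantle and DX12.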
Moderator
Someone please correct me if I'm wrong...but these results (and those posted by Valerys) seem to be saying that developers should actually move towards MANTLE and not DX12. Can someone explain why DX12 is better than Mantle (and for the sake of argument, just ignore nvidia, I'm aware that their cards don't support Mantle...yet...)
Nvidia does not want to support Mantle. Nvidia supports DirectX because it is from Microsoft, which is one of their biggest resources.
My results: DX11 single: 1,010,597; DX11 multi: 985,836; Mantle: 12,108,110; DX12: 12,958,613. Interesting to note that DX11 multi is actually slower, and for me DX12 is faster than Mantle (AMD probably hasn't optimized Mantle for the 7xxx series as much as the 290 series).
In most tests I see, DX12 is faster than Mantle even on the 290X. I think it has to do with the number of cores; Mantle scales better on a higher number of cores/threads than DX12 does.
I was thinking "I'm gonna run this benchmark", then I realized I'm on Windows 7 and M$ doesn't update their old OSes to have new APIs anymore. I haven't used a Windows beta as my main OS since Vista, which oddly I had no issues with at all after my sound card drivers were fixed to work with it. Vista and 7 look and function so similarly that it feels like I've been using the same OS since 2006. If Win 10 is released in 2016, that'll be a decade of the same look/performance as far as OSes go for me; now I know how Mac users feel.
I got a ~1 million draw call boost in DX11 single-threaded when going from 1333 9-9-9-24-2T (20 GB/s, 65 ns) to 2400 10-12-12-31-1T (36 GB/s, 41 ns). With multi it was less, ~100k calls difference... still something 🤓 I see DX12 makes a nice boost.
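A rough back-of-the-envelope for how memory latency could feed into that: at the ~1,000 ns per DX11 draw call implied by the single-threaded rates in this thread, a few main-memory round trips per call are a visible slice of the budget. In the C++ sketch below only the 65 ns and 41 ns latencies come from the post above; the misses-per-call counts are invented purely to show scale.

#include <cstdio>

int main() {
    // Only the two latencies come from the post above; the ~1000 ns/call
    // baseline is the rough DX11 single-threaded rate seen in this thread,
    // and the misses-per-call counts are hypothetical.
    const double ns_per_call_before = 1000.0;        // ~1M calls/s baseline
    const double latency_saving_ns  = 65.0 - 41.0;   // saving per main-memory access

    const int misses_per_call[] = {2, 5, 10};
    for (int m : misses_per_call) {
        const double ns_per_call_after = ns_per_call_before - latency_saving_ns * m;
        std::printf("%2d misses/call -> ~%.2fM calls/s (up from ~1.00M)\n",
                    m, 1.0e9 / ns_per_call_after / 1.0e6);
    }
    return 0;
}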
It looks like NVIDIA is miles ahead of AMD in DX11 driver efficiency.
Unfortunately, yes it is. Sadly, several people exposed the issue on the AMD subforum only to have their threads closed. Hopefully now more people will take notice and AMD will finally do something about it. I get it, it's very good in DirectX 12 and Mantle, but we don't live in the future and the present belongs to DirectX 11.
edit: i'm stupid
That difference is just crazy. And strange how DX11 single-threaded is faster than multi, I wonder why that is.
When your program/game is using fewer CPU cores, the CPU can Turbo the cores in use higher than when all the cores are loaded. Take an AMD FX-8350, for example: it's a stock 4 GHz CPU. With all cores loaded it can Turbo up to 4.2 GHz; with 4 or fewer cores in use it can Turbo to 4.3 (or was it 4.4?). Same goes for Intel. You get the idea.
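For scale, here is that turbo explanation put against the draw-call numbers posted earlier in the thread. The clock figures are the ones quoted above (treating the few-core turbo as 4.3 GHz), and whether turbo is actually the cause is disputed below.

#include <cstdio>

int main() {
    // Figures quoted in this thread: FX-8350 all-core turbo ~4.2 GHz vs
    // ~4.3 GHz with few cores loaded; DX11 single 1,010,597 calls/s vs
    // multi 985,836 calls/s.
    const double all_core_ghz = 4.2, few_core_ghz = 4.3;
    const double single_calls = 1010597.0, multi_calls = 985836.0;

    const double clock_headroom = (few_core_ghz / all_core_ghz - 1.0) * 100.0;  // ~2.4%
    const double observed_gap   = (single_calls / multi_calls - 1.0) * 100.0;   // ~2.5%

    std::printf("Few-core turbo headroom: ~%.1f%%\n", clock_headroom);
    std::printf("DX11 single vs multi gap: ~%.1f%%\n", observed_gap);
    return 0;
}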
When your program/game is using fewer CPU cores, the CPU can Turbo the cores in use higher than when all the cores are loaded. Take an AMD FX-8350, for example: it's a stock 4 GHz CPU. With all cores loaded it can Turbo up to 4.2 GHz; with 4 or fewer cores in use it can Turbo to 4.3 (or was it 4.4?). Same goes for Intel. You get the idea.
That's not the reason. My CPU is overclocked at a fixed 3.8 GHz without Turbo, yet see the results above.