Sniper Elite 4 will get DirectX 12 support

data/avatar/default/avatar30.webp
It's all great that DX12 will be supported at launch, but will it do anything? Most games that have DX12 right now either run worse or exactly the same. There are a few titles that improve performance a bit on AMD cards, but do nothing on Nvidia cards. As it stands, DX12 is not the revolutionary tech we all expected it to be, and publishers/devs announcing DX12 support is more for marketing hype than anything else. On the other hand, Vulkan is much more exciting; if they announced Vulkan support I'd be on board with it, but at the moment, DX12, meh.
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
Good thing they included dx11 as well so we can compare.
https://forums.guru3d.com/data/avatars/m/88/88351.jpg
Most games that have DX12 right now either run worse or exactly the same. There are a few titles that improve performance a bit on AMD cards, but do nothing on Nvidia cards.
Maybe not more fps, but better CPU usage? So the card type doesn't matter; I thought the main benefit of DX12 is more consistent CPU utilisation. Of course it would be nice if DX12 always resulted in higher fps instead of sometimes worse, though.
data/avatar/default/avatar20.webp
Maybe not more fps, but better CPU usage? So the card type doesn't matter; I thought the main benefit of DX12 is more consistent CPU utilisation. Of course it would be nice if DX12 always resulted in higher fps instead of sometimes worse, though.
One of the weird things I noticed is increased CPU usage with DX12, but no actual performance improvement despite that increased usage. Which raises the question: why should I stress my CPU harder if I don't gain anything from it? Increased power consumption and heat generation with no performance improvement doesn't really put DX12 in a good light.
https://forums.guru3d.com/data/avatars/m/263/263845.jpg
Higher minimum fps and an overall smoother experience - that is what DX12 does on my end.
https://forums.guru3d.com/data/avatars/m/87/87487.jpg
You know, at one time I would have been excited at the announcement of a game getting DirectX 12 support, but that was before its release, when it was hyped as the Second Coming for games. Here and now the reality is that it is, IMO, a disappointing API offering little or no improvement over DirectX 11 (and in some cases worse performance or fewer graphics options, as is the case with Rise of the Tomb Raider, which loses VXAO support) and certainly no enhanced visual effects. It's hard not to feel underwhelmed by it, to be honest.

Mind you, I feel exactly the same way about Vulkan, which in DOOM actually runs worse on my PC than OpenGL at maxed-out 4K settings and certainly feels, for want of a better word, laggier in play. I guess the truth is that NVIDIA enhanced the hell out of DirectX 11, certainly compared with AMD, who are generally considered to have a higher overhead in their drivers, such that it makes DirectX 12 look a lot less impressive, particularly on high-end systems with good CPUs and fast GPUs.

Anyway, not excited at all about Sniper Elite 4 having DirectX 12 support. I suspect initially that DirectX 11 will prove to be more stable and offer better framerates on my PC. We'll see. Might even buy the game for my PS4 Pro rather than PC.
data/avatar/default/avatar34.webp
It's all great that DX12 will be supported at launch, but will it do anything? Most games that have DX12 right now either run worse or exactly the same. There are a few titles that improve performance a bit on AMD cards, but do nothing on Nvidia cards. As it stands, DX12 is not the revolutionary tech we all expected it to be, and publishers/devs announcing DX12 support is more for marketing hype than anything else. On the other hand, Vulkan is much more exciting; if they announced Vulkan support I'd be on board with it, but at the moment, DX12, meh.
I agree 100%. I feel like we've all been duped and it's an elephant in the room that no one (hardware reviewers as well as users) is really talking about. The promise alluded to by MS and hardware sites was reduced draw-call overhead leading to improved performance, but so far the gains have been negligible, and performance has even regressed in some cases, whether the user has an AMD or Nvidia card. So, is it the developers' fault, in that they are unable to program games to utilise the benefits of DX12 effectively, or is it that the gains are not all they're cracked up to be? It would be great if sites like Hilbert's could provide further analysis. In saying this, I'm sure some others here probably have a better understanding than I do.
https://forums.guru3d.com/data/avatars/m/239/239932.jpg
DX12 gave me better minimum fps on my i7 920 in Tomb Raider. Depends on the studio, I guess. There's Forza Horizon 3 as well, but the idiots chose to protect the game with real-time decryption, so there go all the performance benefits.
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
You know, at one time I would have been excited at the announcement of a game getting DirectX 12 support, but that was before its release, when it was hyped as the Second Coming for games. Here and now the reality is that it is, IMO, a disappointing API offering little or no improvement over DirectX 11 (and in some cases worse performance or fewer graphics options, as is the case with Rise of the Tomb Raider, which loses VXAO support) and certainly no enhanced visual effects. It's hard not to feel underwhelmed by it, to be honest. Mind you, I feel exactly the same way about Vulkan, which in DOOM actually runs worse on my PC than OpenGL at maxed-out 4K settings and certainly feels, for want of a better word, laggier in play. I guess the truth is that NVIDIA enhanced the hell out of DirectX 11, certainly compared with AMD, who are generally considered to have a higher overhead in their drivers, such that it makes DirectX 12 look a lot less impressive, particularly on high-end systems with good CPUs and fast GPUs. Anyway, not excited at all about Sniper Elite 4 having DirectX 12 support. I suspect initially that DirectX 11 will prove to be more stable and offer better framerates on my PC. We'll see. Might even buy the game for my PS4 Pro rather than PC.
I agree 100%. I feel like we've all been duped and it's an elephant in the room that no one (hardware reviewers as well as users) is really talking about. The promise alluded to by MS and hardware sites was reduced draw-call overhead leading to improved performance, but so far the gains have been negligible, and performance has even regressed in some cases, whether the user has an AMD or Nvidia card. So, is it the developers' fault, in that they are unable to program games to utilise the benefits of DX12 effectively, or is it that the gains are not all they're cracked up to be? It would be great if sites like Hilbert's could provide further analysis. In saying this, I'm sure some others here probably have a better understanding than I do.
As an API, DX12 is surely better. I'm not sure that any NVIDIA GPU before Volta is really suited for it, though. The draw-call performance gain is definitely real, and the API is much faster in any use case that involves heavy reliance on the CPU (like emulation; try Dolphin with and without DX12 and you'll know what I mean). It was also completely necessary to develop it at a lower level, due to the massive cruft accrued in traditional DX11 drivers, which had reached such a level that it was impossible for developers to know what the "fast path" through the driver was for certain operations. Read this post, it's quite enlightening on the reasons why the DX11 situation cannot go on for much longer. [spoiler]
Many years ago, I briefly worked at NVIDIA on the DirectX driver team (internship). This was the Vista era, when a lot of people were busy with the DX10 transition, the hardware transition, and the OS/driver model transition. My job was to get games that were broken on Vista, dismantle them from the driver level, and figure out why they were broken. While I am not at all an expert on driver matters (and actually sucked at my job, to be honest), I did learn a lot about what games look like from the perspective of a driver and kernel.

* The first lesson is: nearly every game ships broken. We're talking major AAA titles from vendors who are everyday names in the industry. In some cases, we're talking about blatant violations of API rules - one D3D9 game never even called BeginFrame/EndFrame. Some are mistakes or oversights - one shipped bad shaders that heavily impacted performance on NV drivers. These things were day-to-day occurrences that went into a bug tracker. Then somebody would go in, find out what the game screwed up, and patch the driver to deal with it. There are lots of optional patches already in the driver that are simply toggled on or off as per-game settings, and then hacks that are more specific to games - up to and including total replacement of the shipping shaders with custom versions by the driver team. Ever wondered why nearly every major game release is accompanied by a matching driver release from AMD and/or NVIDIA? There you go.

* The second lesson: the driver is gigantic. Think 1-2 million lines of code dealing with the hardware abstraction layers, plus another million per API supported. The backing function for Clear in D3D9 was close to a thousand lines of just logic dealing with how exactly to respond to the command. It'd then call out to the correct function to actually modify the buffer in question. The level of complexity internally is enormous and winding, and even inside the driver code it can be tricky to work out how exactly you get to the fast-path behaviors. Additionally, the APIs don't do a great job of matching the hardware, which means that even in the best cases the driver is covering up for a LOT of things you don't know about. There are many, many shadow operations and shadow copies of things down there.

* The third lesson: it's unthreadable. The IHVs sat down starting from maybe circa 2005 and built tons of multithreading into the driver internally. They had some of the best kernel/driver engineers in the world to do it, and literally thousands of full-blown real-world test cases. They squeezed that system dry, and within the existing drivers and APIs it is impossible to get more than trivial gains out of any application-side multithreading. If Futuremark can only get 5% in a trivial test case, the rest of us have no chance.

* The fourth lesson: multi-GPU (SLI/CrossFireX) is ****ing complicated. You cannot begin to conceive of the number of failure cases that are involved until you see them in person. I suspect that more than half of the total software effort within the IHVs is dedicated strictly to making multi-GPU setups work with existing games. (And I don't even know what the hardware side looks like.) If you've ever tried to independently build an app that uses multi-GPU - especially if, god help you, you tried to do it in OpenGL - you may have discovered this insane rabbit hole. There is ONE fast path, and it's the narrowest path of all. Take lessons 1 and 2, and magnify them enormously.

Deep breath.

Ultimately, the new APIs are designed to cure all four of these problems.

* Why are games broken? Because the APIs are complex, and validation varies from decent (D3D11) to poor (D3D9) to catastrophic (OpenGL). There are lots of ways to hit slow paths without knowing anything has gone awry, and often the driver writers already know what mistakes you're going to make and are dynamically patching in workarounds for the common cases.

* Maintaining the drivers with the current wide surface area is tricky. Although AMD and NV have the resources to do it, the smaller IHVs (Intel, PowerVR, Qualcomm, etc.) simply cannot keep up with the necessary investment. More importantly, explaining to devs the correct way to write their render pipelines has become borderline impossible. There are too many failure cases. It's been understood for quite a few years now that you cannot max out the performance of any given GPU without having someone from NVIDIA or AMD physically grab your game source code, load it on a dev driver, and do a hands-on analysis. These are the vanishingly few people who have actually seen the source to a game, the driver it's running on, the Windows kernel it's running on, and the full specs for the hardware. Nobody else has that kind of access or engineering ability.

* Threading is just a catastrophe and is being rethought from the ground up. This requires a lot of the abstractions to be stripped away or retooled, because the old ones required too much driver intervention to be properly threadable in the first place.

* Multi-GPU is becoming explicit. For the last ten years, it has been AMD and NV's goal to make multi-GPU setups completely transparent to everybody, and it's become clear that for some subset of developers, this is just making our jobs harder. The driver has to apply imperfect heuristics to guess what the game is doing, and the game in turn has to do peculiar things in order to trigger the right heuristics. Again, for the big games somebody sits down and matches the two manually.

Part of the goal is simply to stop hiding what's actually going on in the software from game programmers. Debugging drivers has never been possible for us, which meant a lot of poking and prodding and experimenting to figure out exactly what it is that is making the render pipeline of a game slow. The IHVs certainly weren't willing to disclose these things publicly either, as they were considered critical to competitive advantage. (Sure they are, guys. Sure they are.) So the game is guessing what the driver is doing, the driver is guessing what the game is doing, and the whole mess could be avoided if the drivers just wouldn't work so hard trying to protect us.

So why didn't we do this years ago? Well, there are a lot of politics involved (cough Longs Peak) and some hardware aspects, but ultimately what it comes down to is that the new models are hard to code for. Microsoft and the ARB never wanted to subject us to manually compiling shaders against the correct render states, setting the whole thing invariant, configuring heaps and tables, etc. Segfaulting a GPU isn't a fun experience. You can't trap that in a (user-space) debugger. So... the subtext that a lot of people aren't calling out explicitly is that this round of new APIs has been done in cooperation with the big engines. The Mantle spec is effectively written by Johan Andersson at DICE, and the Khronos Vulkan spec basically pulls Aras P at Unity, Niklas S at Epic, and a couple of guys at Valve into the fold.

Three out of those four just made their engines public and free with minimal backend financial obligation.

Now there's nothing wrong with any of that, obviously, and I don't think it's even the big motivating raison d'etre of the new APIs. But there's a very real message that if these APIs are too challenging to work with directly, well, the guys who designed the API also happen to run very full-featured engines requiring no financial commitments. So I think that's served to considerably smooth the politics involved in rolling these difficult-to-work-with APIs out to the market, encouraging organizations that would have been otherwise reticent to do so. [Edit/update] I'm definitely not suggesting that the APIs have been made artificially difficult, by any means - the engineering work is solid in its own right. It's also become clear, since this post was originally written, that there's a commitment to continuing DX11 and OpenGL support for the near future. That also helped the decision to push these new systems out, I believe.

The last piece of the puzzle is that we ran out of new user-facing hardware features many years ago. Ignoring raw speed, what exactly is the user-visible or dev-visible difference between a GTX 480 and a GTX 980? A few limitations have been lifted (notably in compute), but essentially they're the same thing. MS, for all practical purposes, concluded that DX was a mature, stable technology that required only minor work and mostly disbanded the teams involved. Many of the revisions to GL have been little more than API repairs. (A GTX 480 runs full-featured OpenGL 4.5, by the way.) So the reason we're seeing new APIs at all stems fundamentally from Andersson hassling the IHVs until AMD woke up, smelled competitive advantage, and started paying attention. That essentially took a three-year lag time from when we got hardware to the point that compute could be directly integrated into the core of a render pipeline, which is considered normal today but was bluntly revolutionary at production scale in 2012. It's a lot of small things adding up to a sea change, with key people pushing on the right people for the right things.

Phew. I'm no longer sure what the point of that rant was, but hopefully it's somehow productive that I wrote it. Ultimately the new APIs are the right step, and they're retroactively useful to old hardware, which is great. They will be harder to code. How much harder? Well, that remains to be seen. Personally, my take is that MS and the ARB always had the wrong idea. Their idea was to produce a nice, pretty-looking front end and deal with all the awful stuff quietly in the background. Yeah, it's easy to code against, but it was always a bitch and a half to debug or tune. Nobody ever took that side of the equation into account. What has finally been made clear is that it's okay to have difficult-to-code APIs, if the end result just works. And that's been my experience so far in retooling: it's a pain in the ass, requires widespread revisions to engine code, forces you to revisit a lot of assumptions, and generally requires a lot of infrastructure before anything works. But once it's up and running, there are no surprises. It works smoothly, you're always on the fast path, and anything that IS slow is in your OWN code, which can be analyzed by common tools. It's worth it.

(See this post by Unity's Aras P for more thoughts. I have a response comment in there as well.)
[/spoiler]
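To make the threading point concrete, here is a rough C++ sketch of the D3D12 submission model the quoted post describes - my own illustration, not something from that post. It assumes a D3D12-capable machine with the Windows 10 SDK, links d3d12.lib, and strips out error handling and all actual rendering: each worker thread records its own command list on its own allocator, and only the final ExecuteCommandLists call is serialized.

[code]
// Hypothetical sketch only: headless D3D12 setup, no swap chain, no error handling.
// Requires the Windows 10 SDK; link d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    D3D12_COMMAND_QUEUE_DESC queueDesc = {};           // direct queue, default flags
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    const int workers = 4;
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);

    // One allocator + command list per thread: there is no shared "immediate
    // context" to fight over, so recording scales across cores.
    for (int i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i) {
        threads.emplace_back([&lists, i] {
            // A real renderer would bind root signatures/PSOs and issue draws
            // here; the point is that each thread records independently.
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    // Submission is the only serialized step.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
[/code]

Under DX11, by contrast, all of that recording funnels through a single immediate context (or deferred contexts the driver still serializes internally), which is exactly the "unthreadable" situation the quoted post describes.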
https://forums.guru3d.com/data/avatars/m/232/232349.jpg
Well HOT DAMN PrMinisterGR........ Whoa! Never knew that much about it all....., but kind of figured it was similar. No one ever wants to admit how F'd up things truly are. When all we want to do is just enjoy them luscious graphics, non scripted blood splatters all the while at a blisteringly fast 100+fps!.!
https://forums.guru3d.com/data/avatars/m/68/68055.jpg
Let's face it, it boils down to shooting Hitler's balls off, so what new does DX12 offer in that regard?
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
Read this post, it's quite enlightening on the reasons why the DX11 situation cannot go on for much longer. [spoiler][/spoiler]
Great insight you posted there, much appreciated. Sadly, if the problems have had such a huge impact, with broken game code that violates APIs and crappy shaders coming from game devs, I still don't think making those same devs program for DX12 now (and even explicit mGPU) will improve things for anybody but M$, who no longer has to hire such large programmer teams. Bad dev programming, and then handing those devs more and more critical work - it's like asking for a second kick to the crown jewels.
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
Figures; then again that's how it looked from the first screenshots and videos, so not too surprising, though the X-Ray "kill" camera has been tweaked a bit further. (Probably for both vehicles and human targets.) Hell, the pre-order-for-Hitler mission bonus is still pretty much the same, only with a different map, and that's been there since V.2 😀 Heh, probably going to have an assortment of weapon DLC too, with little to no difference from the default selection besides some camo changes, and I guess for some reason or another the guns you'd expect to see in this setting might end up there too. (The Kar98k was one of them for V.3, if I remember correctly.)

EDIT: There are probably some enhancements to the game itself too, of course; V.3 added those longer-distance targets, even if the main ones were set up specifically for such shots. No idea about visuals or engine changes; DX12 is probably going to be mainly for performance, although the upcoming RS2 / Creators Update for Win10 should enable Shader Model 6.0, though as for what that will allow and when games will actually use it, that's still an unknown I guess. (DirectX 11.x is still active too, 11.4 now with RS1, though I'm not sure if any games are using that, or even 11.3 or 11.2, though Frostbite and a few other engines do support 11.1 at least.)

GPU driver differences between AMD and Nvidia also make comparisons a bit more problematic. AMD, as far as I'm aware, still has a bit of a CPU overhead issue with D3D11, and their OGL implementation is also a bit behind, so it looks like they're seeing major gains in both Vulkan and D3D12 as a result, though they have also focused more on these APIs and their own earlier Mantle variant. (Compared to D3D11 and OGL 4.x.) Probably going to be a while still before we see the full advantage of these low-level APIs.

Unsure, but I suppose Rebellion is still teaming up with AMD, and that probably means AMD wants DX12 support for showcasing their GPUs, whereas with Nvidia they might want Gameworks features implemented. (Though there's probably more to it than just requests to add this or that feature.)
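On the Shader Model 6.0 point: a game would presumably probe for it at runtime rather than assume it. A small, hypothetical sketch (assuming a valid ID3D12Device* and a Windows 10 SDK recent enough to define D3D_SHADER_MODEL_6_0):

[code]
// Hypothetical sketch: probing for Shader Model 6.0 support at runtime.
#include <d3d12.h>

bool SupportsShaderModel6(ID3D12Device* device)
{
    // Initialize to the highest model the app understands; the runtime lowers it
    // if the hardware/driver can't do SM 6.0, and fails the call outright if the
    // installed runtime has never heard of SM 6.0 at all.
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_0 };
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL,
                                           &sm, sizeof(sm))))
        return false;
    return sm.HighestShaderModel >= D3D_SHADER_MODEL_6_0;
}
[/code]

Whether games will actually ship SM 6.0 shader paths any time soon is, as noted above, still an unknown.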
data/avatar/default/avatar32.webp
It's all great that DX12 will be supported at launch, but will it do anything? Most games that have DX12 right now either run worse or exactly the same. There are a few titles that improve performance a bit on AMD cards, but do nothing on Nvidia cards. As it stands, DX12 is not the revolutionary tech we all expected it to be, and publishers/devs announcing DX12 support is more for marketing hype than anything else. On the other hand, Vulkan is much more exciting; if they announced Vulkan support I'd be on board with it, but at the moment, DX12, meh.
Well, I have to eat my own words here: early reviews show that DX12 actually works (and by works I mean tangible performance gains, not just an option that you enable but that doesn't do anything), and it works for both AMD and Nvidia. Color me impressed, and in cases like these I don't mind being proven wrong. Kudos.