Review: Star Wars Battlefront II PC graphics performance analysis

Haha, DRM. Stop buying EA games!!!
Silva:

Another game even the press can't benchmark because of stupid DRM. Will not buy. Thanks for the hard work HH!
I think DRM is the smallest problem this game has.
Why can't Nvidia get the same performance with DX12, and instead gets even lower performance, while in other DX12 games Nvidia gets the same or better performance? It smells fishy.
Another game that performs better with DirectX 11. I wonder why some developers still insist on wasting time and money implementing low-level APIs despite all the problems they bring to PC games.
Fury-X still performs pretty well in this game, even at higher resolutions.
Keesberenburg:

Why can't Nvidia get the same performance with DX12, and instead gets even lower performance, while in other DX12 games Nvidia gets the same or better performance? It smells fishy.
With DX11, Nvidia controls something like 90% of the work being done on the GPU. This allows them to tweak the dispatch queues and everything else to maximize full use of their pipeline. With DX12, the majority is entirely on the developer. Nvidia's entire architecture is essentially based on that dispatch/scheduler too; the magic that has allowed them to keep a performance/power lead over AMD with a significantly smaller hardware scheduler happens entirely in the software/driver stack.

Some devs are going to deep-dive into Nvidia's architecture and try to optimize performance; other devs are going to phone it in, hit some internal threshold (example: "we want to hit a 1080 Ti @ 4K @ 60 fps average on our internal benchmark") and then stop optimizing. It's not like further optimization is going to significantly increase sales, a majority of their customers don't even have access to DX12, and the nature of optimization is typically exponential in terms of time required: you can spend 100-200 man-hours getting 20 fps out of an architecture, but the next 20 fps might require thousands of hours.

AMD, on the other hand, is in every Xbox, so most devs are far more familiar with their architecture. Being in the consoles also requires developers to spend more time optimizing in general, and since the code path for AMD/Nvidia splits at some point, AMD obviously gets far more attention, which bleeds over to their GPUs in the PC space.

The full benefits of DX12 won't come until SM6.0 is out and DX11 is completely eliminated from the development scene. By that time you won't even recognize the benefits, because you won't have DX11 variants of the games to compare them to.
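To make "the majority is entirely on the developer" concrete, here is a minimal D3D12 sketch (illustrative only, not DICE's code; error handling omitted): under DX12 the application itself creates the command queue, records and submits command lists, and fences CPU/GPU work, all of which the driver handles for you under DX11.

```cpp
// Minimal D3D12 submission sketch (illustrative only, not DICE/Frostbite code).
// Link against d3d12.lib; error handling omitted for brevity.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Under DX12 the application creates and owns the GPU work queue...
    D3D12_COMMAND_QUEUE_DESC queueDesc = {};
    queueDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // ...records its own command lists (potentially on many threads)...
    ComPtr<ID3D12CommandAllocator> allocator;
    device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                   IID_PPV_ARGS(&allocator));
    ComPtr<ID3D12GraphicsCommandList> cmdList;
    device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                              allocator.Get(), nullptr, IID_PPV_ARGS(&cmdList));
    // (draw calls, resource barriers and state changes would be recorded here)
    cmdList->Close();

    // ...decides when that work is submitted...
    ID3D12CommandList* lists[] = { cmdList.Get() };
    queue->ExecuteCommandLists(1, lists);

    // ...and synchronizes CPU and GPU itself with an explicit fence.
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
    queue->Signal(fence.Get(), 1);
    HANDLE done = CreateEvent(nullptr, FALSE, FALSE, nullptr);
    fence->SetEventOnCompletion(1, done);
    WaitForSingleObject(done, INFINITE);
    CloseHandle(done);
    return 0;
}
```

Under DX11 every one of those decisions is made inside the driver, which is exactly where Nvidia's per-game tuning lives.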
Nima V:

Another game that performs better with DirectX 11. I wonder why some developers still insist on wasting time and money implementing low-level APIs despite all the problems they bring to PC games.
Only Intel bosses believe that development should stop and things should remain the same forever. I don't understand people who want us to still be using DX11 in 2030, just like Intel wanted us to still be using quad cores in 2030. What would be the point of DX12 if it were the same as DX11, only with some superficial changes? The current GPU architectural pathway has already reached the end of the road anyway; gigantic monolithic designs aren't viable past a certain point. Things need to change, both in hardware and in software. It'll just take its time. Have a little faith.
4K benchmarks with a 980ti OC?
After some more testing I came to the conclusion that Ultra shadows look a lot better than HFTS/PCSS for some reason. They also allow me to bump the resolution scaling to 150% @ 1440p, 60 FPS vsynced, with everything else maxed. I do, however, get some drops into the 50s in some multiplayer scenarios, but it looks so damn good that I can live with it so far; if I dropped the scaling to 135-140% it would likely be a solid 60 FPS all the time. Before this I played maxed with HFTS shadows and just 100% scaling, and it looked a LOT worse. o_O
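For a sense of why 150% scaling at 1440p is so demanding, here is a small back-of-the-envelope sketch (my own illustration, assuming the slider scales each axis independently, as it does in other Frostbite titles):

```cpp
// Effective render resolution under a per-axis resolution-scale slider.
// Assumption (illustrative): the percentage applies to width and height
// independently, as in other Frostbite titles.
#include <cstdio>

int main() {
    const int baseW = 2560, baseH = 1440;            // native 1440p output
    const double scales[] = { 1.00, 1.35, 1.40, 1.50 };

    for (double s : scales) {
        const int w = static_cast<int>(baseW * s + 0.5);
        const int h = static_cast<int>(baseH * s + 0.5);
        const double ratio = (static_cast<double>(w) * h) /
                             (static_cast<double>(baseW) * baseH);
        std::printf("%3.0f%% -> %dx%d (%.2fx the pixels of native 1440p)\n",
                    s * 100.0, w, h, ratio);
    }
    return 0;
}
```

150% works out to 3840x2160, i.e. 2.25x the pixels and essentially the cost of rendering at 4K, which is consistent with the occasional dips into the 50s in heavy multiplayer scenes.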
There is little difference between 4, 6 and 8 cores. Would this mean the game is only dual-threaded?
Clawedge:

There is little difference between 4, 6 and 8 cores. Would this mean the game is only dual-threaded?
No, it's just that single player doesn't rely on the CPU that much at all. Online, you would see the difference far more. BF1 thrashes my CPU online, but barely uses half of it in single player. And the beta for this used all 8 threads quite evenly, usually sitting around 85% the whole time. The one great thing about any DICE Frostbite engine since Bad Company 2 on PC is that they have been massively multithreaded.
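To illustrate why a heavily multithreaded engine loads the CPU so much harder online than in single player, here is a toy sketch (my own illustration, not Frostbite code; the entity counts and per-entity work are made up): per-entity update work is split across all available hardware threads, so a sparse single-player scene leaves the workers mostly idle while a 64-player match keeps every core busy.

```cpp
// Toy job split, illustrative only: more entities/players means more work
// spread across the same worker threads, so core count matters far more
// online than in a sparse single-player scene. Not DICE/Frostbite code.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

static void simulateEntities(int begin, int end, std::atomic<long long>& total) {
    long long acc = 0;
    for (int i = begin; i < end; ++i)
        acc += (i * 31LL) % 97;                  // stand-in for per-entity update cost
    total += acc;
}

static void runFrame(const char* label, int entityCount) {
    const int workers = static_cast<int>(
        std::max(1u, std::thread::hardware_concurrency()));
    const int chunk = (entityCount + workers - 1) / workers;
    std::atomic<long long> total{0};
    std::vector<std::thread> pool;

    for (int t = 0; t < workers; ++t) {
        const int begin = t * chunk;
        const int end = std::min(entityCount, begin + chunk);
        if (begin < end)
            pool.emplace_back(simulateEntities, begin, end, std::ref(total));
    }
    for (auto& th : pool) th.join();
    std::printf("%s: %d entities spread over up to %d worker threads\n",
                label, entityCount, workers);
}

int main() {
    runFrame("single-player scene", 2000);       // few entities: threads finish early
    runFrame("64-player multiplayer", 200000);   // many entities: all cores stay busy
    return 0;
}
```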
Denial:

With DX11, Nvidia controls something like 90% of the work being done on the GPU. This allows them to tweak the dispatch queues and everything else to maximize full use of their pipeline. With DX12, the majority is entirely on the developer. Nvidia's entire architecture is essentially based on that dispatch/scheduler too; the magic that has allowed them to keep a performance/power lead over AMD with a significantly smaller hardware scheduler happens entirely in the software/driver stack.

Some devs are going to deep-dive into Nvidia's architecture and try to optimize performance; other devs are going to phone it in, hit some internal threshold (example: "we want to hit a 1080 Ti @ 4K @ 60 fps average on our internal benchmark") and then stop optimizing. It's not like further optimization is going to significantly increase sales, a majority of their customers don't even have access to DX12, and the nature of optimization is typically exponential in terms of time required: you can spend 100-200 man-hours getting 20 fps out of an architecture, but the next 20 fps might require thousands of hours.

AMD, on the other hand, is in every Xbox, so most devs are far more familiar with their architecture. Being in the consoles also requires developers to spend more time optimizing in general, and since the code path for AMD/Nvidia splits at some point, AMD obviously gets far more attention, which bleeds over to their GPUs in the PC space.

The full benefits of DX12 won't come until SM6.0 is out and DX11 is completely eliminated from the development scene. By that time you won't even recognize the benefits, because you won't have DX11 variants of the games to compare them to.
Direct Disaster 12. If Nvidia is getting lower performance with DX12 enabled, then DX12 is not ready at this moment, and that's unfair.
Considering what DX12 was originally meant to do (remove the draw-call bottleneck and allow orders of magnitude more draw calls than DX11), and the demos for it at the time, I am extremely unimpressed by its performance improvement over DX11 in almost everything over the last few years. Why are game engines not taking advantage of the massive increase in draw calls? I don't get it.
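To put the draw-call argument in rough numbers (the per-call costs below are purely illustrative assumptions, not measurements from this game or the article): with a fixed CPU cost per draw call, a 16.7 ms frame budget caps how many draws a single DX11-style submission thread can issue, and lowering the per-call overhead while recording on several threads, which is what DX12 was designed to allow, raises that ceiling dramatically.

```cpp
// Rough draw-call budget arithmetic. The per-call CPU costs are assumed,
// illustrative placeholders, not measured numbers from this game.
#include <cstdio>

int main() {
    const double frameBudgetUs = 1000000.0 / 60.0;   // ~16,667 us per frame at 60 fps

    struct Scenario { const char* name; double usPerCall; int recordThreads; };
    const Scenario scenarios[] = {
        { "DX11-style, 1 submission thread (assumed ~10 us/call)", 10.0, 1 },
        { "DX12-style, 1 thread (assumed ~1 us/call)",              1.0, 1 },
        { "DX12-style, 4 recording threads (assumed ~1 us/call)",   1.0, 4 },
    };

    for (const Scenario& s : scenarios) {
        const double callsPerFrame =
            (frameBudgetUs / s.usPerCall) * s.recordThreads;
        std::printf("%-58s -> ~%.0f draw calls per frame\n", s.name, callsPerFrame);
    }
    return 0;
}
```

Whether a game actually spends that headroom on more unique objects, rather than rendering the same scenes at lower CPU cost, is entirely a content and engine decision, which is a large part of why the promised difference rarely shows up in benchmarks.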
geogan:

Considering what DX12 was originally meant to do (remove the draw-call bottleneck and allow orders of magnitude more draw calls than DX11), and the demos for it at the time, I am extremely unimpressed by its performance improvement over DX11 in almost everything over the last few years. Why are game engines not taking advantage of the massive increase in draw calls? I don't get it.
Because... ALIENS!!!
I see the latest Nvidia drivers provide SLI compatibility. Hilbert, can you test two 1080 Ti's to demonstrate how well this game scales with two cards?

The 4K results just prove there's no point in buying a 4K monitor at the moment: even if there were a 100 Hz or better 4K monitor, Nvidia's best cannot provide the refresh rate. I have a perfectly OK Dell U2711 monitor, but I bought it 7 years ago and feel frustrated that I'm being 'blocked' from upgrading by poor card performance.

I know this would be more work for you, but would it be more interesting for us to have 3440x1440 (or thereabouts) monitor results? Isn't a 100 Hz ultrawide a more viable gaming monitor than a 60 Hz 4K one? Who would buy a 60 Hz 4K monitor when the next-gen Nvidia top-of-the-line cards should finally provide enough power for a 4K screen at a 100 Hz refresh rate, or at least the Ti version hopefully will? Please don't tell me that you want a 4K 165 Hz monitor, as you'll be dead and buried by the time Nvidia gives us the card to do this.
Administrator
ChisChas:

I see the latest Nvidia drivers provide SLI compatibility. Hilbert, can you test two 1080 Ti's to demonstrate how well this game scales with two cards?

The 4K results just prove there's no point in buying a 4K monitor at the moment: even if there were a 100 Hz or better 4K monitor, Nvidia's best cannot provide the refresh rate. I have a perfectly OK Dell U2711 monitor, but I bought it 7 years ago and feel frustrated that I'm being 'blocked' from upgrading by poor card performance.

I know this would be more work for you, but would it be more interesting for us to have 3440x1440 (or thereabouts) monitor results? Isn't a 100 Hz ultrawide a more viable gaming monitor than a 60 Hz 4K one? Who would buy a 60 Hz 4K monitor when the next-gen Nvidia top-of-the-line cards should finally provide enough power for a 4K screen at a 100 Hz refresh rate, or at least the Ti version hopefully will? Please don't tell me that you want a 4K 165 Hz monitor, as you'll be dead and buried by the time Nvidia gives us the card to do this.
I had that planned, but at the allowed pace of four hardware changes per day I am slowly losing my patience. I still need to test four cards and then three setups with different processors, and I am already locked out of the game until tomorrow.
Kaarme:

Only Intel bosses believe that development should stop and things should remain the same forever. I don't understand people who want us to still be using DX11 in 2030, just like Intel wanted us to still be using quad cores in 2030. What would be the point of DX12 if it were the same as DX11, only with some superficial changes? The current GPU architectural pathway has already reached the end of the road anyway; gigantic monolithic designs aren't viable past a certain point. Things need to change, both in hardware and in software. It'll just take its time. Have a little faith.
Don't get me wrong, I don't want PC games to remain on DX11 forever; I just can't see the point of DX12. Improvement in performance? Better visuals? Less work or cost for developers to implement? What benefit has DX12 had for PC games? In my opinion this useless API has had enough time to show what it can do, and now it's time for a new API. Maybe Microsoft should think about an improved version of DX11.
Nima V:

Don't get me wrong, I don't want PC games to remain on DX11 forever; I just can't see the point of DX12. Improvement in performance? Better visuals? Less work or cost for developers to implement? What benefit has DX12 had for PC games? In my opinion this useless API has had enough time to show what it can do, and now it's time for a new API. Maybe Microsoft should think about an improved version of DX11.
He pretty much means that DX12 and the like are long-term solutions. I'm repeating myself from another thread, but the API has already shown what it can do in synthetic benchmarks. Just like everything else in PC gaming (uncapped frame rates, multicore support, mGPU, ultrawide, etc.), it's entirely up to the developer to take advantage of it.
The new 388.31 drivers do not further optimize GeForce graphics card performance; aside from the 1080/1080 Ti and Titan Xp in CPU-bound 1920x1080, these numbers have been updated.
@Hilbert Hagedoorn Great tests, as always. Have you tested with the Titan Xp or the Titan X (Pascal)? The charts say Titan X (Pascal), while you mentioned the Titan Xp in the quote above. Which one did you actually test with, and do the driver improvements apply to both or just one?
Radeon 17.11.2 drivers do not bring extra performance to this game.
Why am I not surprised.