Quantum Break coming to PC - DirectX 12 only - Screenshots - Specs

Feel sorry for those with a 970 and 980. Now Nvidia has screwed you guys over. Welcome to the 700 series.
How has Nvidia screwed us over?
I fail to see this as well...
Maxwell is still missing a couple of DX12 features, like Asynchronous Shaders, which Nvidia are trying to work around with drivers. AMD wins hands down on feature level, but that doesn't necessarily translate into better performance. No, where you guys will be screwed is with GameWorks, because Nvidia's slogan seems to be "Buy a GPU every year"™. I don't care what team you root for, you gotta give credit to AMD for GCN. A 7970 with 6GB still holds up quite well today. Meanwhile, the 680...
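As an aside, "asynchronous compute" at the D3D12 API level just means submitting work on a separate compute queue alongside the graphics (direct) queue; whether the GPU actually overlaps the two is left to the hardware and driver, which is exactly where Maxwell and GCN differ. A minimal C++ sketch, assuming the Windows 10 SDK and linking against d3d12.lib, not tied to any particular game:

```cpp
// Minimal sketch: create a D3D12 device plus a graphics (direct) queue and a
// separate compute queue. Work submitted to the compute queue *may* overlap
// with graphics work; D3D12 does not guarantee concurrent execution.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12-capable adapter found.\n");
        return 1;
    }

    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // compute + copy only

    ComPtr<ID3D12CommandQueue> directQueue, computeQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    std::printf("Created a direct queue and an independent compute queue.\n");
    return 0;
}
```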
You can't compare FLOPs unless two GPUs have the exact same architecture. And okay, I can get worse textures than the Xbox One and a slightly better framerate at the same settings, thanks to excessive use of tessellation and no option for the selective tessellation of the Xbox One version. Cool. By the way, there's nothing wrong with Fiji; it's just that you need more than 8GB of VRAM to use ultra textures, even though the Xbox One can do it with 6GB of total memory for the game.
You get double the framerate, better textures and better shadows. http://www.pcper.com/reviews/Graphics-Cards/Rise-Tomb-Raider-AMD-and-NVIDIA-Performance-Results/Feb-5th-Patch-and-Multi-G There was something wrong with Fiji. The textures are clearly better on PC than they are in the Xbox version of the game. There are obviously diminishing returns in detail as texture resolution (and size) increases. In fact, this goes for just about everything, which is why you can't just expect hardware that's twice as fast to produce games that look twice as good.
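On the FLOPs point in the quote above, a quick back-of-the-envelope shows why the raw number doesn't transfer across architectures: peak FP32 throughput is just shader count × clock × 2 (one fused multiply-add per clock). The core counts and clocks below are approximate reference specs for the two cards argued about elsewhere in the thread; treat the output as illustrative, not a benchmark:

```cpp
// Peak FP32 FLOPS = shader cores * clock (GHz) * 2 (one FMA = 2 ops per clock).
// Approximate reference-clock figures; real game performance also depends on
// scheduling, geometry/ROP throughput, bandwidth and drivers, which is the point.
#include <cstdio>

static double peak_tflops(int cores, double clock_ghz) {
    return cores * clock_ghz * 2.0 / 1000.0;  // GFLOPS -> TFLOPS
}

int main() {
    std::printf("GTX 680 : ~%.2f TFLOPS (1536 cores @ ~1.006 GHz)\n",
                peak_tflops(1536, 1.006));
    std::printf("HD 7970 : ~%.2f TFLOPS (2048 cores @ ~0.925 GHz)\n",
                peak_tflops(2048, 0.925));
    // ~3.1 vs ~3.8 TFLOPS on paper, yet at launch the 680 was generally the
    // faster card in games: raw FLOPs only compare cleanly within one architecture.
    return 0;
}
```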
I don't care what team you root for, you gotta give credit to AMD for GCN. A 7970 with 6GB still holds up quite well today. Meanwhile, the 680...
Yup, though it's not good for their bottom line imo
DX12 and CPU overhead? Yeah, no.
DX12 and CPU overhead? Yeah, no.
Huh?
Time to get to work with that DirectX 12 driver for Fermi.
No, where you guys will be screwed is with GameWorks, because Nvidia's slogan seems to be "Buy a GPU every year"™.
How does GameWorks force me to buy a new GPU every year? https://tpucdn.com/reviews/Sapphire/R9_390_Nitro/images/perfrel_1920_1080.png The 780 Ti still seems to be neck and neck with a 290X, and the 780 Ti came out in 2013, three years ago. And yeah, older AMD cards are doing better than older Nvidia cards, but that's mostly because of what you said: the memory. They all had more VRAM, which is becoming a huge factor in newer games. Look at AnandTech's 2015 benchmarks between the 680 and the 7970; they look just like the release benchmarks from 2012, with the exception of higher-resolution games and Shadow of Mordor, due to the VRAM limitation. I don't see how Nvidia screwed anyone. When I bought my 690 I knew it had 2GB of VRAM, and I knew it was going to be an issue as games with higher-res textures came out. I essentially sidegraded to my 980 and saw huge gains in games with VRAM limitations.
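Since VRAM keeps coming up: texture memory grows with the square of resolution, which is why "ultra" texture packs overwhelm 2-4GB cards so quickly. A rough sketch under simple assumptions (BC7 compression at 1 byte per texel, ~4/3 overhead for a full mip chain); real engines mix formats, stream mips and add render targets on top, so this is only illustrative:

```cpp
// Rough VRAM cost of one square BC7 texture (1 byte/texel) including mips (~4/3 overhead).
// Illustrative only: actual games hold hundreds of textures plus render targets.
#include <cstdio>
#include <initializer_list>

static double texture_mib(int side, double bytes_per_texel) {
    double base = static_cast<double>(side) * side * bytes_per_texel;
    return base * 4.0 / 3.0 / (1024.0 * 1024.0);  // + mip chain, in MiB
}

int main() {
    for (int side : {1024, 2048, 4096}) {
        std::printf("%4d x %-4d BC7: ~%6.1f MiB\n", side, side, texture_mib(side, 1.0));
    }
    // 1024^2 -> ~1.3 MiB, 2048^2 -> ~5.3 MiB, 4096^2 -> ~21.3 MiB:
    // each resolution step quadruples the cost, while the visible gain shrinks.
    return 0;
}
```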
It's interesting they say 6GB VRAM and a Fury X; last time I looked, the Fury X has 4GB of VRAM.
You get double the framerate, better textures and better shadows.
I don't get anywhere close to double the frame rate (it's often in the 40s), and I get worse textures, because the Xbox One uses ultra textures and I can only manage high without extreme stuttering and a huge drop in performance. It would be okay if I had a G-Sync monitor, but I'm not spending $500 to permanently lock myself into Nvidia.
Hope they'll whine about low sales and blame piracy, and not the Win10 exclusivity.
I don't see how Nvidia screwed anyone.
I felt a bit screwed: I spent £350 on a GTX 970 4GB and it had 3.5GB, with the rest being slower RAM. Overclockers UK did offer me a refund, to be fair, but it was still a bit shady of Nvidia to only fit 3.5GB of fast RAM and not the full 4GB.
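For what it's worth, the 970 issue is a bandwidth split as much as a capacity one. A small calculation using the figures commonly reported after the controversy (8 × 32-bit GDDR5 controllers at 7Gbps effective, with the last 512MB served by a single controller); the numbers come from public reporting, not anything measured here:

```cpp
// GTX 970 memory layout as commonly reported after the 3.5GB controversy:
// 8 x 32-bit GDDR5 controllers at 7 Gbps, but the last 512MB hangs off a single
// controller, so the two segments have very different peak bandwidth.
#include <cstdio>

int main() {
    const double gbps_per_pin = 7.0;              // effective GDDR5 data rate
    const int    bits_per_controller = 32;
    const double gb_per_controller =
        gbps_per_pin * bits_per_controller / 8.0; // 28 GB/s per controller

    std::printf("Fast 3.5GB segment: 7 controllers -> ~%.0f GB/s\n", 7 * gb_per_controller);
    std::printf("Slow 0.5GB segment: 1 controller  -> ~%.0f GB/s\n", 1 * gb_per_controller);
    std::printf("Advertised total  : 8 controllers -> ~%.0f GB/s\n", 8 * gb_per_controller);
    // Once a game spills into the last 512MB, accesses there run at ~28 GB/s,
    // which is where the stutter reports came from.
    return 0;
}
```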
How does GameWorks force me to buy a new GPU every year? ... I don't see how Nvidia screwed anyone.
Well, they screwed Kepler users with The Witcher 3, for example. Kepler had been left out of specific optimizations since Far Cry 4, which became a proven fact after all the community outrage and the eventual release of drivers containing optimizations that personally gave me up to ~10fps of improvement in The Witcher 3. Both are GameWorks titles, what a coincidence, huh? And they even had the nerve not to include those optimizations in the first wave of Windows 10 Nvidia drivers, so I had to play The Witcher 3 with crippled performance. Even if you think that isn't screwing anyone, I still count the 970's 3584+512 as screwing every 970 owner. The boxes still say 4GB GDDR5. And thank you for the benchmarks you posted. Remember how the 280X was "equivalent" to the 770? And it was cheaper! Now look at it go. Just look how much value the 280X had and how ****ty the 770 is. Look at the benchmarks you posted! But I've already made up my mind. I'm going for a 380X or 390 Polaris equivalent when they launch, with a FreeSync monitor. I've switched between AMD and Nvidia throughout the years, and the only advantage Nvidia ever really has is specific features in games, which are never really worth the extra premium you pay for Nvidia. I honestly don't care if AMD performs 5 fps worse than a Pascal equivalent; I'm not falling for the bullsh*t again.
Bloody hell, people are still going on about that... PS: the recommended spec probably isn't for 1080p, more likely 4K. There's a huge gap between minimum and recommended; if it were a bad port, the minimum would be higher. Here's a secret that some PC gamers miss: you can in fact turn settings down to get better performance; nobody forces you to max them. And another thing: when a game doesn't push the hardware, people moan; when a game does push the hardware, people moan. Can't win...
Indeed, huge gap between minimum and recommended specs. I'm sure everyone will find settings that are fine for their hardware. And since it's DX12 with low CPU overhead, it's safe to assume from the CPU power needed that the PC version is going to have some neat physics etc. that the console version doesn't have. Interesting to see what will happen.
You're acting like there's no middle ground. GTA V did it perfectly: it scales very well and is exactly as demanding as you need it to be. This looks like it'll just be horribly optimized compared to the Xbox One version, likely needing an i7 and an R9 390 to match Xbox One graphics and performance.
If you think that it will take that to match the Xbox One, you are seriously mistaken. An overclocked 1st-gen i7 still outdoes the PS4 and Xbox One. Even my i5 4460 in my game-stream server with a GTX 670 4GB does way better; it even runs Rise of the Tomb Raider at all-high settings @ 1080p above 40 fps. GTA V really looks decent, but not that great. Car reflections are on point for the most part, but Project CARS and Assetto Corsa beat it easily in that regard. I played GTA V on a Phenom X4 9950 @ 3.1GHz, 8GB DDR2-800 and an HD 5770 1GB at 1440x900 on medium settings and got 35fps average, which was impressive. But I want to see games push our hardware to the limits. I'm not gonna be mad that my 2.5-year-old GPUs can't max the game out.
Here's a secret that some PC gamers miss: you can in fact turn settings down to get better performance; nobody forces you to max them.
Things aren't black and white. Unless you somehow consider XCOM 2 better looking than Rise of the Tomb Raider and The Witcher 3, because it really performs worse than those two and it's running on Unreal Engine 3. Or when you have crap performance in some game at minimum settings and yet your GPU+CPU usage is at 50%. What people want is balance, and a game that scales well, and that's becoming increasingly rare. Rise of the Tomb Raider is the perfect counter to your argument: it's a demanding game that just doesn't scale that well. Between medium and high I get a difference of about 5 fps for much worse graphical quality (tessellation disabled on both, because Maxwell, of course). Turn the graphics down to low and the game won't reach 60fps, yet it looks worse than Tomb Raider. So what's the point of lowering visuals when the game doesn't scale properly?
Microsoft is finally starting the DX12 gaming era on PC by pushing its Xbox One "exclusive" AAA titles Fable Legends and Quantum Break to PC with DX12. AotS is the only DX12 game so far, but it's not AAA and a lot of people consider it simply an expensive AMD showcase. The Fable Legends benchmark showed very different DX12 results for AMD in a future DX12 AAA title; DX12 results are clearly better for Nvidia in that game. Some people want to believe DX12 will magically shift the GPU lead to AMD, erasing its DX11 driver performance problem (?), but I think Nvidia's leverage on game publishers/devs and its better gaming support will remain.
Recommended card from AMD should be the 390X, as the Fury X hasn't got enough VRAM.
Recommended card from AMD should be the 390X, as the Fury X hasn't got enough VRAM.
Yep, AMD's worst error with the Fury X is the use of HBM1, which is limited to 4GB of VRAM. 1440p/4K gaming will need more VRAM; ROTTR already needs 6GB of VRAM for 1440p at Very High, and for 1080p/60 a 390X is enough. VRAM requirements in AAA titles will not decrease.
Yep, AMD's worst error with the Fury X is the use of HBM1, which is limited to 4GB of VRAM.
The Fury series was mostly an HBM test. They don't even benefit that much from the bandwidth, as the memory controllers can't make full use of it anyway (it should be 512GB/s, but in practice reaches ~370GB/s because of the memory controllers).
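The 512GB/s figure falls straight out of Fiji's HBM1 configuration (four stacks, 1024-bit each, at 500MHz double data rate); how much of that the memory controllers actually deliver is the separate question raised above. A quick check of the theoretical peak:

```cpp
// Theoretical peak bandwidth of Fiji's HBM1: 4 stacks x 1024-bit at 500 MHz DDR.
#include <cstdio>

int main() {
    const int    stacks         = 4;
    const int    bits_per_stack = 1024;
    const double clock_mhz      = 500.0;  // 500 MHz, double data rate -> 1 Gbps/pin

    double gbps_per_pin = clock_mhz * 2.0 / 1000.0;              // 1.0 Gbps
    double peak_gb_s    = stacks * bits_per_stack * gbps_per_pin / 8.0;

    std::printf("Peak HBM1 bandwidth: %.0f GB/s\n", peak_gb_s);  // 512 GB/s
    // Measured effective bandwidth (e.g. the ~370 GB/s mentioned above) depends
    // on the memory controllers and access patterns, not just this peak figure.
    return 0;
}
```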