Radeon RX 480 will be plenty for 1080p60 in The Division 2

Well, that's good news for everyone. I have an RX 480 (4 GB though :() in my HTPC. It's still just as fast as my 1060 6 GB, and sometimes faster / more fluid.
I'm glad they fixed the DX12 crashes I had during the closed/open beta. And it's noticeably faster than DX11.
I played (or tried to play) the open beta 10 days ago. Most times I couldn't even get past the menu. The game would crash, and I did send a few of the reports to the devs. A few hours before the end of the open beta, I lowered the settings and was able to play very smoothly. I'm on a Ryzen 7 1700 with an MSI RX 480 Gaming X, playing on a 2560x1080 ultrawide FreeSync monitor. I've had problems while gaming before, so it's probably my system that caused the crashes. I liked what I saw.
Not bad, although I'm not too impressed with the min / optimal / max specs, and that a 480 can drive 1080p at medium settings with 60 fps isn't surprising either. The 480 is a good card, but what's the big deal? I don't get it, tbh. I'm more surprised they think a 1660 Ti will run this game at 1440p/60/high details... to believe that, I'd want to wait until I see benchmarks.
There is a big difference between the Radeon VII and the 2080 Ti. If they're advising a Radeon VII, surely they should be mentioning a 1080 Ti or a 2080, not the 2080 Ti...
Yes, and a big FU to multi-GPU users with 4K monitors who like more than 30 FPS! Thanks a million!
spajdrik:

I'm glad they fixed the DX12 crashes I had during the closed/open beta. And it's noticeably faster than DX11.
Unfortunately not completely; the white-screen bug is still there, and the brightness randomly goes crazy.
no_1_dave:

There is a big difference between the Radeon VII and the 2080 Ti. If they're advising a Radeon VII, surely they should be mentioning a 1080 Ti or a 2080, not the 2080 Ti...
Look at the VRAM requirement; it looks like 8 GB is the limiting factor for 4K max with the RTX 2080.
The good news is the game is so slow-paced that 30 fps is plenty. That's about what I got with a 570 on high details, before I uninstalled the junk.
I've got a spare one that I've been tweaking; Polaris is actually a lot of fun to tweak! I currently have it stable at 1400/2200.
Since multi-GPU support (formerly Crossfire/SLI) has been officially integrated into D3D12, as opposed to being supported only by IHV custom driver add-ons outside the API in D3D11 and earlier, why are so-called "enthusiast" gaming sites ignoring it these days? If anything, it should be getting more attention rather than less, now that it's no longer the red-headed stepchild it used to be. When blockbuster games like Shadow of the Tomb Raider support it--and have even back-ported D3D12 multi-GPU support to Rise of the Tomb Raider--why is Guru3D completely ignoring it?

I'm asking because I'm surprised how well Crossfire and multi-GPU support work these days--I had no idea how easy it is until I tried it with my RX 590 / RX 480 8GB setup back in December. The AMD drivers also still carry the old custom Crossfire profiles in every release. It works really well at 3840x2160, by the way (which is what I was hoping for when I bought it). The D3D12 Shadow of the Tomb Raider in-game benchmark, for instance, gets roughly an 85% scaling increase in framerate at 3840x2160 over the RX 590 by itself, with 95% of the eye candy on (from a 33 fps average on the 590 alone to a ~61 fps average on the 590/480 multi-GPU pair, with peaks over 100 fps)--which I think is fantastic!

It's an especially nice option for people who already own an RX 480 8GB or even a 580 8GB. I wouldn't recommend going out and buying two 590s at once, of course, but hundreds of thousands of people already own a 480/580 8GB bought a year or more ago, and for them the RX 590 + 480/580 8GB multi-GPU option is a no-brainer--if performance and value for the dollar motivate them. I'm very happy with the setup at the moment. It's so easy to do now: you can turn Crossfire support on or off directly in an individual game profile--no more rebooting and so on--and the requirement for matching clock frequencies is a thing of the past (RX 590 @ 1.6 GHz, RX 480 @ 1.305 GHz), which I was also glad to see. It's as near transparent as it can be--certainly an order of magnitude better than it was when I last tried Crossfire many years ago with twin 4850s!

It would really be nice to see this information posted about new D3D12 game releases. I'm expecting to get The Division 2 any day now as the last game due to me for the RX 590 purchase (I already have DMC5 and RE2). It would be nice to know whether The Division 2 is a multi-GPU title like Tomb Raider, though! Someone has already said that framerate isn't important in this game--which is fine--but I just like knowing these things, y'know? It's just good information to have.
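For anyone wondering what "officially integrated into D3D12" looks like from the engine side, here is a minimal, illustrative sketch (assuming the standard Windows 10 DXGI/D3D12 headers; not taken from any particular game) that enumerates the installed GPUs and checks whether D3D12 exposes a Crossfire-style pairing as a single device with multiple nodes:

[CODE]
// Minimal sketch: enumerate GPUs and check for linked-node (explicit) multi-GPU.
// Assumes the Windows 10 SDK; link against dxgi.lib and d3d12.lib.
#include <dxgi.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software rasterizer

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // GetNodeCount() > 1 means the driver exposes a linked adapter group
            // (e.g. a Crossfire pair) as one D3D12 device with multiple nodes
            // that the engine addresses explicitly via node masks.
            printf("%ls: %u node(s)\n", desc.Description, device->GetNodeCount());
        }
    }
    return 0;
}
[/CODE]

Whether a 590/480 pair shows up as one multi-node device or as two separate adapters depends on the driver and how Crossfire is configured; either way, the point stands that deciding what to do with the second GPU now lives in the game code rather than in a driver profile.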
LOL @ Win 7 only for 1080p60, when Metro Exodus said Win 7 only for 1080p60 too, with low settings, when it runs exactly the same in DX12 as it does in DX11. I'm running it @ 3440x1440 at Ultra on Win 7 and getting around 50-70 fps 😛 EDIT: If you go to the actual game site to look at the specs, you get Win 7 and DX11 for all. 😀 https://store.ubi.com/on/demandware.static/-/Library-Sites-shared-library-web/default/dwc6a5a99c/images/landings/TCTD2/media/division2-compare-specs.jpg
Do you have Surface Format Optimization ON in Radeon Settings, Rich? Or did you turn it off to get rid of the FP12 shaders and go back to FP16, as is the default (FP16 is also the default for NV cards)?
waltc3:

Since multi-GPU support (formerly Crossfire/SLI) has been officially integrated into D3D12 [...] It's just good information to have.
Thanks! Good information, and I agree.
gx-x:

Do you have Surface Format Optimization ON in Radeon Settings, Rich? Or did you turn it off to get rid of the FP12 shaders and go back to FP16, as is the default (FP16 is also the default for NV cards)?
I've got it on.
waltc3:

Since multi-GPU support (formerly Crossfire/SLI) has been officially integrated into D3D12 [...] certainly an order of magnitude better than it was when I last tried Crossfire many years ago with twin 4850s!
mGPU has become even more of a stepchild. The thing was, as long as it was in the driver, the manufacturer could make sure it was supported properly, or at least help the devs with it. Some of the work (read: money) needed to make it work was put in by the GPU manufacturer (Nvidia's SLI implementation, for instance, as well as CFX from AMD). But now that work has to be done by the devs, and they are under much more pressure to get things working in a much tighter timeframe than ever before. Hence... they just don't do mGPU. It's as simple as that: it's the dev's decision whether to put that kind of money into adopting mGPU, and that money isn't spent on mGPU but on marketing budgets. We know the technical side of games is merely an annoyance when it comes to a game's revenue. The more money they put into development, the less profit they make; hence, save lots of engineering hours for virtually the same sales numbers. Financially, it's a no-brainer to ignore mGPU.

When it comes to your example with mGPU in SotTR, the issue is: why should AMD / Nvidia want you to buy two lower-end cards when they really want you to buy one high-end card, where the margin for them is higher? That doesn't make any sense. That's why, beginning with Pascal, I already had the impression that Nvidia was deliberately limiting SLI on lower cards (as well as SLI performance and scaling) to push buyers toward something like a 2080 Ti instead of 2060 SLI. Not to mention that they started by simply removing the connectors in the first place... that's a deliberate move on Nvidia's side. On AMD's side they're trying to help keep CFX alive (and they have an advantage in working only via the PCIe bus rather than a cable, but that's just my opinion), but you see the decline there as well, because AMD simply doesn't have the money to hire lots of engineers to send out to devs to help them implement mGPU in their games. Devs of games like AotS could only do it because they had enough money to work on it and make it an example for benchmarks (around 2016 everybody was crazy about the AotS benchmark; nobody played the game at all). Devs of benchmarks themselves have the time to do this, but they're not even interested in it!

All in all, with DX12 mGPU they took away any chance we had of AMD / Nvidia doing the work and pushed it onto a side that has even less freedom to develop such "technological infrastructure". Devs usually only program for a single GPU anyway, since consoles only have one... with the death of DX11 we will see the death of mGPU, I tell you. It's sad, but it never was what I personally wished DX12 mGPU would have been in the first place (no pooled VRAM, not addressable as one GPU for performance purposes, etc.).
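On the "no pooled VRAM" point, the API itself makes this visible. Here is a minimal, illustrative sketch (ReportMultiGpuCaps is a made-up helper name; an existing ID3D12Device is assumed) that queries what D3D12 actually promises about sharing resources between GPUs:

[CODE]
// Minimal sketch: query D3D12's multi-GPU capability bits. VRAM stays per node;
// the app only gets a sharing "tier", never one pooled memory heap.
#include <d3d12.h>
#include <cstdio>

void ReportMultiGpuCaps(ID3D12Device* device)  // hypothetical helper
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS options{};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &options, sizeof(options))))
    {
        // Cross-node sharing is tiered: the lower tiers essentially allow only
        // copies between GPUs; no tier merges the two VRAM pools into one.
        printf("Nodes: %u, cross-node sharing tier: %d\n",
               device->GetNodeCount(),
               static_cast<int>(options.CrossNodeSharingTier));
        printf("Cross-adapter row-major textures: %s\n",
               options.CrossAdapterRowMajorTextureSupported ? "yes" : "no");
    }
}
[/CODE]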
I preordered the game today; it seems to be doing well. I just fear the day-one patch, but it's far from becoming a mess like Anthem, at least I hope so...
fantaskarsef:

mGPU has become even more of a stepchild. [...]
Multi-GPU support in SotTR is *excellent*, btw--I have zero problems with it. Very impressive! I don't think you realize what moving it into the D3D API means. Multi-GPU support should always have been in the API, because who better to implement it in their game engines than the developers who build the engines? Bolting it on in the GPU drivers by AMD/nVidia after the fact was always a kludge and a hack--which sometimes worked and sometimes didn't. Workarounds were implemented to get the OS to see two GPUs as one, for instance. That's history. Now all of that is in the OS, in D3D/DX--much better! Utilizing multi-GPU in their engines is far, far more trivial for developers than RTX support, for instance... I should think that is obvious! And it has a far bigger impact on gameplay than RTX, etc. One reason we aren't seeing much of it right now is that lots of developers are still relying on their older engines--onto which a few D3D12 features have been bolted, more or less for marketing purposes. I believe newer game engines will see multi-GPU support go widespread--but we'll see.

Well, it's obvious that nVidia doesn't want anyone buying 1060s, or the cheaper RTX GPUs, to do SLI, isn't it? AMD supports multi-GPU with the RX 480/580/590, so I'd say that's a fair indicator that AMD wants to sell you another card--if not the most expensive of the lot, then a less expensive card to pair with the AMD card you already own. Look, if developers don't have the money to support multi-GPU in their engines, then they surely won't have the money to support RTX, eh? As I mentioned, RTX is far more onerous and complex to implement, and the benefits are paltry--questionable at best. Multi-GPU support, on the other hand, gives developers immediate and tangible benefits in their games because it greatly ratchets up the framerates for their customers. If they must choose between the two, multi-GPU support will win every time, imo.

AMD's Crossfire over the PCIe 3.x bus works extremely well--and there's nothing stopping nVidia from doing the same. But as usual, nVidia wants to stick it to its customers and artificially manipulate them into paying more for less, basically. Everything is "proprietary" and comes at a cost when it comes to nVidia--look at how pathetic nVidia's support for FreeSync currently is, when AMD simply gives it away! Look at the non-support for SLI--which nVidia also insists on licensing--in the lower end of even its freakishly expensive RTX GPUs! I'll be honest and say that I personally couldn't care less what nVidia wants me to do, because I'm not doing it... ;) Ever.
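As a concrete illustration of "who better to implement it than the developers who build the engines": with explicit multi-GPU, the engine itself decides which physical GPU each piece of work runs on via node masks. Below is a minimal sketch (assumed function name, not from any shipping engine) of the per-node setup an AFR-style renderer would need:

[CODE]
// Minimal sketch: create one direct command queue per GPU node so an AFR-style
// renderer can submit frame N to node 0, frame N+1 to node 1, and so on.
#include <d3d12.h>
#include <vector>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12CommandQueue>> CreatePerNodeQueues(ID3D12Device* device)
{
    std::vector<ComPtr<ID3D12CommandQueue>> queues;
    const UINT nodeCount = device->GetNodeCount();
    for (UINT node = 0; node < nodeCount; ++node)
    {
        D3D12_COMMAND_QUEUE_DESC desc{};
        desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << node; // bit N selects physical GPU N in the linked group

        ComPtr<ID3D12CommandQueue> queue;
        if (SUCCEEDED(device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue))))
            queues.push_back(queue);
    }
    return queues;
}
[/CODE]

Command lists, resource heaps, and pipeline state objects take a node mask too, which is exactly the per-engine bookkeeping that used to be hidden inside an SLI/Crossfire driver profile.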
kilyan:

I preordered the game today; it seems to be doing well. I just fear the day-one patch, but it's far from becoming a mess like Anthem, at least I hope so...
Right now I'm in the last 10 GB of the preload from Ubisoft--got my third free game... ;) I guess they'll open it up tomorrow!
waltc3:

Multi-GPU support in SotTR is *excellent*, btw--I have zero problems with it. [...]
Below is the long version; here's the short one:

I agree with you that mGPU should be in the API, not done via hacks! Much better, and actually the way it's supposed to be, when you don't have to adapt to every engine but only to one API. On mGPU and AMD/Nvidia, I think the story is that AMD profits more from high sales in the low segment, while Nvidia profits more from high sales in the high segment, so Nvidia tries to make you buy one big card as opposed to two smaller ones. I'm not sure that's something to be upset about as long as you get reasonably priced card(s) to run your display, which is not yet happening with either vendor if you're looking at 4K maxed. But yes, Nvidia influences its customers more in that regard. It's a business decision, though, not a technical one.

What you want to do or not do, and what we think Nvidia wants us to do or not, is actually not the point for me personally... I'm just unhappy about the many situations where you can't even buy what you want, no matter how much money you'd like to spend, and where games don't implement the technological standards we want, even if we'd maybe be willing to pay more / wait longer for a game that works properly with all the tech gimmicks we PC gamers like! And just to have it said, "paying more for less"... that's your opinion. I wanted a big, fast GPU to drive my 1440p 144 Hz display with all the eye candy, as fast as it gets, and a single one to work around the CFX / SLI issues I've encountered in the past (microstutter, profile hunting, lacking scaling, game patches regularly breaking SLI compatibility with Battlefield in particular...). Two years ago there was only one choice for that, and it was Nvidia. So am I upset at AMD because they don't offer what I want? I could say I don't want to do what AMD wants me to, i.e. buy two low-end cards and CFX them so they sell more, with the added issues of power draw, heat, and not to forget the money I'd sink into a second waterblock, a stronger pump, bigger rads, and so on. In the end I would have paid as much or more for the performance I have right now if I'd gone with AMD for my specific scenario. Do you see me ranting about AMD because of that? 😉 I agree with you, fellow guru, on many things, so don't feel offended by people, or me in particular, having a different opinion. Right now I'd say mGPU's biggest problem is DX12, M$, and the devs, not AMD or Nvidia. 😉

[SPOILER="long version"]Oh, I agree with you: the technical side of things is indeed how it's supposed to be, inside the API. That's not at all the issue I have with it. And while I agree that mGPU should be easier to implement than RTX / DXR, that also isn't the question here. On technical ability, I trust pretty much every dev to be able to implement DX12 with both mGPU and DXR in their games / engines.

On your second paragraph, you're right because you're saying the same thing as me: Nvidia doesn't want us to go low-end mGPU. They want to sell one high-end GPU with a better margin than two low-end ones... it's a business decision. The average gamer doesn't buy two cards anyway; 95% or more of PC gamers have a single card, and that's it. If I'm wrong about the 95%, I'm fairly sure it's still way, way more people running one GPU than two or more. With AMD, I'm curious why they don't limit your options; I could imagine their ancient 580 has a better margin than a Radeon VII, for instance, and is an easier chip to fabricate as well. AMD actually profits from selling you two low-end GPUs; Nvidia does not.

And that's probably the reason why CFX is not crippled while SLI is. It's only about the money... even for AMD, since they're not exactly a charity either. But you haven't really addressed the real issue here: it's not about what devs could do, it's about what they actually do. And that depends on money and time. Everything we're talking about costs money to implement (via engineers' working hours) and time to actually do it and test it. Implementing mGPU costs a dev money because they have to pay for the hours of coding. Once it's done they save a lot of work on every other game, true, but that first step costs money that devs usually don't have, because games always need to generate more and more income for less and less production cost. Devs also usually don't have the time to invest half a year or more into working out their mGPU implementation; games have to release by date X or they're done for, and we all know that postponing a game usually means missing sales to some extent, or dropping into another game's release window along the way.

And look at what sells games: effects (like DXR could, if it worked properly right now with decent performance) and, to some extent, gameplay. Besides us few enthusiasts, nobody cares about mGPU. Console players don't care about mGPU; they would care about DXR, for instance. So devs have neither the time nor the money to even move to DX12 engines, as you said, let alone implement something on top of that, be it mGPU or DXR. And now comes Nvidia's part: why do they offer GameWorks? Because they know devs don't want to put the money/time into producing such software, so Nvidia pushes in its own engineering to fill the gap devs don't want to / can't, in the hope of selling more GPUs, obviously. That's why Nvidia offers it: to get it into games, which is supposed to help sell GPUs. Without Nvidia's engineers we still wouldn't have seen DXR even tested (and I'm saying "tested" on purpose), for good or bad.

So what happens when a dev doesn't have the time / money to update their engine to DX12, implement mGPU, or implement new effects like DXR, because they need to finish game X by date Y at a fixed cost? NOTHING. That's why only a handful of devs have been able to implement DX12 in a way that's even worth mentioning. That's why DXR isn't in every game. That's why mGPU is on the decline (at least for the moment; more on that below). And that's where Nvidia comes into play, pumping its abundant cash into stuff like RTX and offering it to devs, with BF5 as the example. EA was interested because effects can help sell games (and Frostbite is their engine), and that's how DICE got the questionable honor of implementing DX12 and RTX in BF5. It's a deal aimed at improving the situation of EA / DICE (sell more games) and Nvidia (sell more GPUs). That's absolutely normal: in reality, if EA / DICE had the time and money to let their own engineers program it, they could, but it's a business decision whether to invest a huge amount of money and time binding engineers to implement mGPU / DXR when in reality a couple of thousand people benefit from it, as opposed to millions of sales of games that don't have it. From a game dev's standpoint, at first sight it's stupid to invest in mGPU unless you get a major advantage out of it... which they usually don't.

DXR could give devs an advantage because people, customers, talk about the best-looking games (DXR), not about the most CFX / SLI capable games (mGPU). DX12 in itself is on middle ground here: it could potentially help a game's sales by appealing to a wider base of potential buyers (with lower-end hardware instead of only middle- and high-end hardware), yet we haven't seen proper DX12 implementations in many games. How come? Because it takes quite a lot of time and money to port a game's engine to a new API and actually profit from it. We have seen DX12 modes that performed worse than DX11 because they weren't done properly, with not enough time / money pushed into them. So what's better: have a game cost $500k more to produce because you want DX12, not do it properly, and be left with a broken DX12 mode that never works right, or save that money right away because you know everybody buying your game will buy it anyway even if it's DX11 only, meaning zero sales lost and zero extra costs during production? When in reality, what they should do (and here I think we agree) is put in not $500k but $2m to make it work with everything available, with proper mGPU support and, if they want, DXR as well, because it's an investment in the future of their own engines.

I perfectly agree with you: they should put in the time and money to get DX12 working fully (it's long overdue; DX12 was the reason I bought an expensive hexacore CPU when people were still praising their 4C/8T Devil's Canyon CPUs), they should put money into mGPU (since for the first time it could really mean good scaling without the hacks you mentioned), and they should put money into effects like DXR (because we haven't seen new effects since DX11's release 10/11 years ago). And we all know that GPUs are somewhat behind display technology... 8K monitors are a thing more or less "soon", but we haven't even gotten affordable 4K GPUs. That's where I'm hoping, as mentioned above, that mGPU will be back once DX12 finds its way into every bigger engine, because at some point we can't drive our displays with single GPUs anymore (pretty much like right now, where 4K maxed is roughly 50/50 even with a 2080 Ti under some circumstances). Maybe in the future, with AMD's chiplet design, we'll see another form of mGPU (along with consoles evolving as well), which I wouldn't mind at all.

Before the 1080 Ti I had 980 SLI, and while it was good for me and usually hassle-free, SLI has seriously gotten worse over the last few years... I can't say much about CFX, since I haven't used it since 2011 (5770 CFX), and back then it was horrible and inferior to SLI in both performance and user-friendliness (no separate profile hunting or a way to fix things yourself the way Nvidia Inspector allows with SLI). As for the lacking adoption of DX12 / mGPU / DXR, this is actually not about what Nvidia wants you to do at all. It's about what we as gamers and enthusiasts want the devs to do... and they're the ones lacking in the first place. Don't blame Nvidia for the lacking adoption of DX12, mGPU or DXR when it's the devs that have had to do all of this since M$ released DX12. Nvidia doesn't sit down and program malware to cripple DX12, they don't hold guns to devs' heads commanding them to destroy any code they have for mGPU, or conspire to keep new visual effects away from games... this time, with all of Nvidia's faults and wrongdoings, DX12 is not their sinking ship, it's M$'s, in my opinion.[/SPOILER]