Nvidia and AMD Cross Multi GPU Tested In DirectX 12

I was looking forward to this... I thought my HD 4600 iGPU, built into my 4790K, would finally be useful alongside my HD 7970 under Windows 10. I was certainly excited when Microsoft first announced this feature last year, but I don't believe the HD 4600 is DX12 ready. There are still a few interesting questions to be asked: How will this work with a GameWorks title? Will buying a FreeSync monitor versus a G-Sync monitor no longer matter? Can one card be used to do anti-aliasing (for example at 24x) while the other card renders the game? There are a lot of interesting prospects, but I doubt any of them will be implemented in the near future.
I'm more excited about running my iGPU together with my dedicated GPU. Imagine your iGPU doing some of the more basic tasks in a game while your dedicated GPU does all the hard stuff. It's kind of like what AMD said about their APUs being able to be CrossFired with any DX12-capable GPU.
AMD is starting to play its cards: notice how the benchmark results are best with the AMD Fury X as the primary card. I would imagine the primary card would be running FreeSync or G-Sync, depending on whether it's AMD or Nvidia. Kinda sweet that going forward I'll be able to utilize old cards in a multi-GPU setup to handle post-processing.
Load balancing, or we choose each card from a drop-down list and bind it to specific tasks:
R7 240 --> renders 10% of the tiles and computes physics (except smoke) and AI. (30 W)
HD 7870 --> renders 90% of the tiles and computes smoke, 8x anti-aliasing, some ray-traced surfaces, and crowd pathfinding. (190 W)
FX-8150 --> I don't want to use this for anything; it puts out too much heat. (250 W)
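For what it's worth, D3D12's explicit multi-adapter does allow this kind of binding, because the application enumerates the GPUs itself. Here is a minimal C++ sketch (my own illustration, not from the article; error handling trimmed) that lists every DX12-capable adapter and creates a device per card, which is the starting point for assigning different jobs to different GPUs:

```cpp
// Minimal sketch (not from the article): enumerate every DX12-capable adapter and
// create a separate device on each, the starting point for binding different jobs
// (tiles, post-processing, physics) to different cards. Link with d3d12.lib and dxgi.lib.
#include <d3d12.h>
#include <dxgi1_6.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory6> factory;
    if (FAILED(CreateDXGIFactory2(0, IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc{};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // skip the software (WARP) adapter

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"DX12 adapter %u: %s\n", i, desc.Description);
            devices.push_back(device);
            // From here the engine would create a command queue per device and
            // decide which slice of the frame each GPU gets.
        }
    }
    return devices.empty() ? 1 : 0;
}
```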
The 980 Ti is a bit faster than a Fury X at 1440p; so much for all those nonsense threads in the AMD section about Maxwell and DX12. All it took was a driver update. I remember one guy who said not to expect Nvidia to gain DX12 performance through drivers. Good stuff.
AMD is starting to play its cards: notice how the benchmark results are best with the AMD Fury X as the primary card. I would imagine the primary card would be running FreeSync or G-Sync, depending on whether it's AMD or Nvidia. Kinda sweet that going forward I'll be able to utilize old cards in a multi-GPU setup to handle post-processing.
AMD have no cards to play. Neither does Nvidia. The results show what you said, but the difference is barely even noticeable, roughly 0.1-2 fps. You would never notice that.
I'm fairly sure that Nvidia will do something to their drivers to stop this if it's at all possible. If they can't, well, very interesting times are ahead for sure; this opens up a whole load of possibilities.
Load balancing, or we choose each card from a drop-down list and bind it to specific tasks:
R7 240 --> renders 10% of the tiles and computes physics (except smoke) and AI. (30 W)
HD 7870 --> renders 90% of the tiles and computes smoke, 8x anti-aliasing, and some ray-traced surfaces. (190 W)
FX-8150 --> I don't want to use this for anything; it puts out too much heat. (250 W)
I know it was always proposed as something that would be possible, but I think there are still zero titles/engines using DirectCompute-based (or any GPU-based) AI.
I know it was always proposed as something that would be possible, but I think there are still zero titles/engines using DirectCompute-based (or any GPU-based) AI.
If it can't do AI, then it should at least be able to handle some crowd behaviour, so thousands of units find their way quicker. I saw someone doing this with a Titan.
I was looking forward to this... I thought my HD 4600 iGPU, built into my 4790K, would finally be useful alongside my HD 7970 under Windows 10. I was certainly excited when Microsoft first announced this feature last year, but I don't believe the HD 4600 is DX12 ready.
No DX12 for that one.
How will this work with a GameWorks title?
Most of the GameWorks stuff runs on AMD cards. Some effects run fine (like HBAO+) and some are notably slower (like the special shadowing or hair rendering). This will work the same way: code that is specially tailored for Maxwell will run slower on GCN, and AFR will limit performance to the slowest card of the pack (so in this case, where GCN is probably slower, you virtually end up with N times the GCN card, or vice versa with different code). Rough numbers are sketched after this post.
Will buying a FreeSync monitor versus a G-Sync monitor no longer matter?
This depends on the card that handles the display (basically, whichever one you physically plug the monitor into).
Can one card be used to do anti-aliasing (for example at 24x) while the other card renders the game?
I don't think that's possible, regardless of the API and GPU(s) used. Besides, even if you could dedicate specific jobs to specific GPU(s), why would you? Why intentionally create a situation where evenly utilizing all the cards (let alone at 100%, all the time) is practically impossible to begin with? You would only cripple your total resource pool with static allocation like that.
There are a lot of interesting prospects, but I doubt any of them will be implemented in the near future.
It is, according to this benchmark, already implemented (everything that is reasonable and was planned). I don't see what you mean (unless you have some unreasonable wishes, like the dedicated anti-aliasing above).
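To put a rough number on the AFR point above: with alternate-frame rendering each card delivers every other frame, so the pair tops out at about twice whatever the slower card manages on its own. A tiny back-of-the-envelope sketch with made-up frame rates:

```cpp
// Back-of-the-envelope sketch (hypothetical numbers, not from the article): with AFR
// each GPU renders alternate frames, so the pair is limited to roughly
// 2 x the frame rate of the slower card, no matter how fast the other one is.
#include <algorithm>
#include <cstdio>

int main()
{
    const double fps_maxwell = 70.0; // assumed single-GPU result
    const double fps_gcn     = 60.0; // assumed single-GPU result

    const double afr_fps = 2.0 * std::min(fps_maxwell, fps_gcn); // ~120, not 130
    std::printf("AFR pair: ~%.0f fps (limited by the slower card)\n", afr_fps);
    return 0;
}
```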
I'm fairly sure that Nvidia will do something to their drivers to stop this if it's at all possible. If they can't, well, very interesting times are ahead for sure; this opens up a whole load of possibilities.
It can't. The director of Oxide said that MDA (multi-display adapter) cannot be locked out with drivers; it is engine-specific, and the GPU driver has nothing to say about it.
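That matches how explicit multi-adapter works: the engine creates and drives the devices itself, and about the only thing it asks the runtime is whether the hardware can share resources across adapters. A hedged sketch (my illustration, not from the thread) querying that capability on the default adapter:

```cpp
// Sketch (my illustration, not from the thread): explicit multi-adapter lives in the
// engine, which only queries D3D12 for cross-adapter capabilities such as row-major
// texture sharing; there is no driver profile for a vendor to toggle.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts{};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));

    std::printf("Cross-adapter row-major textures: %s\n",
                opts.CrossAdapterRowMajorTextureSupported ? "yes" : "no");
    std::printf("Cross-node sharing tier: %d\n",
                static_cast<int>(opts.CrossNodeSharingTier));
    return 0;
}
```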
It can't. The director of Oxide said that MDA (multi-display adapter) cannot be locked out with drivers; it is engine-specific, and the GPU driver has nothing to say about it.
I still wouldn't put it past nVidia to sneak in some code that defaults to a slow codepath when an AMD video device is detected in the system. We used to be able to run PhysX on an nVidia card with an AMD as the primary display, but they managed to screw that up.
No DX12 for that one.
The HD 4600 is Intel GT2, which supports DX12.
The HD 4600 is Intel GT2, which supports DX12.
Are you sure? My control panel says DirectX 11.
Are you sure? My control panel says DirectX 11.
Windows 10 = DirectX 12; the control panel reports the OS runtime version, not what the GPU itself supports.
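If you want to know what the GPU itself can do rather than what the OS runtime reports, you can ask D3D12 directly. A small sketch (my own, not from the thread) that checks the maximum feature level of the default adapter:

```cpp
// Sketch (not from the thread): dxdiag / the control panel shows the OS runtime
// (DX12 on Windows 10); what matters for a given GPU is the feature level it exposes.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    // nullptr = default adapter, e.g. the HD 4600 if the iGPU drives the display.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::puts("No D3D12 device on the default adapter.");
        return 1;
    }

    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_11_1,
        D3D_FEATURE_LEVEL_12_0, D3D_FEATURE_LEVEL_12_1
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS info{};
    info.NumFeatureLevels        = UINT(sizeof(levels) / sizeof(levels[0]));
    info.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &info, sizeof(info));

    std::printf("Max supported feature level: 0x%x\n", info.MaxSupportedFeatureLevel);
    return 0;
}
```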
The 980 Ti is a bit faster than a Fury X at 1440p; so much for all those nonsense threads in the AMD section about Maxwell and DX12. All it took was a driver update. I remember one guy who said not to expect Nvidia to gain DX12 performance through drivers. Good stuff.
I'd venture to say that something was missing from NVidia's drivers in the initial Ashes of the Singularity comparison, which is why we see such a big improvement with a newer driver. Obviously there will be "optimizations" with each GPU architecture release that yield some improvements. Once NVidia gets DX12 figured out, though, we shouldn't see the huge performance gains they've managed to pull off with DX11. At least not based on my understanding of how DX12 works.
I still wouldn't put it past nVidia to sneak in some code that defaults to a slow codepath when an AMD video device is detected in the system. We used to be able to run PhysX on an nVidia card with an AMD as the primary display, but they managed to screw that up.
I doubt any 980 Ti owners are going to buy a Fury X for one game, and the same goes for a Fury X owner buying a 980 Ti for one game. This is basically a showcase, nothing more. 99.9%, if not 100%, of people will run SLI or CrossFire and have support for a boatload of games.
Yeah... I have my HD 7950 sitting on the shelf and I'd still pick up another GTX 970 before I'd even consider throwing the 7950 back in to run alongside my 970...
I was looking forward to this... I thought my HD 4600 iGPU, built into my 4790K, would finally be useful alongside my HD 7970 under Windows 10. I was certainly excited when Microsoft first announced this feature last year, but I don't believe the HD 4600 is DX12 ready. There are still a few interesting questions to be asked:
Definitely would not hold my breath waiting on that... Highly unlikely... ;) The HD 4600 is a performance dog next to your 7970, and its only likely effect would be to slow everything way down, if it would even be possible; I doubt anyone could get it working in the first place even if they wanted to. The disparity between the GPUs is just too great.

Ideally, of course, you'd want two (or more) identical cards, but keep in mind that the really neat features in D3D12 have to (a) be supported in hardware and (b) be supported by the game itself, especially features like the one that would give you a giant single GPU with >~5,000 stream processors (2x 2,500 SPs) and ~16 GB (2x 8 GB) of RAM... 🤓 etc., etc. I'm thinking these features will be included in several mainstream game engines (like Valve's or Epic's) so that a developer using one of those engines would have all the grunt work for the nice D3D12 features already done for him.

I mean, really, a GPU from nVidia/Intel paired with a GPU from AMD wouldn't be a very appealing concept for a number of reasons... right? At least with two like GPUs you have a fighting chance to do Crossfire or SLI in D3D11 and earlier games; with a mixed pair, no can do. It's a great novelty, but I suspect other configurations will be far more appealing to the majority of people. Makes for sensational articles, though, even if few people will ever seriously do it.
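The "giant single GPU" bit refers to D3D12's linked-node mode, which needs matched cards. A hedged sketch (my illustration, not from the post) of how an application would see and address the pooled GPUs:

```cpp
// Sketch (my illustration, not from the post): with two identical cards linked, one
// D3D12 device exposes multiple nodes, and resources are placed in a specific GPU's
// memory via the node masks, which is what pools the ~2x VRAM and shader count.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    const UINT nodes = device->GetNodeCount(); // 2 for a linked pair, 1 otherwise
    std::printf("Linked GPU nodes: %u\n", nodes);

    for (UINT node = 0; node < nodes; ++node)
    {
        D3D12_HEAP_PROPERTIES props{};
        props.Type             = D3D12_HEAP_TYPE_DEFAULT;
        props.CreationNodeMask = 1u << node; // which GPU owns the allocation
        props.VisibleNodeMask  = 1u << node; // which GPUs may access it
        // ...CreateCommittedResource(&props, ...) would follow here, once per node.
    }
    return 0;
}
```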
If nothing else, in a few years this will be pretty cool for those skeleton builds made from old PC parts. Video cards, especially high-end ones, become obsolete very quickly. They require a lot of power, and once they age past a certain point it's hard to find a buyer or a home for them, since most people who'd be interested, and who have a rig capable of running a high-end video card, are already using the same or newer tech. Having the ability to use two video cards from different manufacturers could breathe some new life into these old, obsolete cards. That said, I always use a single video card myself.