Crossfire is no longer a significant focus, says AMD CEO Lisa Su

Chiplets have some interesting hurdles to overcome to be highly performant in dedicated graphics cards.
Astyanax:

Chiplets have some interesting hurdles to overcome to be highly performant in dedicated graphics cards.
Yes, but the price... Cost efficiency is a strong motivation. Take the current Zen 2 chiplets and the range of products the same dies are used in. Imagine AMD tapes out a chiplet with 10 WGPs and can scale it as far as the interposer, I/O die and power efficiency allow. There would be no more reason to keep an old product stack around for a few generations; they would be able to replace everything. And the cost of making top GPUs (cards) would not carry that extra tape-out, which is currently the main blocker, since lower sales make it harder to get a return on the investment. Everything would be quite economical. That would mean lower prices for us, and faster iteration of new generations, because designing each chip and working through a separate tape-out => validation is time-consuming too. (And I doubt I wrote even half of the benefits.)
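To put some rough numbers behind that cost argument, here is a minimal back-of-the-envelope sketch in C++ using the classic exponential yield model. The wafer cost, defect density and die areas are purely illustrative assumptions, not AMD figures, and packaging/interposer costs are ignored.

```cpp
// Illustrative sketch: why one small chiplet reused across a product stack can be
// cheaper than a large monolithic die. All numbers below are assumptions.
#include <cmath>
#include <cstdio>

// Estimated cost of one *good* die, given wafer cost, die area and defect density.
double costPerGoodDie(double waferCostUsd, double dieAreaMm2, double defectsPerMm2) {
    const double waferAreaMm2 = 3.14159 * 150.0 * 150.0;   // 300 mm wafer, edge loss ignored
    double diesPerWafer = waferAreaMm2 / dieAreaMm2;
    double yield = std::exp(-defectsPerMm2 * dieAreaMm2);  // simple Poisson yield model
    return waferCostUsd / (diesPerWafer * yield);
}

int main() {
    const double waferCost = 10000.0;  // assumed cost of a leading-edge wafer (USD)
    const double d0        = 0.002;    // assumed defect density (defects per mm^2)

    // One 500 mm^2 monolithic GPU vs. four 125 mm^2 chiplets of the same total area.
    double monolithic = costPerGoodDie(waferCost, 500.0, d0);
    double chiplets   = 4.0 * costPerGoodDie(waferCost, 125.0, d0);

    std::printf("monolithic: ~$%.0f, 4x chiplets: ~$%.0f\n", monolithic, chiplets);
    return 0;
}
```

With those assumed numbers the four small dies come out at roughly half the silicon cost of one big die, which is the effect described above; the open question is how much of that saving the interposer and packaging eat back.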
Both SLI and CF use AFR to balance the workload, and that technology has become more and more at odds with the advancement of game engines. In the past it was much more straightforward: the game would render everything into a single memory target (frame buffer) and start fresh with the next frame. Now games scatter the different stages of image composition across multiple off-screen buffers, and on top of that, the growing use of sampling data from previous frames throws more wrenches into the delicate driver-based management of the multiple video memory pools. Obviously, Nvidia and AMD are not willing to spend more resources to do the legwork for the game devs, since both the DX12 and Vulkan APIs already expose direct multi-GPU controls at the application level.
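For anyone wondering what those application-level controls look like, below is a minimal Vulkan 1.1 sketch (error handling omitted, purely illustrative): the application enumerates device groups itself and then has to decide how to split work across the linked GPUs, instead of the driver silently doing AFR behind the engine's back.

```cpp
// Minimal Vulkan 1.1 sketch: enumerate device groups (the explicit multi-GPU path).
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_1;             // device groups are core in 1.1
    VkInstanceCreateInfo ici{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ici.pApplicationInfo = &app;
    VkInstance instance = VK_NULL_HANDLE;
    vkCreateInstance(&ici, nullptr, &instance);

    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(
        groupCount, {VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        // A group with more than one physical device is a linked adapter
        // (e.g. two cards exposed behind one logical device).
        std::printf("device group %u: %u physical device(s)\n",
                    i, groups[i].physicalDeviceCount);
    }
    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Everything after that enumeration, splitting frames, screen regions or render passes across the group, is the application's job, which is exactly the legwork the vendors no longer want to do on the developers' behalf.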
Such an interposer would have to run faster than Nvidia's NVLink fabric.
Good riddance. Too many issues with it.
Not a big deal; in any case, D3D12 and Vulkan are able to manage multiple-node adapters and multiple independent adapters. With PCI-E 3.0 and 4.0, such "optimized" shared-bandwidth channels become less important than ever (AMD removed the Crossfire bridge many years ago).
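On the D3D12 side, the "multiple node" case mentioned here shows up as one device exposing several nodes selected via NodeMask bits. The following is a rough, Windows-only sketch (error handling omitted), not a complete renderer.

```cpp
// Rough D3D12 sketch: detect linked GPU nodes and create a queue on the second one.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    UINT nodes = device->GetNodeCount();  // >1 only on a linked (CF/SLI-style) adapter
    std::printf("GPU nodes on this adapter: %u\n", nodes);

    if (nodes > 1) {
        D3D12_COMMAND_QUEUE_DESC desc = {};
        desc.Type     = D3D12_COMMAND_LIST_TYPE_DIRECT;
        desc.NodeMask = 1u << 1;          // bit n targets node n, here the second GPU
        ComPtr<ID3D12CommandQueue> queue;
        device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    }
    return 0;
}
```

Independent adapters (two unlinked cards) are the other mode the post refers to; those are handled by simply creating one device per adapter and having the application shuttle data between them itself.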
To be honest, I was never a fan of running two GPU cards. I can understand it if new games are as demanding as another Crysis, but unfortunately game developers penny-pinch and take baby steps just to be safe.
Remember the "chiplet design GPUs" discussion? The engineers said that the major problem with a chiplet GPU is making the system recognise it as a single GPU. We should have guessed back then that multi-GPU setups have no future.
It died thanks to worthless software makers like EA, Ubisoft, etc. Multi-GPU on the low-to-mid end made sense because people could almost double their performance later on by adding another, now cheaper, card, and it made sense for the high end since the only way to go higher than the top card is to double it. Back in the HD 5000 days I was very happy with it, and a year later, when one card died, I could still play, so hey, multi-GPU is even good for redundancy. At least it seems Nvidia realizes this and wants that sweet cash or whatever. What if you want 4K ultrawide? A single $1,200 card will barely do 40 fps, but two might get you 70 fps or so. You could argue it doesn't make sense for the low end since you are wasting a lot of power and dealing with extra complexity vs. a newer card, but that's the case today, not when multi-GPU was in its heyday.
Makes sense; there is no clear solution to the problems caused by multi-card setups other than specific optimization by the developer or vendor on a per-engine/game basis. Any future multi-GPU implementation will likely be on a single card anyway, and will need to be invisible to users and developers to be worth adopting.
Multi-GPU used to be a thing where you bought two cheaper video cards to get the performance of a higher-end card at a slight discount, and it was an upgrade path of sorts if you didn't want to sell off your old hardware (buying a $120 GPU and another $120 GPU later is easier on the wallet than buying a $120 GPU, then a $250 GPU later and selling the first at a discount because now it's used and probably older tech). It's somehow turned into an abomination where only the highest-end SKUs support it, and its only purpose is benchmark scores/world records, with no practical use other than being a money sink that only benefits the parent company. Explicit-mode multi-GPU rendering is still a thing and you don't need to rely on proprietary technology (CrossFire/SLI/NVLink), but the game or app needs to be programmed to support it... well, explicitly.
karma777police:

A reason for this is broken Windows 10 and the crap called DX12. That's why I am on Windows 7 and DX11, where SLI works fine.
So true. Skyrim, among other games, ran much better on Win7 for me; alas, we are now forced by hardware to go to Win10 🙁 I'm glad people are kind of saying game over, because I still read about many people wanting to do SLI on Win10. Please don't. I had SLI 580-680-780-980-1080, and the lack of support and the micro-stuttering have destroyed SLI. Forget it exists, or one day, like me, you will disable SLI for some reason and then realize your games look smoother with 40% less fps. Frametimes etc. weren't talked about so much back then, but I'm pretty sure mine were really bad: "140 fps" according to Fraps, yet BF4 looked like a 5 fps slideshow. The size and heat generated by current video cards don't help either; most of Nvidia's are monsters.
One of my rigs is still running 2x Vega 64s, and I'm pretty happy with it, as when CF works the result is pretty damn good. So, regardless, I'm keeping this setup for my other gaming desktop... https://i.imgur.com/aizOzzo.jpg As for my recently built 3900X rig, due to the placement of the PCIe x16 slots, when I tried CF the primary (top) card was just millimeters above the second card. Running them in games resulted in the primary card hitting its thermal threshold, and I had to stop the game. While I'm content with just a PC Vega 64 Red Devil in it for now, I'm waiting anxiously for Navi 21 and 23... a single powerful GPU's the ticket for me.
I never tried Crossfire.... Tried SLI twice, once with a pair of 8600 GTs and again later with GTX 660s..... Don't recall any real issues either time.
With CF I can pull 100+ fps in Wolfenstein II: The New Colossus with my dual Vega 64s running at 4K Ultra. It just shows that when implemented properly, it works.
This is why I'm holding off upgrading indefinitely right now. I've been using SLI since 2008 and went tri-SLI in 2015. I have had zero issues with it, ever. My three 980 Tis are so fast that even two 1080s would have been just on par; only with two RTX cards would I see an actual improvement, but that is way too much money and only worth considering when/if my 980s actually break. Companies are putting out ever bigger monitors with ever higher refresh rates. At the same time, Nvidia and AMD are actively crippling our ability to actually drive these things by canning SLI and Crossfire. No goddamn single card can drive such a monitor beast, and none will be able to for another 1-2 generations of cards down the line.
I once had an 8800 GT and ran Battlefield 3 at 1280x1024 (the old 4:3-era ratio); back then, at low-medium settings, it hovered around 40-60 fps.
mikeysg:

One of my rigs is still running 2x Vega 64s, and I'm pretty happy with it, as when CF works the result is pretty damn good. So, regardless, I'm keeping this setup for my other gaming desktop... https://i.imgur.com/aizOzzo.jpg As for my recently built 3900X rig, due to the placement of the PCIe x16 slots, when I tried CF the primary (top) card was just millimeters above the second card. Running them in games resulted in the primary card hitting its thermal threshold, and I had to stop the game. While I'm content with just a PC Vega 64 Red Devil in it for now, I'm waiting anxiously for Navi 21 and 23... a single powerful GPU's the ticket for me.
I applaud your bravery for using Gigabyte Vegas in CF. 😀
If only they'd offer a single GPU that performs well enough for high-res, high-refresh, high-settings gameplay.
Remember the days when there were actually two GPUs on one card? I had two of those AMD cards, the Radeon 5970 and 7990. In those days the game engines were simpler (as explained by Fellix above) and could do AFR and other techniques; now the game engines are not compatible with splitting work across multiple GPUs. Also, as others said, the increase in framerate from using two cards back then was completely invalidated by stuttering, which made it look like less framerate than just using a single card. And then there were the games where MSI Afterburner would show one GPU at 100% usage and the other sitting at 0%. I'm sure using "chiplets" to scale GPUs will cause similar game-engine problems as SLI did, but maybe the massive bandwidth increase of PCI-E 4.0 will help (or an interconnect between GPUs similar to the 3rd Gen Ryzen chiplets). A lot of it is economics too... Nvidia would much prefer people had to buy its more expensive GPUs than have any option of doubling up a much cheaper or second-hand card later on to get the same speed.