Rumor: Radeon RX 7900 XT would be up to 3 times faster than the Radeon RX 6900 XT
There are a number of rumors circulating about AMD's next-gen Radeon RX 7900 XT graphics cards. Take them as exactly that: rumors. RDNA3 would offer a significant performance jump thanks to the implementation of a chiplet design, much like Ryzen.
It is indicated that the next AMD flagship would be the first model of the Radeon RX 7000 series to adopt such a design, which would consist of two GPU chiplets holding 80 CUs each (5120 shader processors), communicating over an I/O die, pretty much what you're seeing with Ryzen. The GPU would be Navi 31 and could be 2.5 times faster than Navi 21 (RX 6900 XT). The leaker, Yuko Yoshida, mentioned that the performance improvement could be even greater, based on RDNA3 and 5nm fabbing.
It's speculation of the highest degree, though. Personally, I don't see a multi-chip design working well for graphics cards, for reasons of latency, cache, and complexity. Then again, we never would have thought that Ryzen would work out so well either. For now, this is a lot of gossip and hearsay. As it stands, the way I see it, next-gen GPUs still aren't expected until the end of 2022, maybe even 2023.
Rumor: Ryzen 7000 processors all get integrated graphics - 04/06/2021 08:49 AM
The rumor mill has been in sixth gear these past years, and apparently we're now already talking Ryzen 7000 series / Zen 4. It's a fresh rumor, though. One difference between Intel and AMD is that all...
Rumor: Radeon RX 6000 Series as fast as a standard RTX 2080 Ti? - 09/14/2020 04:10 PM
Alleged performance figures for AMD's next-generation Radeon RX 6000 series graphics cards, based on the RDNA 2 GPU architecture, have leaked. The performance is, however, based on the outdated AOTS (Ashes o...
Rumor: Radeon RX Vega cards Look to be Insanely good crypto-currency miners - 08/04/2017 09:52 AM
Gamers beware, rumors now indicate that the Radeon RX Vega cards might be insanely good crypto-currency miners. And that will be a problem as miners would purchase them for above average prices, which...
Senior Member
Posts: 7208
Joined: 2020-08-03
It is not early. When technology is ready and not used, it is called: "Late to the Game."
The issue with this rumor is something else entirely. It is the claim of 2x 80 CUs.
Consider AMD's patent, where only the Infinity Cache sits on the interconnect chiplet (possibly some I/O and the multimedia engine too, as the patent is about cache coherency between chiplets, which lets them behave as if they were one GPU).
There will not be a huge saving in transistor count for each 80 CU chiplet. This means the area-based improvement in yields will be minimal, and most of it will come from the manufacturing process itself. So we can expect minimal or no improvement in the price tag per die.
As a result, manufacturing a card with one GPU chiplet and one caching chiplet will cost about the same as the current 6800 (XT)/6900 XT, perform maybe 30% better, and deliver minimal improvement in performance per $ for the end user.
So even if that 2-GPU-chiplet + caching-chiplet card delivered 2.5~3x the performance of a 6900 XT, it would likely cost proportionally more too.
Yet a chiplet with 40 CUs could make for 150~180 mm^2 dies: improved yields and reduced final price tags.
Smaller chiplets would enable ~1.3 times 6900 XT performance at the price of a 6800 XT, which would be a good improvement in performance per $.
Two versions, maybe?
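The yield argument above can be sketched with a simple Poisson yield model. This is a back-of-the-envelope illustration only: the die sizes, the defect density, and the gross-die estimate are all assumptions for the sake of the example, not AMD figures.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough gross die count: wafer area / die area, ignoring edge loss."""
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r * r
    return int(wafer_area / die_area_mm2)

def poisson_yield(die_area_mm2, defect_density_per_cm2):
    """Poisson yield model: Y = exp(-A * D0), with A in cm^2."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-area_cm2 * defect_density_per_cm2)

d0 = 0.1  # assumed defect density, defects per cm^2
# ~520 mm^2 monolithic (Navi 21-class) vs a hypothetical ~165 mm^2 40 CU chiplet
for area in (520, 165):
    good = dies_per_wafer(area) * poisson_yield(area, d0)
    print(f"{area} mm^2: yield {poisson_yield(area, d0):.1%}, ~{good:.0f} good dies/wafer")
# 520 mm^2: yield 59.5%, ~80 good dies/wafer
# 165 mm^2: yield 84.8%, ~363 good dies/wafer
```

With these assumed numbers the small die not only fits ~3x more candidates per wafer, a larger fraction of them also comes out defect-free, which is exactly why smaller chiplets would help the price tag.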
Senior Member
Posts: 2869
Joined: 2016-08-01
This rumor makes me wonder: was the Infinity Cache there only to help with the small bus, or was it preparing the ground for MCM GPUs? It will be interesting to see over the next year or years. Even if availability is going to be practically non-existent, it will still be interesting.
A 2x 80 CU + interconnect chip will be pricey, but those chips are kind of doomed to be usable almost no matter what: if it doesn't explode, it can be used. Even with only 32 working CUs each, a 32 + 32 configuration should in theory pack more power than the 6700, so they could brand it as a 7500 or 7600. Anyway, it's too early to do anything other than wild theorizing about it. My logic here: since one Zen chiplet design covers everything from their very low end all the way up to the most super-unicorn-saiyan EPYC, they might be aiming to do the same with GPUs.
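The salvage-bin logic above is simple arithmetic. A quick sketch, assuming the "6700" comparison refers to the 40 CU Navi 22 (RX 6700 XT) and that a 32 + 32 salvage split is even possible:

```python
# Hypothetical salvage SKU: two 80 CU chiplets with only 32 CUs enabled each,
# compared against a monolithic mid-range part (RX 6700 XT: 40 CUs, Navi 22).
full_chiplet_cus = 80
enabled_cus_per_chiplet = 32       # heavily cut-down salvage bin (assumption)
navi22_xt_cus = 40                 # RX 6700 XT, for reference

salvage_sku_cus = 2 * enabled_cus_per_chiplet
print(salvage_sku_cus)                   # 64
print(salvage_sku_cus > navi22_xt_cus)   # True
```

So even badly defective chiplets could, in theory, be paired into something above the 6700 class, which is the same recycle-everything economics Zen chiplets enable.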
Senior Member
Posts: 7163
Joined: 2012-11-10
lol, I don't disagree, but I do see a reason to upgrade if you're in a situation like mine, where you want to play what few good games are left in 4K at a reasonable frame rate without breaking the bank.
Senior Member
Posts: 11808
Joined: 2012-07-20
each chiplet sold individually
also, am I the only one here thinking that bringing an MCM-design GPU earlier than expected is more likely with miners, not gamers, in mind?