AMD's Future Developments: Ryzen 8000 and Navi 3.5

So the GPU refreshes are coming. I just want AMD to bring more SKUs to the market.

Yeah, the fixed RDNA3 cards are a new series now. Seems there's no 7800 until 2024 either. First they will offload the 7900s like they did with the 6800s/6900s, for peanuts, ruining their resale value for those who paid MSRP. Doesn't bother me; I'll take a lightly used 7900 XT for 400-500 when new ones sell for 600. It's 1.6x my 6800 in rasterization and more in RT.
Undying:

So the GPU refreshes are coming. I just want AMD to bring more SKUs to the market.
They should probably just kill RDNA 3 and start working on RDNA 4. RDNA 2 was quite decent and underrated by most people imo. But RDNA 3 has been a major flop so far. When Nvidia released the 4090, AMD should have admitted defeat and done what they did with RDNA 1: release only mid-range cards at a decent price. They have a long way to go before even challenging Nvidia. I really hope someone challenges Nvidia soon, because I'm done with those 8GB $500+ CAD cards like I was done with 4C/4T CPUs back when Ryzen was released. I'll wait for next gen and hope I'll be able to get a decent upgrade in the $600 CAD range with 16GB of VRAM on a 256-bit bus. Pretty sure it won't be Nvidia unless they are challenged.
MonstroMart:

They should probably just kill RDNA 3 and start working on RDNA 4.
RDNA 3 feels like it is plagued by issues they could not fix by release, which lowered the performance... so a 3.5 has a slim chance of surprising us, I would say... On the other hand, it might be 100% as intended, in which case you are right :P
MonstroMart:

They should probably just kill RDNA 3 and start working on RDNA 4.
And sell nothing until then? That's not very business-wise, assuming they figured out what went wrong with RDNA 3 and fixed it for this "3.5" refresh. If it has the performance that "3" should have had from the start, it will be quite a competitive architecture, really not far behind similarly sized Nvidia chips. Actually, it might be better than them, considering that all NV chips below the 4090 are smaller than their names suggest.
MonstroMart:

RDNA 2 was quite decent and underrated by most people imo. But RDNA 3 has been a major flop so far. When Nvidia released the 4090, AMD should have admitted defeat and done what they did with RDNA 1: release only mid-range cards at a decent price. They have a long way to go before even challenging Nvidia.
RDNA2 was good, but AMD priced it as though it had RDNA3's video transcoding and raytracing performance. RDNA 1 to me seemed like a prototype: it wasn't mature enough to release a high-end version, but they needed to release something. RDNA 3 to me isn't a flop. The 7900s were really just meant to milk AMD fans, because so long as the RT performance is inferior to Nvidia's, it doesn't really make sense to spend that much money and not just opt for Nvidia. The 7600 was "just okay" in terms of value [by modern standards], but if you're price-focused, it tends to make more sense to just get an older GPU. What matters most are the mainstream models, the 7700s and 7800s: that's the market AMD does best in, that's what can really stand out, and that's what hasn't been released yet. The biggest issue with RDNA3 (as with most of AMD's new releases) is that the drivers are terrible, but this time worse than usual.
I really hope someone challenges Nvidia soon, because I'm done with those 8GB $500+ CAD cards like I was done with 4C/4T CPUs back when Ryzen was released. I'll wait for next gen and hope I'll be able to get a decent upgrade in the $600 CAD range with 16GB of VRAM on a 256-bit bus. Pretty sure it won't be Nvidia unless they are challenged.
This I definitely agree with.
MonstroMart:

They should probably just kill RDNA 3 and start working on RDNA 4. RDNA 2 was quite decent and underrated by most people imo. But RDNA 3 has been a major flop so far. When Nvidia released the 4090, AMD should have admitted defeat and done what they did with RDNA 1: release only mid-range cards at a decent price. They have a long way to go before even challenging Nvidia. I really hope someone challenges Nvidia soon, because I'm done with those 8GB $500+ CAD cards like I was done with 4C/4T CPUs back when Ryzen was released. I'll wait for next gen and hope I'll be able to get a decent upgrade in the $600 CAD range with 16GB of VRAM on a 256-bit bus. Pretty sure it won't be Nvidia unless they are challenged.
AMD could have just shrunk RDNA 2 to a smaller node; it would be better than current RDNA 3 without too much effort, and it would provide good competition against Nvidia. The problem is that AMD can't afford the giant chips needed to power their higher-end GPUs, especially if those same chips don't deliver the sales necessary to cover their cost, like the 6900 cards. So they came up with the chiplet design, a much cheaper solution, at least in theory... The problem is that it's going to take time to refine the chiplet approach, resulting in current GPUs (7900 XT/XTX) that are good but nothing more. In a certain way, the current 7900s are experimental/beta versions of future GPUs that are supposed to be much better, just like RDNA1 was the stepping stone for RDNA2. So we have to be patient regarding AMD's GPUs...
H83:

AMD could have just shrunk RDNA 2 to a smaller node; it would be better than current RDNA 3 without too much effort, and it would provide good competition against Nvidia. The problem is that AMD can't afford the giant chips needed to power their higher-end GPUs, especially if those same chips don't deliver the sales necessary to cover their cost, like the 6900 cards. So they came up with the chiplet design, a much cheaper solution, at least in theory... The problem is that it's going to take time to refine the chiplet approach, resulting in current GPUs (7900 XT/XTX) that are good but nothing more. In a certain way, the current 7900s are experimental/beta versions of future GPUs that are supposed to be much better, just like RDNA1 was the stepping stone for RDNA2. So we have to be patient regarding AMD's GPUs...
But the whole chiplet design claim is just smoke; they did nothing more than what they did on their X3D CPUs: added external L3 cache. Instead of stacking it on top, they just positioned it to the side. A true chiplet design would entail creating smaller GPU tiles that contain a specific amount of GPU resources (compute units, L2 cache, etc.) and can be glued together to create GPUs of specific sizes, reducing costs for the bigger GPUs since small tiles are easier to manufacture, with better yield, than complex large GPU chips. Much like they did with the Zen architectures. But that is maybe a technological step too far for them at the moment.
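To put rough numbers on the yield argument above: below is a minimal sketch using the classic Poisson yield model Y = exp(-D * A). The defect density and die areas are illustrative assumptions, not AMD's or TSMC's actual figures.

```c
/* Back-of-the-envelope yield comparison: one large monolithic die vs.
 * smaller tiles, using the simple Poisson yield model Y = exp(-D * A).
 * Defect density and die areas are illustrative assumptions only. */
#include <stdio.h>
#include <math.h>

int main(void) {
    const double defects_per_cm2 = 0.1;  /* assumed defect density */
    const double big_die_cm2     = 5.0;  /* one ~500 mm^2 monolithic GPU */
    const double tile_cm2        = 1.25; /* four ~125 mm^2 tiles instead */

    double y_big  = exp(-defects_per_cm2 * big_die_cm2);  /* ~60.7% */
    double y_tile = exp(-defects_per_cm2 * tile_cm2);     /* ~88.2% */

    /* A monolithic GPU needs one big die to come out clean; a chiplet
     * GPU bins small tiles independently and discards only bad tiles,
     * so the fraction of wafer area that is sellable goes up. */
    printf("monolithic die yield: %.1f%%\n", 100.0 * y_big);
    printf("per-tile yield:       %.1f%%\n", 100.0 * y_tile);
    printf("usable silicon, tiles vs monolithic: %.2fx\n", y_tile / y_big);
    return 0;
}
```

The point is only directional: under these made-up numbers, four small tiles waste far less silicon to defects than one big die, which is exactly the cost case for chiplets.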
cucaulay malkin:

Yeah, the fixed RDNA3 cards are a new series now. Seems there's no 7800 until 2024 either. First they will offload the 7900s like they did with the 6800s/6900s, for peanuts, ruining their resale value for those who paid MSRP. Doesn't bother me; I'll take a lightly used 7900 XT for 400-500 when new ones sell for 600. It's 1.6x my 6800 in rasterization and more in RT.
Might just do the same, but with a 7900 XTX.
Crazy Joe:

But the whole chiplet design claim is just smoke; they did nothing more than what they did on their X3D CPUs: added external L3 cache. Instead of stacking it on top, they just positioned it to the side. A true chiplet design would entail creating smaller GPU tiles that contain a specific amount of GPU resources (compute units, L2 cache, etc.) and can be glued together to create GPUs of specific sizes, reducing costs for the bigger GPUs since small tiles are easier to manufacture, with better yield, than complex large GPU chips. Much like they did with the Zen architectures. But that is maybe a technological step too far for them at the moment.
Creating a modern GPU is already complex as hell, so they have to move slowly with their new approach. The chiplet idea is not something made for the present; it's a future-thinking approach to the problem of building high-end GPUs, and it's going to take a few years to see its complete results. And if they are already having problems with this simple implementation, we can imagine how hard it's going to be moving forward.
H83:

Creating a modern GPU is already complex as hell, so they have to move slowly with their new approach. The chiplet idea is not something made for the present; it's a future-thinking approach to the problem of building high-end GPUs, and it's going to take a few years to see its complete results. And if they are already having problems with this simple implementation, we can imagine how hard it's going to be moving forward.
Maybe AMD's idea of chiplets is the wrong one. Maybe another architecture is needed, and this one is not for the future but for the garbage bin... Nvidia will go monolithic for at least another generation, and who knows? Maybe they will then drop the monolithic design for something else, not chiplets, or not AMD's idea of chiplets... Lots of speculation. And I remembered that the root of evil for AMD GPUs goes back to ATI times: they genuinely believed the GeForce GTX 200 would be Nvidia's last monolithic GPU. Their big mistake: https://www.techpowerup.com/63216/ati-believes-geforce-gtx-200-will-be-nvidias-last-monolithic-gpu

Also something to laugh at, a blast from the past: someone seeing Intel's chiplet GPUs as a "huge" threat to Nvidia: https://seekingalpha.com/article/4321159-nvidia-faces-huge-threat-from-intels-chiplet-gpu-approach I especially liked this one:

Simply put, this will put pressure on Nvidia’s market share, and the company will likely have to react with more aggressive pricing, which will also put pressure on its gross margins.

:D That proves how wrong predictions can be in this industry.

AMD: "So bla bla bla, bla bla bla, and in the future you'll get more energy efficiency, but less performance, for more money, bla bla bla, and yes, Jen-Hsun Huang agreed on that point as well... any questions? (All hands go up.) Nothing? OK, thanks and have a nice evening..." #sarcasm off... 🙂

So many folks are roasting the chiplet idea, but if you're looking to lower GPU prices overall, this is the correct path. Yes, 1st-gen anything has issues; I'm not pretending they don't exist. But the reality of IT manufacturing has "chiplets" as the present and the future. Don't ignore that all CPUs of the last two generations (incl. Intel) are chiplet-based, and quality has gone up while prices have come down; the Ryzen 7 7800X3D has exactly the same retail price as the old Intel 4-core Q6600 (Core 2 Quad). Yes, GPUs are harder and the teething pains obvious, but that doesn't stop a 7900 XTX from kicking the A off everything except the 4090, and it's still competitive vs. the 4080 for less. I'm very curious about the refresh, mainly because the new Threadripper has a new Infinity Fabric... and if the refresh gets the new IF, that could be spicy.

The RDNA3 uarch is a major failure, kinda like Vega. No idea how RTG can tackle that many problems (both uarch improvements and chiplet design) with a far smaller R&D budget than Nvidia; perhaps it's just wishful thinking from the RTG team.
Crazy Joe:

But the whole chiplet design claim is just smoke; they did nothing more than what they did on their X3D CPUs: added external L3 cache. Instead of stacking it on top, they just positioned it to the side. A true chiplet design would entail creating smaller GPU tiles that contain a specific amount of GPU resources (compute units, L2 cache, etc.) and can be glued together to create GPUs of specific sizes, reducing costs for the bigger GPUs since small tiles are easier to manufacture, with better yield, than complex large GPU chips. Much like they did with the Zen architectures. But that is maybe a technological step too far for them at the moment.
The goal of the L3 cache on the big RDNA3 is not performance but cost savings. The dies housing the L3 are still on N7 and, unlike RDNA2's, are separate chips. And there is a latency spike when an access hits the L3, something that does not happen on RDNA2 or Navi 33. It still does what it's supposed to do: as a memory-attached last level, it reduces fetches to VRAM. But it won't work for other things. For example, that cache latency would compromise performance across the whole GPU if they used chiplets for an L2. Even for WGPs it might be problematic, as there would need to be a general front end to issue work waves without contention.
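To illustrate the tradeoff being described: a back-of-the-envelope average-latency calculation. All hit rates and latencies below are assumed round numbers for illustration, not measured RDNA2/RDNA3 figures.

```c
/* Back-of-the-envelope average-latency arithmetic for a memory-attached
 * last-level cache. All latencies and hit rates are assumptions. */
#include <stdio.h>

int main(void) {
    const double l3_hit_rate   = 0.55;   /* assumed Infinity Cache hit rate */
    const double l3_latency_ns = 50.0;   /* assumed off-die L3 latency */
    const double vram_ns       = 250.0;  /* assumed GDDR6 round trip */

    /* Requests that miss the on-die caches either hit the L3
     * or go all the way out to VRAM. */
    double with_l3    = l3_hit_rate * l3_latency_ns
                      + (1.0 - l3_hit_rate) * vram_ns;   /* 140 ns */
    double without_l3 = vram_ns;                         /* 250 ns */

    printf("avg latency past L2, with L3:    %.0f ns\n", with_l3);
    printf("avg latency past L2, without L3: %.0f ns\n", without_l3);

    /* A latency spike on L3 hits still beats a VRAM fetch by a wide
     * margin; but paying extra latency on every L2-level access would
     * slow the whole GPU, which is the argument above against putting
     * an L2 on separate chiplets. */
    return 0;
}
```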
tunejunky:

The Ryzen 7 7800X3D has exactly the same retail price as the old Intel 4-core Q6600 (Core 2 Quad).
Except back then four cores was a luxury high-end item, while now 8/16 is pretty much the standard, and a competing 13700KF costs less, performs the same in games and almost 30% better in applications. Actually, I'm wrong here: the competition is not the 13700K but a $330 13700, since the 7800X3D is locked too and tops out at DDR5-6000.
(attached: 13700.jpg)

Thanks to the Radeon RX 7600's Navi 33 XL GPU being a monolithic chip it seems to outpace—in terms of cache and memory latency performance—chiplet-based designs as featured in the vastly more powerful (and expensive) Radeon RX 7900-series cards. Chips and Cheese reports that AMD's RX 7900 XTX takes up to 58% longer to access and pull data from its pool of Infinity Cache, when contrasted with the recently released sibling.
Good grief, this MCM design is getting better and better. What's worse, N32 has MCM cache too. A 7800 XT in 2024 slower than the 6800 XT?
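For context on how figures like that 58% are obtained: latency tests of this kind typically use pointer chasing, where each load's address depends on the previous load. Chips and Cheese runs such tests inside GPU kernels; the sketch below is a simplified CPU-side analogue meant only to demonstrate the method, with an arbitrary 64 MiB footprint.

```c
/* Minimal pointer-chasing latency sketch (CPU-side). Each load's address
 * depends on the previous load, so the hardware can't overlap or prefetch
 * them, and time-per-step approximates true access latency for a given
 * footprint. The 64 MiB footprint is an arbitrary choice meant to fall
 * well past typical cache sizes. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    const size_t n = 1u << 23;               /* 8M entries * 8 B = 64 MiB */
    size_t *next = malloc(n * sizeof *next);
    if (!next) return 1;

    /* Sattolo's algorithm: a random single-cycle permutation, so the
     * chase visits every slot and the prefetcher can't guess ahead. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;        /* j in [0, i) */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (size_t step = 0; step < n; step++)
        p = next[p];                          /* dependent load chain */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9
              + (double)(t1.tv_nsec - t0.tv_nsec);
    /* Printing p keeps the compiler from optimizing the chase away. */
    printf("%.1f ns per dependent load (p=%zu)\n", ns / n, p);
    free(next);
    return 0;
}
```

On real hardware, the time per step jumps each time the footprint outgrows a cache level, which is the curve such articles plot.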
AMD will eventually fix all the issues with RDNA3.5, drivers will get better, FSR3 will become a thing, and if we are lucky they'll introduce some good midrange cards like a 7700/7800 XT with at least 16GB of VRAM. I'm interested because Nvidia's current 40-series lineup is lacking and pricey. On the other hand, Zen 5 looks great; their CPU division is killing it. I will replace my Ryzen 7600X with something like an 8700X with more cores instead of going Zen 4 X3D.
Undying:

AMD will eventually fix all the issues with RDNA3.5, drivers will get better, FSR3 will become a thing, and if we are lucky they'll introduce some good midrange cards like a 7700/7800 XT with at least 16GB of VRAM.
I hope my grandchildren will live to see it.