AMD's chiplet design allows them to reduce the cost of the Ryzen 9 3950X by half

https://forums.guru3d.com/data/avatars/m/259/259067.jpg
Let's not forget what these ...noobs said: At its Xeon CPU launch event (2017), Intel accused AMD's Epyc chips of being an “inconsistent” and “repurposed desktop product” with “glued-together” dies. And AMD SVP Scott Aylor addressed the issue head on. He noted that while “there’s a theory out there that EPYC is just 4 desktop processors glued together”, once you look at the processor as a whole, it should become clear that “this is not a glued together desktop processor”. Aylor also noted that the company could have built a monolithic part, but it would involve “trade-offs that would [drag] performance down because it would [be] too large and too difficult to manufacture”.
https://forums.guru3d.com/data/avatars/m/247/247876.jpg
Good for them (and for customers). And now that the guy behind that design has moved to Intel (as I take it), we will look at Intel's future designs...
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
For me the chiplet design is the real advantage Ryzen has over Intel parts right now. The ability to offer more cores at a lower price gives them an unsurpassable advantage over the competition. The problem is that sooner or later Intel is going to do the same, and then AMD will lose its lead...
https://forums.guru3d.com/data/avatars/m/254/254132.jpg
H83:

For me the chiplet design is the real advantage Ryzen has over Intel parts right now. The ability to offer more cores at a lower price gives them an unsurpassable advantage over the competition. The problem is that sooner or later Intel is going to do the same, and then AMD will lose its lead...
And that, my friend, is called competition. That's what we've been asking for for a long time.
https://forums.guru3d.com/data/avatars/m/220/220188.jpg
Well, I hope Intel takes at least 2 more years to catch up, so AMD can have at least a third of the market share by then. If Intel comes back today, our long-hoped-for competition is dead and it's back to business as usual. Ryzen might be a no-brainer upgrade for us, but big business would rather buy yet more Intel servers than go AMD, and AMD needs a lot more time to change that.
data/avatar/default/avatar10.webp
"AMD's chiplet design allows them to reduce the cost of the Ryzen 9 3950X by half." And here's the thing: if this design had been Intel's a year ago, guess how much they would be fleecing the customer? At least double what AMD is asking?
https://forums.guru3d.com/data/avatars/m/163/163068.jpg
You could make a chiplet design with Bulldozer parts, but it would still suck... cheaper but suck. Ryzen's advantage now is the chiplet design and overall core and I/O design. A monolithic Ryzen CPU would be almost as good... perhaps not in the clocks or price, but the IPC might even be a lot better.
data/avatar/default/avatar24.webp
Turanis:

Let's not forget what these ...noobs said: At its Xeon CPU launch event (2017), Intel accused AMD's Epyc chips of being an “inconsistent” and “repurposed desktop product” with “glued-together” dies. And AMD SVP Scott Aylor addressed the issue head on. He noted that while “there’s a theory out there that EPYC is just 4 desktop processors glued together”, once you look at the processor as a whole, it should become clear that “this is not a glued together desktop processor”. Aylor also noted that the company could have built a monolithic part, but it would involve “trade-offs that would [drag] performance down because it would [be] too large and too difficult to manufacture”.
It's more realistically the other way around, with the desktop parts being cut-down server dies, with their ECC support intact.
https://forums.guru3d.com/data/avatars/m/216/216349.jpg
umeng2002:

You could make a chiplet design with Bulldozer parts, but it would still suck... cheaper but suck. Ryzen's advantage now is the chiplet design and overall core and I/O design. A monolithic Ryzen CPU would be almost as good... perhaps not in the clocks or price, but the IPC might even be a lot better.
Of course, if the cores are garbage, adding more won't make for a great CPU... But the good thing about the chiplet design is that AMD can add more cores easily, creating better products than Intel at the same price. For example, if we pit an 8-core AMD CPU against a similar 8-core Intel CPU, they are going to be evenly matched, trading blows depending on the benchmark. But thanks to the chiplet design, AMD can offer more cores at the same price as Intel products with fewer cores, providing a better product in the end. For me the chiplet design is a masterstroke from AMD that completely "destroyed" Intel this round. And until Intel presents a similar design, AMD is going to stay on top.
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Intel made huge cuts to server CPU prices, which we know is going to hit their bottom line, especially with those more costly monolithic designs. This plays into AMD's hand, as they can still make more per chip even at lower prices. I truly believe that once Zen 3 is out, Intel is going to be in a world of hurt for 1-2 years. If Zen 3 really does bring decent IPC gains and 7nm+ does hit slightly higher frequencies (it should, due to EUV vs quad patterning), then AMD will have better performance regardless, whether it's very high-FPS games or anything else. Intel's Ice Lake (to clarify: the desktop/server parts that are not out as of yet) may be competitive, so we will have to see how this comes out in the wash. Intel's EMIB and Foveros are extremely elegant solutions, far more advanced than anything AMD has to date. The ability to stack silicon along with interconnected chiplets is lovely, but we are still 2+ years from Intel having a desktop or server part. Intel is projecting 2022 for EMIB on 7nm using EUV, which I tend to believe this time around. Intel does not tend to fail more than once in a row, at least if we look back at history.
data/avatar/default/avatar36.webp
The IO die is made on a bigger node. If AMD shrank it to 7nm, there would be enough space for three chiplets plus the IO die, for a 24-core CPU. But AMD preferred to keep the IO die huge to make us want to upgrade again in 2020, when they introduce a 24-core home CPU and the 1300EUR 24-core TR becomes irrelevant and loses half of its price, if not more. AMD is exactly like Intel; all corporations are the same. They are not our friends and all want to suck as much money from us as possible. Only idiots believe that one corp is better than another, and even worse morons believe that an evil, greedy corporation can be their friend worth fanboying and trolling online for.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
MegaFalloutFan:

The IO die is made on a bigger node. If AMD shrank it to 7nm, there would be enough space for three chiplets plus the IO die, for a 24-core CPU. But AMD preferred to keep the IO die huge to make us want to upgrade again in 2020, when they introduce a 24-core home CPU and the 1300EUR 24-core TR becomes irrelevant and loses half of its price, if not more. AMD is exactly like Intel; all corporations are the same. They are not our friends and all want to suck as much money from us as possible. Only idiots believe that one corp is better than another, and even worse morons believe that an evil, greedy corporation can be their friend worth fanboying and trolling online for.
AMD kept the IO die on 14nm because they have a contract with GF for a specific number of wafers per year that they had not met but were paying for anyway. I'm also not convinced that AM4 can support 24 cores. Yes all corporations exist to make money and I think people should consider that - but you're delusional if you don't think that some companies are worse than others.
data/avatar/default/avatar33.webp
Denial:

AMD kept the IO die on 14nm because they have a contract with GF for a specific number of wafers per year that they had not met but were paying for anyway. I'm also not convinced that AM4 can support 24 cores. Yes all corporations exist to make money and I think people should consider that - but you're delusional if you don't think that some companies are worse than others.
No, they left the IO die on 14nm for both power and cost reasons. AMD altered their WSA with GF late last year: https://www.anandtech.com/show/13915/amd-amends-agreement-with-globalfoudries-set-to-buy-wafers-till-2021 But yes, not all companies are the same in how they enact their charter to maximize profit for the investor.
data/avatar/default/avatar28.webp
mbk1969:

Good for them (and for customers). And now when the guy behind that design has moved to Intel (as I take it) we will look at Intel`s future designs...
There was a whole team. Long gone are the days of the one-man band. Laura Smith, who moved to the Radeon Group as chief engineer and Senior Director at the end of 2018, is one of the Zen designers as well, and she took several of the Zen 2 engineers with her to RTG to work on RDNA2 and beyond.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
Fediuld:

There was a whole team. Long gone are the days of the one-man band. Laura Smith, who moved to the Radeon Group as chief engineer and Senior Director at the end of 2018, is one of the Zen designers as well, and she took several of the Zen 2 engineers with her to RTG to work on RDNA2 and beyond.
Which is why, with scalable RDNA2, AMD will (eventually, with Infinity Fabric gen 3) make a chiplet GPU, for the same exact reasons - smaller dies = higher yields. Except this time it is to b*tch-slap Nvidia instead of Intel. Nvidia isn't complacent like Intel, but they rely entirely too much on monolithic dies and have no technological alternative like AMD does. This isn't to say chiplet GPUs don't have issues to date - which is why they haven't been on the market - but this design team has been specifically tasked with fixing those problems. This is AMD's end game on the GPU front. Once they achieve fast enough I/O and an upgraded interposer, all the advantages Nvidia has had in GPU design will go by the wayside (relatively speaking), as AMD will be able to brute-force any spec at a far lower cost, widening the price delta between them.
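The "smaller dies = higher yields" point can be made concrete with a classic defect-density yield model. This is a minimal sketch only: the defect density and die areas below are illustrative assumptions, not AMD's or Nvidia's actual numbers.

```python
import math

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: probability a die has zero random defects."""
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.2          # assumed defect density, defects per cm^2 (illustrative)
MONO_AREA = 5.0   # hypothetical large monolithic die, cm^2
CHIPLET_AREA = 0.8  # hypothetical small chiplet, cm^2

y_mono = die_yield(MONO_AREA, D0)
y_chiplet = die_yield(CHIPLET_AREA, D0)

# A small chiplet is far more likely to be defect-free than one big die,
# so building a product from several small dies wastes much less silicon.
print(f"monolithic die yield: {y_mono:.1%}")
print(f"chiplet die yield:    {y_chiplet:.1%}")
```

Under these made-up numbers the big die yields roughly a third of candidates while the chiplet yields the large majority, which is the economic argument behind the thread's "reduce cost by half" headline.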
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
tunejunky:

Which is why, with scalable RDNA2, AMD will (eventually, with Infinity Fabric gen 3) make a chiplet GPU, for the same exact reasons - smaller dies = higher yields. Except this time it is to b*tch-slap Nvidia instead of Intel. Nvidia isn't complacent like Intel, but they rely entirely too much on monolithic dies and have no technological alternative like AMD does. This isn't to say chiplet GPUs don't have issues to date - which is why they haven't been on the market - but this design team has been specifically tasked with fixing those problems. This is AMD's end game on the GPU front. Once they achieve fast enough I/O and an upgraded interposer, all the advantages Nvidia has had in GPU design will go by the wayside (relatively speaking), as AMD will be able to brute-force any spec at a far lower cost, widening the price delta between them.
https://www.guru3d.com/news-story/nvidia-next-gen-gpu-hopper-could-be-offered-in-chiplet-design.html Sure, AMD's ahead by quite a bit, but Nvidia's R&D budget is 10 times as big as AMD's, and unlike Intel, they have been able to keep their lead up until now. But you do have a valid point; it will be interesting to see what they do and whether they can take the fight to Nvidia. Wouldn't hurt to have such an escalation of competition with GPUs as well.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
tunejunky:

Which is why, with scalable RDNA2, AMD will (eventually, with Infinity Fabric gen 3) make a chiplet GPU, for the same exact reasons - smaller dies = higher yields. Except this time it is to b*tch-slap Nvidia instead of Intel. Nvidia isn't complacent like Intel, but they rely entirely too much on monolithic dies and have no technological alternative like AMD does.
Nvidia has published numerous papers about MCM; they have working MCM prototypes and have been gearing NVLink toward MCM designs, etc. I don't understand how you can say they don't have an alternative - they've arguably demonstrated more than AMD has in terms of MCM for GPUs.