AMD's James Prior Sheds Light on Threadripper Dummy Dies
AMD's Senior Product Manager, James Prior, commented on the story that broke last week about Threadripper being fitted with four AMD Zeppelin dies. He mentions that the two extra dies have "no path to operation".
That also means you cannot activate 32 cores, of course, and that EPYC is a different processor (though it shares the same design). Prior posted his remarks on Twitter:
Threadripper is not a Epyc processor. Different substrate, different dies. 2 dies work, other 2 have no path to operation. Basically rocks.
Prior also added that AMD decided to use the term "dummy" instead of "inactive" to describe Threadripper's additional dies, as there is no way of utilizing or activating these additional CPU dies.
Yes, exactly why they're not described as inactive, but dummy. Doesn't matter if they were dead, or active, they're not going to work.
Earlier on, overclocker der8auer tried to delid a Threadripper and, with the heatspreader soldered to the dies, broke the CPU (of course). In his video he took it a step further and checked out the dummy dies. When he pried them loose, four full dies revealed themselves, as opposed to some sort of two-die / two-spacer configuration. James Prior, however, still has not explained why exactly AMD is using two extra dies. Most likely, the dies used did not pass wafer inspection, i.e. they are non-working, dead dies that are being re-used.
Senior Member
Posts: 11621
Joined: 2010-12-27
Considering how many SKUs of Threadripper there are at 16 cores or less, the smartest and cheapest long-run solution would be to rearrange the two dies into a top/bottom layout.
What happens when yields get high enough that they only have a limited amount of dead dies?
Are they going to go out of stock on Threadripper?
The route they went doesn't make any sense.
Senior Member
Posts: 7261
Joined: 2012-11-10
I'm assuming what you actually meant was "remove the 2 dummy dies and position the 2 functioning dies in the center of the package, side-by-side". From one perspective, what you said may be true, but there are some things to consider. For example, the dummy dies are literally waste products, so the current layout isn't a whole lot more expensive than you may think. Meanwhile, the current layout is pretty good at dissipating heat: the two working dies are pretty much as far away from each other as they can get, so heat won't be concentrated in one small spot. Also, the sheer size of the package may limit where each die can be positioned. Keep in mind TR and Epyc mostly share the same socket, so "downscaling" the Epyc design may be cheaper than re-arranging the layout.
Then they'll use actual blanks with legitimately no transistors. Or, they could just wedge a piece of steel in there for even better thermal dissipation at a negligible price. This isn't that complicated.
The route they went doesn't make any sense.
TR isn't that high-demand of a product. Most people aren't willing to spend $550+ on a single CPU.
The only part of AMD's route that doesn't make sense is how the CCXs seem to need symmetry. They used this multi-die system to help reduce costs, but the design of the CCX must contribute a lot of waste.
Senior Member
Posts: 2068
Joined: 2017-03-10
Considering how many SKUs of Threadripper there are at 16 cores or less, the smartest and cheapest long-run solution would be to rearrange the two dies into a top/bottom layout.
What happens when yields get high enough that they only have a limited amount of dead dies?
Are they going to go out of stock on Threadripper?
The route they went doesn't make any sense.
On the contrary, I think it's the most cost-effective solution. Threadripper is a derivative of EPYC and likely uses the same production line. To do what you're suggesting would mean creating a separate line specifically for TR, which would take both time and money.
Junior Member
Posts: 1
Joined: 2017-09-19
"Won't be able to activate" doesn't mean "TR4 socket can't run on all-4-die-activated chips".
It might only runs with 4 channels of memory, and the other 2 non-channeled dies have to rely on the 2 channeled memory controller for data feed; therefore, more latencies, but it doesn't matter much for heavily threaded tasks and this what high-core CPUs are meant for.
We won't see full 32-core CPUs for "The Rippers", yes "The Rippers", any time in the next 10 years, no way; but 6-core x4 = 24 and 8-core x3 = 24 are possible if Intel manages to release its 18-cores with all-core boost clock goes beyond 3.0GHz. An 18-core monolithic CPU could be as big as a Vega chip, no joke. It is gonna be hard to harvest good chips that clock well.
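As a rough illustration of that arithmetic, here is a minimal Python sketch (purely hypothetical, not based on any announced SKU) that enumerates the total core counts a four-die package could offer, assuming each Zeppelin die carries up to 8 cores (two 4-core CCXs):

# Hypothetical core-count enumeration for a four-die package.
# Assumes each Zeppelin die has up to 8 cores (2 CCXs x 4 cores);
# the specific (dies, cores-per-die) combinations are illustrative only.
for active_dies in (2, 3, 4):
    for cores_per_die in (4, 6, 8):
        total = active_dies * cores_per_die
        print(f"{active_dies} dies x {cores_per_die} cores/die = {total} cores")

This prints, among others, "3 dies x 8 cores/die = 24 cores" and "4 dies x 6 cores/die = 24 cores", matching the 24-core configurations mentioned above.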
Senior Member
Posts: 162
Joined: 2017-09-12
Oh, that makes total sense; I hadn't thought of that.