TSMC 3nm Yields Between 60% and 80%

Well, looks reasonable. As far as I understood, yields under 60% were hard to make commercially feasible in the past; maybe that has changed now with the Covid / war markups. Samsung announcing 3nm might be somewhat of a PR stunt, but selling those at a commercially high price might be an investment in the market segment, much more than earning a lot of money with their 3nm. On the other hand, I've read somewhere some time ago that TSMC has a history of offering commercial wafers only after ~60% yield... so I would believe that.
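That feasibility threshold is easy to see with back-of-the-envelope die economics. A minimal sketch in Python, with purely illustrative wafer cost and die count (not actual TSMC figures):

```python
# Back-of-the-envelope cost per good die at different yields.
# All numbers are hypothetical illustrations, not actual TSMC pricing.

def cost_per_good_die(wafer_cost, dies_per_wafer, yield_rate):
    """Spread the wafer cost over only the dies that actually work."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_cost / good_dies

WAFER_COST = 20_000   # assumed $ per leading-edge wafer (illustrative)
DIES = 600            # assumed candidate dies per 300mm wafer (illustrative)

for y in (0.4, 0.6, 0.8):
    print(f"yield {y:.0%}: ${cost_per_good_die(WAFER_COST, DIES, y):,.2f} per good die")
```

With these made-up numbers, going from 60% to 80% yield cuts the cost of each working die by a quarter, which is why that range matters so much commercially.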
Samsung's and TSMC's 3nm processes are not the same. There is one big difference: Samsung is already using GAA, but TSMC is still using FinFET. This means that Samsung's node is more advanced, but also more difficult to produce. So, lower yields.
Horus-Anhur:

Samsung's and TSMC's 3nm processes are not the same. There is one big difference: Samsung is already using GAA, but TSMC is still using FinFET. This means that Samsung's node is more advanced, but also more difficult to produce. So, lower yields.
Absolutely true. But low yields also mean low net output of the process, hence not that much money in comparison to more mature, high volume nodes. Not yet, that is. In the end, they won't offer 3nm forever if they can't up the yields in the mid / long run.
TSMC has always been relatively conservative with its advancements. They might not have a history of disruptive process nodes, but they have a history of stable and constant delivery.
200mm2 gpus coming at $1000 in 2024.
cucaulay malkin:

200mm2 gpus coming at $1000 in 2024.
The thing that scares me is that recently, the CEO of ASML stated that we might be close to the end of lithography. We might hit the point where the economics of new process nodes are no longer feasible sooner than the point where the physics makes it impossible to progress further.
Horus-Anhur:

The thing that scares me is that recently, the CEO of ASML stated that we might be close to the end of lithography. We might hit the point where the economics of new process nodes are no longer feasible sooner than the point where the physics makes it impossible to progress further.
we'll see chiplet design matter then, but until that time comes, prepare to pay through the nose for new nodes from both nvidia and amd.
cucaulay malkin:

we'll see chiplet design matter then, but until that time comes, prepare to pay through the nose for new nodes from both nvidia and amd.
I wouldn't be surprised if in future generations, high-end GPUs and CPUs are made on the leading-edge process node, but mid-range GPUs and CPUs are made on an older, cheaper process node. And also lots of chiplets, each on a different node.
Horus-Anhur:

I wouldn't be surprised if in future generations, high-end GPUs and CPUs are made on the leading-edge process node, but mid-range GPUs and CPUs are made on an older, cheaper process node. And also lots of chiplets, each on a different node.
Uhm, I agree with you, but isn't this already how it is? I mean, nobody besides GPUs and CPUs for consumer electronics uses 3nm nodes, right? We did see GPUs of older gens (and nodes) rebranded as cheaper alternatives, although refreshed in board design / RAM maybe? Chiplets, that's where I think they will use them more, but in reality, isn't it already here with current PC hardware? AMD's design with separate chiplets in GPUs, and Intel's E-cores, which obviously scream for being manufactured on cheaper nodes alongside a fully working new-node die that actually does the high workloads that can't be threaded easily or quickly enough? I wouldn't bet my money on a 5nm chip being used in e.g. the James Webb telescope... I think you have predicted it correctly, and it's already here, at least I feel like it.
fantaskarsef:

Uhm, I agree with you, but isn't this already how it is? I mean, nobody besides GPUs and CPUs for consumer electronics uses 3nm nodes, right? We did see GPUs of older gens (and nodes) rebranded as cheaper alternatives, although refreshed in board design / RAM maybe? Chiplets, that's where I think they will use them more, but in reality, isn't it already here with current PC hardware? AMD's design with separate chiplets in GPUs, and Intel's E-cores, which obviously scream for being manufactured on cheaper nodes alongside a fully working new-node die that actually does the high workloads that can't be threaded easily or quickly enough? I wouldn't bet my money on a 5nm chip being used in e.g. the James Webb telescope... I think you have predicted it correctly, and it's already here, at least I feel like it.
Not exactly. We are starting to see that, but the price increase we are seeing is artificial. Foundries increased prices on all process nodes during the pandemic and mining boom. Demand was high, so it was logical. Even an old node like N7 got a price hike of 30-40%, when it should have gone down if it wasn't for the world living in such a weird situation. This price hike affected all nodes, so on top of the extra cost of producing a more difficult node like N5 and N3, we also got an extra increase in price because of high demand.

But demand has crashed hard throughout the industry. Foundries for NAND, RAM and other chips have reduced their prices. But not TSMC. They are still trying to hang on to the extra margins of 2020-2022. We are at the start of what I was talking about, but exaggerated by the big margins that TSMC is trying to maintain.

Chiplets are a great solution to keep costs down. I bet the 7900 XTX has a much bigger profit margin for AMD than a 4080 has for NVidia. Too bad AMD can't sell as many GPUs as NVidia.

James Webb is in space, so it's exposed to a lot of radiation. A chip made in a very small node is bound to have many more bit flips. If they were to use N5 for JW, it would need some serious shielding.
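The chiplet cost argument above comes down to defect statistics: under the classic Poisson yield model, yield falls exponentially with die area, so several small dies waste far less silicon than one big one. A rough sketch, with a purely illustrative defect density:

```python
import math

def poisson_yield(d0, area_cm2):
    """Classic Poisson defect-limited yield model: Y = exp(-D0 * A)."""
    return math.exp(-d0 * area_cm2)

D0 = 0.5  # assumed random defect density in defects/cm^2 (illustrative)

mono = poisson_yield(D0, 6.0)   # one 600 mm^2 monolithic die
chip = poisson_yield(D0, 1.5)   # one 150 mm^2 chiplet

print(f"600 mm^2 monolithic die yield: {mono:.1%}")   # ~5%
print(f"150 mm^2 chiplet yield:        {chip:.1%}")   # ~47%
# Chiplets are tested and binned individually, so a defect scraps only
# one small die instead of the whole 600 mm^2 part.
```

Because the known-good chiplets are then packaged together, the usable fraction of the wafer tracks the small-die yield, not the big-die yield.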
Horus-Anhur:

Not exactly. We are starting to see that, but the price increase we are seeing is artificial. Even an old node like N7 got a price hike of 30-40%, when it should have gone down if it wasn't for the world living in such a weird situation. This price hike affected all nodes, so on top of the extra cost of producing a more difficult node like N5 and N3, we also got an extra increase in price because of high demand. We are at the start of what I was talking about, but exaggerated by the big margins that TSMC is trying to maintain.
yeah tsmc prices are ridiculous, look at their operating costs to net profit ratio. they make 50% profit after tax. holy f**k.
Horus-Anhur:

Not exactly. We are starting to see that, but the price increase we are seeing is artificial. Foundries increased prices on all process nodes during the pandemic and mining boom. Demand was high, so it was logical. Even an old node like N7 got a price hike of 30-40%, when it should have gone down if it wasn't for the world living in such a weird situation. This price hike affected all nodes, so on top of the extra cost of producing a more difficult node like N5 and N3, we also got an extra increase in price because of high demand. But demand has crashed hard throughout the industry. Foundries for NAND, RAM and other chips have reduced their prices. But not TSMC. They are still trying to hang on to the extra margins of 2020-2022. We are at the start of what I was talking about, but exaggerated by the big margins that TSMC is trying to maintain. Chiplets are a great solution to keep costs down. I bet the 7900 XTX has a much bigger profit margin for AMD than a 4080 has for NVidia. Too bad AMD can't sell as many GPUs as NVidia. James Webb is in space, so it's exposed to a lot of radiation. A chip made in a very small node is bound to have many more bit flips. If they were to use N5 for JW, it would need some serious shielding.
Well, you are right with the telescope of course, it was just a suggestion. But even today, cars don't have 5nm computers for their functions (yet), and those don't need shielding. Or the commodity chips that held the car manufacturers back; they don't book N5 at TSMC for power controllers, for instance. These chips are still way behind when it comes to nodes, simply because they don't have to be that good, power efficient, small, etc. And yeah, prices are something anyway, with record inflation not seen in many decades, and wartime being the current thing...
fantaskarsef:

Well, you are right with the telescope of course, it was just a suggestion. But even today, cars don't have 5nm computers for their functions (yet), and those don't need shielding. Or the commodity chips that held the car manufacturers back; they don't book N5 at TSMC for power controllers, for instance. These chips are still way behind when it comes to nodes, simply because they don't have to be that good, power efficient, small, etc. And yeah, prices are something anyway, with record inflation not seen in many decades, and wartime being the current thing...
I guess that cars can have chips in the leading-edge nodes. Maybe not for all basic functions. But one N5 chip costing $100 would mean nothing in the cost of a car that costs tens of thousands. With AI and self-driving cars, a leading node could be justified.
Horus-Anhur:

I guess that cars can have chips in the leading-edge nodes. Maybe not for all basic functions. But one N5 chip costing $100 would mean nothing in the cost of a car that costs tens of thousands. With AI and self-driving cars, a leading node could be justified.
Yeah, they could probably use better hardware, they're not much behind. After all, I believe the chips in most cars right now are only 2 or 3 nodes behind. What I don't have on my radar is: what computers and chips run in airplanes? Big container ships? That's also a huge market, at least in revenue, I guess.
Space stuff is often A LOT slower because they take reliability to the extreme... I think the latest Mars rovers had Xilinx 200 MHz CPUs at 80 or 120 nm... So they do not give a darn about alpha particles flipping bits and cosmic radiation!
Also, all the space stuff has ECC iirc
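For illustration, the simplest flavor of that ECC idea is a Hamming single-error-correcting code; real space-grade ECC is more elaborate (typically SECDED or stronger), but the principle of correcting a single flipped bit looks like this sketch:

```python
# Minimal Hamming(7,4) single-error-correcting code: 4 data bits are
# protected by 3 parity bits, and any single bit flip can be located
# and corrected from the parity syndrome.

def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; the syndrome is the 1-based error position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 * 1 + s2 * 2 + s3 * 4   # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1              # flip the corrupted bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
flipped = list(word)
flipped[4] ^= 1                # simulate a cosmic-ray bit flip
assert hamming74_correct(flipped) == word
```

Memory ECC in real systems works on much wider words (e.g. 64+8 bits), but the mechanism is the same: redundant parity pins down which bit flipped.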
NASA is waiting for someone to develop the bio-neural gel pack, à la Star Trek 😀
Meanwhile 10nm Raptor Lake crushing Zen 5.
Han2K:

Meanwhile 10nm Raptor Lake crushing Zen 5.
You got a time machine? :O
Han2K:

Meanwhile 10nm Raptor Lake crushing Zen 5.
Sorry, but what do you mean? My understanding was that Intel's nm didn't really equate to TSMC's nm at all.