TSMC Ramping up 2nm Wafer Fabrication Development

Is there a size limit where it can't physically get any smaller?
Administrator
Brasky:

Is there a size limit where it can't physically get any smaller?
They're closing in on that limit, and it remains fascinating to follow. There are plenty of studies on the subject and no clear answer, but I don't think we'll see anything smaller than 2nm anytime soon. It's crazy when you think about the scale; a strand of your DNA is about 2.5nm wide. A 1nm prototype has been demonstrated, but transistors are getting closer and closer to the size of atoms, and carving features that small into a silicon wafer is nearly impossible (I think I read somewhere that the smallest current designs are around 70 atoms wide). Once that threshold is reached, the next step won't be smaller nodes and further optimizations (FinFET++++ and the like); the future lies in computing technologies like quantum computing. Then again, I've also read that researchers have made a transistor 167 picometres in diameter, which is about 0.167nm... technology always evolves.
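To put that scale in perspective, here is a rough back-of-the-envelope sketch (assuming a silicon lattice constant of about 0.543 nm and an atomic diameter of roughly 0.22 nm, both textbook values rather than process figures):

```python
# Back-of-the-envelope: how many silicon atoms span a 2 nm feature?
SI_LATTICE_NM = 0.543        # silicon unit-cell edge length (textbook value, nm)
SI_ATOM_DIAMETER_NM = 0.22   # approximate covalent diameter of a silicon atom (nm)

feature_nm = 2.0
print(f"{feature_nm} nm is about {feature_nm / SI_LATTICE_NM:.1f} silicon unit cells,")
print(f"or roughly {feature_nm / SI_ATOM_DIAMETER_NM:.0f} atoms across.")
```

At that point a feature only spans a handful of atoms, which is why the discussion below keeps coming back to counting atoms.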
Brasky:

Is there a size limit where it can't physically get any smaller?
There will be a point (probably under 1nm) where there is no benefit in shrinking further, as quantum tunneling effects counteract any gains from the reduced structure size. Even if it is possible to make transistors smaller, they won't be faster or consume less power. My guess is that the next evolution in performance will be moving into 3D (like the stacked cells in an SSD), but for logic transistors that only works if a way is found to properly cool the inner layers.
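A minimal sketch of why tunneling bites so hard at these sizes, using the standard WKB approximation for a rectangular barrier (the barrier height and thicknesses below are illustrative assumptions, not data for any real process):

```python
import math

# WKB estimate of electron tunneling through a rectangular barrier:
#   T ~ exp(-2 * d * sqrt(2 * m * phi) / hbar)
HBAR = 1.054571817e-34   # reduced Planck constant (J*s)
M_E = 9.1093837015e-31   # electron mass (kg)
EV = 1.602176634e-19     # joules per electron-volt

def tunnel_probability(thickness_nm: float, barrier_ev: float = 3.1) -> float:
    """Rough transmission probability for an electron hitting the barrier."""
    d = thickness_nm * 1e-9
    kappa = math.sqrt(2 * M_E * barrier_ev * EV) / HBAR
    return math.exp(-2 * kappa * d)

# Illustrative thicknesses only: leakage grows by orders of magnitude per lost nanometre.
for d_nm in (2.0, 1.5, 1.0, 0.5):
    print(f"{d_nm:.1f} nm barrier -> T ~ {tunnel_probability(d_nm):.1e}")
```

The exact numbers depend on the barrier material, but the exponential trend is the reason shrinking eventually stops paying off.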
There was a time, not so long ago, when 5 nm was considered an impossible target. Now they have 3 nm and are talking about 2 nm. As @Hilbert Hagedoorn wrote, at this point it is being counted in atoms. Freaking atoms. Some day in the far future we may have CPUs built atom by atom, like from a 3D printer. And maybe in a few thousand years even the atoms themselves will be manufactured from different energy fields.
wavetrex:

There will be a point (probably under 1nm) where there is no benefit in shrinking further, as quantum tunneling effects counteract any gains from the reduced structure size. Even if it is possible to make transistors smaller, they won't be faster or consume less power. My guess is that the next evolution in performance will be moving into 3D (like the stacked cells in an SSD), but for logic transistors that only works if a way is found to properly cool the inner layers.
Low enough operational voltage and clock will take care of it.
Last year I got a 7nm 3700X CPU and also a smartphone with a 7nm SoC (although Google adds so many services to Android that I rarely see a real benefit compared to older Nexus phones with far less dense SoCs and the same battery capacity). I can't wait to have hardware with 2nm CPUs.
Brasky:

Is there a size limit where it can't physically get any smaller?
That's what she said.
Fun fact: 2nm won't technically be 2nm.
I'd really like to see optical computing start taking more concrete steps. Maybe it will happen once traditional silicon tech reaches its nanometre endpoint, probably first in hybrid solutions. Since PCIe 4.0 already gives developers trouble, I imagine replacing electrical PCIe with optical links would be very welcome. But then again, I'm not an engineer.
Astyanax:

Fun fact: 2nm won't technically be 2nm.
Neither was 7nm, 14nm, or 22nm. The definition of process size is about as much marketing as it is technical. IOW: it's fine to be anal, just don't think you're special if you're living in ancient Greece 😀
:D Graphene and carbon nanotubes ahoy
Every company calls their products whatever they wish, for marketing reasons. Intel's advanced 14nm is much denser than TSMC 12nm and even slightly denser than GF 12nm. Source: https://en.wikipedia.org/wiki/14_nm_process Likewise, TSMC 7nm and Intel 10nm are very similar (minus the fact that there are no Intel products using 10nm at all). Source: https://en.wikipedia.org/wiki/7_nm_process So if TSMC calls something 2nm, that doesn't mean it's actually 2nm.

What really matters is how to contain the flow of electrons. We need clear 1s and 0s, and we can't have leakage or we get errors. Smaller means lower voltage, and probably a point at which we get less frequency (maybe that's why Intel 10nm failed). I think we're already hitting an economic wall at 7nm and they're hammering at it with lots of R&D money. I don't know where the physical limit is, but economics will play a big part.

As for what we do next, I think 3D is one option. Heat dissipation could be an issue, but if we put the low-power components at the bottom and the high-power ones on top, we could get away with a first generation. Other options are lowering voltages across the board in favour of a denser chip, or reinventing cooling to keep it all cooler.
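To put the naming point in rough numbers, here is a hedged sketch using approximate peak transistor densities commonly cited for these nodes (ballpark public figures; real density varies a lot with the cell library and the actual design):

```python
# Approximate peak transistor densities, in millions of transistors per mm^2.
# These are commonly cited ballpark figures, not official or guaranteed numbers.
APPROX_DENSITY_MTR_MM2 = {
    "TSMC 16/12nm": 28.9,
    "Intel 14nm":   37.5,
    "TSMC 7nm":     91.2,
    "Intel 10nm":  100.8,
}

baseline = APPROX_DENSITY_MTR_MM2["TSMC 16/12nm"]
for node, density in APPROX_DENSITY_MTR_MM2.items():
    print(f"{node:>12}: ~{density:6.1f} MTr/mm^2  ({density / baseline:.1f}x TSMC 16/12nm)")
```

On figures like these, Intel's "14nm" comes out ahead of TSMC's "12nm" and Intel's "10nm" lands in the same ballpark as TSMC's "7nm", which is exactly the point about the names being marketing.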
Hilbert Hagedoorn:

Your girlfriend would disagree, but yeah, smaller is better
Kudos for the unexpected joke, you got me there LOL 😛
What I'm more curious about is transistor aging. Last year I read: https://semiengineering.com/transistor-aging-intensifies-10nm/ and https://semiengineering.com/transistor-options-beyond-3nm/ So far there aren't many reports on it, other than from people who overclocked their CPUs and saw them degrade quite fast, and I know many of them overclocked above the safe voltage range, so it doesn't necessarily mean that smaller nm = faster degradation. Still, I believe there is some trade-off for a more efficient chip in some ways. I suppose they design the chips to last at least through the warranty period, so around 5 years, before any degradation shows.
So back in the day, about 20 years ago, I read a leaked document about microchip development and a planned introduction into human beings. The report stated that the goal was to reach a 2nm manufacturing node, and that at that point they could start integrating the chips into people.
Silva:

Every company calls their products whatever they wish, for marketing reasons. Intel's advanced 14nm is much denser than TSMC 12nm and even slightly denser than GF 12nm. Source: https://en.wikipedia.org/wiki/14_nm_process Likewise, TSMC 7nm and Intel 10nm are very similar (minus the fact that there are no Intel products using 10nm at all). Source: https://en.wikipedia.org/wiki/7_nm_process So if TSMC calls something 2nm, that doesn't mean it's actually 2nm.
I think most of us are aware of this; however, I do think people put a little too much emphasis on transistor size when it comes to performance and efficiency. Performance improvements were noticeable back in the days of shaving 15nm² off each transistor; now we're talking 2 or 3, and that isn't going to make a big difference to the consumer. The reason manufacturers are pushing for it is to fit more product on a single wafer.
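As a rough illustration of the "more product per wafer" point, here is a sketch using the common first-order dies-per-wafer approximation (the die sizes are made up for illustration; no defect or yield modelling):

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """First-order estimate of whole dies on a round wafer, ignoring defects."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Hypothetical chip: 100 mm^2 on the old node, ~70 mm^2 after a 30% area shrink.
print(dies_per_wafer(100))  # roughly 640 candidate dies per 300 mm wafer
print(dies_per_wafer(70))   # roughly 930 -- the same wafer now yields ~45% more chips
```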
As for what we do next, I think 3D is one option. Heat dissipation could be an issue, but if we put the low-power components at the bottom and the high-power ones on top, we could get away with a first generation. Other options are lowering voltages across the board in favour of a denser chip, or reinventing cooling to keep it all cooler.
I agree with this. Stacking appears to be the only sensible choice for the future, at least for GPUs. Heat won't necessarily be a problem if voltages are lowered. Think of it like this: imagine having the number of transistors found in something like an RTX 2080 Ti, then doubling it. Heat output does not rise linearly with voltage or clock speed, so if you slow down each transistor (which you kind of need to do for these tiny nodes anyway), the bottom layer might just barely run cool enough to offer some insane performance. I doubt we can achieve a triple-layer stack without serious thermal issues, though I'd love to be proven wrong.
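The voltage intuition can be made concrete with the usual dynamic-power relation, P ≈ C·V²·f. A minimal sketch with purely illustrative numbers (not measurements of any real GPU):

```python
def dynamic_power(rel_capacitance: float, voltage: float, freq_ghz: float) -> float:
    """Relative dynamic switching power, P ~ C * V^2 * f (arbitrary units)."""
    return rel_capacitance * voltage**2 * freq_ghz

# Hypothetical single-layer chip at its stock voltage and clock.
base = dynamic_power(1.0, 1.00, 2.0)

# Double the transistors (a stacked second layer), but drop voltage and clock by ~20%.
stacked = dynamic_power(2.0, 0.80, 1.6)

print(f"~{stacked / base:.2f}x the power for ~2x the transistors")
```

Because voltage enters squared, a modest undervolt plus a lower clock roughly pays for the doubled transistor count, which is the whole bet behind stacking.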
What happened to the 10nm limit of silicon?
ToxicTaZ:

Who ever said 10nm was the limit? Well, you're right in one way! Intel 10nm++, as in next year's 16/32 core big.LITTLE architecture "Alder Lake", is Intel's last use of silicon. Intel is moving to graphene with its 7nm+/++ EUV from the new Fab 42 factory. Graphene will allow Intel to move forward quickly through 7nm, 5nm, 3nm, 2nm and 1.4nm by 2030! The first CPU line out of the Fab 42 factory is Intel "Meteor Lake" with the Ocean Cove core design (over 80% IPC gain over Intel's 10th generation), made by the father of Ryzen himself, Jim Keller.

10th gen, Q2 2020: Comet Lake, 14nm++, Skylake
11th gen, Q1 2021: Rocket Lake, 14nm++, Willow Cove (25% IPC gain)
12th gen, Q4 2021: Alder Lake, 10nm++, Golden Cove (50% IPC gain)
13th gen, Q4 2022: Meteor Lake, 7nm+, Ocean Cove (80%+ IPC gain)

I'm betting Intel 7nm is almost on par with TSMC 2nm. TSMC's nm numbering schemes are out to lunch.
Can you stop pasting nonsense?
I think we will reach the limits of silicon before the limits of physics. Somewhere around 1nm to 0.8nm we will have to switch from silicon to other materials; I've already read about graphene and other complex substances, which are still too expensive. I think manufacturers are trying to squeeze the maximum out of silicon because the prices are "low": nobody would pay €5000 for a mid-range desktop processor based on, let's say, graphene. People mention quantum computing, but I think that's more like 15-20 years into the future (before we can build a quantum computer that can do general computing, not just pre-defined specific tasks).
It's wafer thin!!!
ToxicTaZ:

Who ever said 10nm was the limit? Well, you're right in one way! Intel 10nm++, as in next year's 16/32 core big.LITTLE architecture "Alder Lake", is Intel's last use of silicon. Intel is moving to graphene with its 7nm+/++ EUV from the new Fab 42 factory. Graphene will allow Intel to move forward quickly through 7nm, 5nm, 3nm, 2nm and 1.4nm by 2030! The first CPU line out of the Fab 42 factory is Intel "Meteor Lake" with the Ocean Cove core design (over 80% IPC gain over Intel's 10th generation), made by the father of Ryzen himself, Jim Keller.

10th gen, Q2 2020: Comet Lake, 14nm++, Skylake
11th gen, Q1 2021: Rocket Lake, 14nm++, Willow Cove (25% IPC gain)
12th gen, Q4 2021: Alder Lake, 10nm++, Golden Cove (50% IPC gain)
13th gen, Q4 2022: Meteor Lake, 7nm+, Ocean Cove (80%+ IPC gain)

I'm betting Intel 7nm is almost on par with TSMC 2nm. TSMC's nm numbering schemes are out to lunch.
LOL, good one, you made me laugh today, thanks. Yes, of course, after 51 years of making x86 CPUs Intel is going to get 80% IPC gains in the next two years. Love the optimism.