Samsung Talks About Chip Fab Production Roadmap up to 3 nanometers

Sheesh... Many of us are still running 32nm Sandy Bridge, 22nm Ivy/Haswell, and the occasional 14nm stuff, and these guys are already discussing 3nm. I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
wavetrex:

Sheesh... Many of us are still running 32nm Sandy Bridge, 22nm Ivy/Haswell, and the occasional 14nm stuff, and these guys are already discussing 3nm. I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography is not what matters most; architecture is.
Seems too ambitious IMO. It is a shame, though, that a company like Samsung focuses mainly on mobile/enterprise...
FrostNixon:

Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography is not what matters most; architecture is.
The main concern for Intel is profit (as it is for any company that plans to stay alive). The smaller the process technology, the more units they get out of a single manufactured wafer. Of equal importance is the improved energy efficiency, which is a decisive selling factor in many market sectors. So yeah, while Intel had no interest in developing their architecture, they did try to develop the process technology.
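To put a rough number on the "more units per wafer" point, here's a back-of-the-envelope sketch using the standard gross-dies-per-wafer approximation. The die sizes and the ideal 50% shrink are made-up illustration numbers, and this ignores yield, scribe lines, and edge exclusion:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order gross dies-per-wafer estimate: wafer area divided by
    die area, minus a correction for partial dies wasted at the edge."""
    d = wafer_diameter_mm
    dpw = (math.pi * (d / 2) ** 2) / die_area_mm2 \
          - (math.pi * d) / math.sqrt(2 * die_area_mm2)
    return int(dpw)

big = dies_per_wafer(150.0)   # a hypothetical ~150 mm^2 die
small = dies_per_wafer(75.0)  # same design after an ideal ~50% area shrink
```

Note that halving the die area slightly *more* than doubles the gross die count, because smaller dies also waste less area at the wafer edge.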
FrostNixon:

Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography is not what matters most; architecture is.
Don't discount the importance of lithography; Nvidia/AMD gained a ton of performance by switching to a smaller node just because it made higher frequencies possible. The architectural differences were minimal. It's worth mentioning that the node jump was pretty substantial, however. On the other hand, Intel has been the leader when it comes to lithography, yet the performance wasn't there because they were sitting on their laurels instead of actually improving their CPUs. Ryzen was a big comeback for AMD, that is absolutely true. But if Intel had properly worked on their CPUs during these past years, Ryzen would've simply been a competitor. Instead, Ryzen is stepping on Intel's face over and over again, and I think it's going to get even more brutal next generation.
FrostNixon:

Well, Intel has been dropping nanometers almost every year, yet the performance increase has been almost unnoticeable, so lithography is not what matters most; architecture is.
The process node is the backbone of any processor. While the gains are not as massive as before, it is still the most significant improvement we can get: you get not just more components in the same area, but also efficiency and power-delivery improvements that affect performance directly (and of course, frequency should improve too).
I'm more interested in what happens past 1nm. Will there be a Nano-Centimetre? Quantum computing? Or something radically different?
wavetrex:

Sheesh... Many of us are still running 32nm Sandy Bridge, 22nm Ivy/Haswell, and the occasional 14nm stuff, and these guys are already discussing 3nm. I wonder what kind of processing power will be possible with mature "3nm" tech. Trillions of transistors?
You have to have future plans. Otherwise your stock value goes to hell the closer you are to the end of your business plan. Imagine you are the leader in technology, and then you stop in place and let everyone catch up to you.
To me this is Samsung responding to TSMC, essentially saying "we're bigger, badder, and all-around better", even if it's not true. And folks... you are ignoring the elephant in the room: Apple. Apple has been paying incentives for process shrinkage ever since they went A8, over $2 billion to date, and they might ditch Intel sooner than thought for regular computing. Both their own (future) designs and AMD's are testing faster at lower power as SoCs. And if you haven't noticed, microprocessors are becoming more and more SoC-like (esp. Ryzen-based).
cryohellinc:

I'm more interested in what happens past 1nm. Will there be a Nano-Centimetre? Quantum computing? Or something radically different?
Nano-centimeter doesn't make sense; picometer is the next step down. Quantum computers are a very different "species" of computer. They don't use traditional transistors or binary calculations, so their development doesn't really have much in common with conventional chips. I personally don't see quantum computers being available for home use in the foreseeable future, at least not as they're used now. They're ideal for science-based calculations with massive and complex numbers, but not a whole lot else. Much like a CPU vs a GPU, quantum computers are good at some things and worse at others.
tunejunky:

And if you haven't noticed, microprocessors are becoming more and more SoC-like (esp. Ryzen-based).
I would actually argue Ryzen is the least SoC-ish of mainstream processors, whereas ARM is the most. Almost all of your phone's capabilities are packed into one chip; everything else is just power regulation, connectors, and sensors that need to be positioned elsewhere. Intel also has some SoCs that don't have an external chipset.
Well, 3nm talk... Not sure how hard they can push it... After that they have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1 atom of thickness? Good luck going that thin though 😛
Venix:

Well, 3nm talk... Not sure how hard they can push it... After that they have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1 atom of thickness? Good luck going that thin though 😛
Single-atom transistors or switches are a real thing. The tricky part is figuring out how to make use of them, let alone on a mass-produced scale. Just because you can go smaller, that doesn't mean you'll benefit from it. This is why I think Intel has been taking so long with 10nm - I'm sure they had it working well over a year ago, but it resulted in lower clocks. The advantages of such a die shrink are outweighed by the cons of slowing down the CPU.
Venix:

Well, 3nm talk... Not sure how hard they can push it... After that they have to change the material to something more efficient than sand. I guess the ultimate shrink is down to 1 atom of thickness? Good luck going that thin though 😛
How do you shape a transistor and insulate it if it has to be sized at one atom? Then you have just one atom to do all that with. As for a 3nm transistor, that's only ~11 silicon atoms in length. Luckily for technology, silicon takes up quite a lot of space relative to its weight, so many other elements, lighter or heavier, can fit more atoms into the same area.
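The "~11 silicon atoms" figure holds up as a back-of-the-envelope estimate, assuming an effective Si-Si spacing of roughly 0.27 nm (the exact value depends on crystal orientation):

```python
# Rough sanity check of the ~11-silicon-atoms claim above.
# 0.27 nm is an assumed effective atom-to-atom spacing in silicon;
# the true spacing varies with lattice direction.
si_spacing_nm = 0.27
feature_nm = 3.0
atoms_across = feature_nm / si_spacing_nm  # ~11.1 atoms across a 3 nm feature
```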
Still bigger than my penis. Nah, I keed! But really, they're going to need a new material soon. Exciting times!
Fox and shim, fair points. What I wanted to say is that we really need to find a new material. And then again, how long would it take to reach the nm limits? I believe we are approaching an era where fabs will not be able to shrink anymore... except... if Ant-Man helps us? 😛
Venix:

Fox and shim, fair points. What I wanted to say is that we really need to find a new material. And then again, how long would it take to reach the nm limits? I believe we are approaching an era where fabs will not be able to shrink anymore... except... if Ant-Man helps us? 😛
Finding a new material is easier said than done. Keep in mind that it is no coincidence that silicon is used for transistors. It has all of the right properties to make for a good one: it's abundant and cheap, it's a semiconductor, it has relatively small atoms, it's tetravalent (this is important), and the other elements it can be doped with have been well-researched at this point. So, take a look at all of the other potential candidates:

* Tin - too expensive, too low of a melting point, and too conductive.
* Lead - biohazard, large atoms, and too conductive.
* Flerovium - synthetic, and therefore utterly useless.
* Germanium - can and actually has been used in transistors (in fact, it was used for the first ever transistor), but it's expensive and more picky about the manufacturing process (and in case you're not aware, silicon is pretty damn picky). It might be good for special-use cases, but not for mass production.
* Looking beyond "group 14", there are potential candidates like gallium arsenide, but I get the impression those are only suitable for proofs of concept rather than practical approaches. They definitely wouldn't help in terms of reducing transistor size (besides, gallium is relatively expensive and arsenic is a biohazard, so that doesn't help).

So, that just leaves us with carbon. Carbon is being investigated for use in transistors, and it is thought to maybe be the successor to silicon. The problem with carbon is figuring out a cost-effective way to manufacture transistors from it, because otherwise the element itself is very cheap and abundant. Anyway, I don't think we really need to ditch silicon any time soon. I think one of the reasons so many companies are investing in AI lately is because they're trying to use AI to create new processor architectures. An AI could notice something humans may have never thought of before and get us a lot more efficiency and speed out of our designs.
Besides, look at a CPU architecture vs a GPU architecture with the same number of transistors: depending on the task, one will decimate the other. But who says it has to be that way? It may be possible to create a design that obsoletes both CPUs and GPUs (as we know them). Such a design could have a lot of potential benefits, like having everything in shared memory (integrated GPUs still work relatively independently of the CPU), and a lot of time would be saved by not needing to communicate over PCIe. Maybe such a CPU could even be modular, where you could basically add more cores over PCIe if you really needed to.
...Actually, Ryzen is the fruition of decades of SoC design, as is "Infinity Fabric". The entire scalable design was done simply because SoCs were unwieldy and less efficient than thought, and were (previously) the largest single market for AMD. Ryzen facilitated much more advanced SoCs (see Xbox/PlayStation) and industry-specific SoCs, but the design of the CPU itself was driven by SoC designers.
tunejunky:

...Actually, Ryzen is the fruition of decades of SoC design, as is "Infinity Fabric". The entire scalable design was done simply because SoCs were unwieldy and less efficient than thought, and were (previously) the largest single market for AMD. Ryzen facilitated much more advanced SoCs (see Xbox/PlayStation) and industry-specific SoCs, but the design of the CPU itself was driven by SoC designers.
Infinity Fabric doesn't make a product an SoC; IF is just a feature that makes SoCs much easier to make. So AMD's design has the potential to be the "most SoC-ish product ever made", but currently AMD does not hold that title. What makes an SoC is how many components you integrate into a single chip, hence the name. A Threadripper CPU with a discrete chipset, a discrete GPU, discrete RAM, etc. is barely an SoC, because you have so many core components in separate chips. Compare that to some ARM processors, where the CPU, GPU, USB controller, sensor controllers, storage controller, PCIe lanes, and even the RAM are all integrated into one unified package. That is what makes an SoC; it's literally the entire system on a chip.
Even if carbon is viable, there is still the matter of production. To my understanding, silicon fabs cost billions to build, and building a new fab to make carbon transistors is not something Samsung, GloFo, TSMC, or Intel will invest in if the result is not vastly superior, and they have to be sure it will work!
Venix:

Fox and shim, fair points. What I wanted to say is that we really need to find a new material. And then again, how long would it take to reach the nm limits? I believe we are approaching an era where fabs will not be able to shrink anymore... except... if Ant-Man helps us? 😛
Yes, there is a limit. And then the money poured into researching those new molecular combinations goes through the roof compared to what it is now (as all the resources invested here in shrinkage will become useless). That new material will either work at a lower voltage (smaller power consumption, less leakage, ...), allowing multiple transistor layers to be built on top of each other without causing thermal issues, or it will deliver the ability to clock at a magnitude higher clock. Then a 1~2-core chip (even if built on a much larger node) outperforms today's 10~20-core chips. And Moore's Law resets, as that new 1~2-core chip needs just a fraction of the transistors we have today.
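The few-fast-cores trade-off above can be sketched with some toy arithmetic. The 10x clock figure is hypothetical, and this ignores parallelization overhead, which in practice would only favor the few-core chip further:

```python
# Toy comparison: many slow cores vs a hypothetical new-material chip
# with far fewer cores but a "magnitude higher" clock.
cores_today, clock_today = 20, 1.0   # normalized clock speed
cores_new, clock_new = 2, 10.0       # hypothetical 10x clock

peak_today = cores_today * clock_today  # 20.0
peak_new = cores_new * clock_new        # 20.0

# Equal peak throughput, but single-threaded work runs 10x faster on
# the new chip, and poorly-parallelized code never reaches the peak
# on 20 slow cores anyway.
single_thread_speedup = clock_new / clock_today  # 10.0
```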
Venix:

Even if carbon is viable then there is a matter of production to my understanding the silicon fabs cost billions to make making a new fab to make carbon transiators it is not something samsung or glofo or tsmc or intel will invest if the resault is not vastly superior and they have to be sure it will work !
They are still building new factories for new nodes or to expand production. Having to build a factory for a new technology is no different, as long as they believe the concept is solid and the chips will not deteriorate within 5~6 years of being sold.