TSMC: Moore's Law not slowing down - talks 7nm and 10nm

https://forums.guru3d.com/data/avatars/m/227/227853.jpg
I don't understand one thing. If they're saying that 16nm and 10nm are okay, why haven't these manufacturing processes been adopted by the likes of Nvidia and AMD? I'm excluding Intel because they have their own foundries.
https://forums.guru3d.com/data/avatars/m/223/223176.jpg
They're more or less still at the R&D/trial stage; just because they're doing OK with production testing doesn't mean they're ready for mass wafer production. There's also tooling involved, and process variants like Low-Power, High-Performance and High-Performance-Low-Power. With each step they get better yields and less voltage leakage, which matters most for the large die sizes of GPU/CPU silicon, as these chips run at much higher clock speeds than, say, phone chips. The first 16nm/10nm parts will probably be made for smartphone devices, then the rest should follow.
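To put rough numbers on the yield point: under the classic Poisson yield model, the fraction of good dies is Y = exp(-D*A), so yield falls off exponentially with die area. A minimal sketch in Python, with made-up defect densities and die sizes purely for illustration (none of these figures come from TSMC):

```python
import math

def die_yield(defect_density: float, die_area: float) -> float:
    """Poisson yield model: fraction of defect-free dies.

    defect_density: defects per cm^2 (D0); die_area: die size in cm^2.
    """
    return math.exp(-defect_density * die_area)

# Hypothetical numbers for an immature process, for illustration only.
d0 = 0.5          # defects/cm^2 early in a node's life
phone_soc = 1.0   # cm^2, small mobile SoC
big_gpu = 5.5     # cm^2, large GPU die

print(f"phone SoC yield: {die_yield(d0, phone_soc):.0%}")  # ~61%
print(f"big GPU yield:   {die_yield(d0, big_gpu):.0%}")    # ~6%
```

The same defect density that still gives a workable yield on a small phone chip makes a large GPU die nearly unmanufacturable, which is why small mobile parts lead on a new node.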
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
A GPU switches a massive number of transistors in parallel compared to a CPU. Current FinFET processes aren't ready for that kind of switching; they're essentially optimized for smaller chips that switch more frequently but switch fewer transistors at a time. 16nm FF+ at GF will probably be the first FinFET process optimized for GPU manufacturing. It isn't expected to hit full production till July/August though, so the first products probably won't be out till Q1 2016. I also predict that, similarly to what you saw with Intel's Tri-Gate (Sandy -> Ivy), overclocking will be severely impacted by the switch to FinFET. The manufacturing process for FinFET designs is optimized for very specific loads.
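The intuition behind the parallel-switching point is the dynamic power equation P = a * C * V^2 * f: power scales with how much capacitance actually switches each cycle, not just with clock speed. A toy comparison with invented numbers (nothing here is measured from any real chip):

```python
def dynamic_power(activity: float, switched_cap: float, vdd: float, freq: float) -> float:
    """Dynamic switching power in watts: P = a * C * V^2 * f.

    activity: fraction of transistors switching per cycle,
    switched_cap: total switchable capacitance in farads,
    vdd: supply voltage in volts, freq: clock in Hz.
    """
    return activity * switched_cap * vdd ** 2 * freq

# Hypothetical figures for illustration only.
gpu = dynamic_power(activity=0.5, switched_cap=400e-9, vdd=1.0, freq=1.0e9)
cpu = dynamic_power(activity=0.1, switched_cap=100e-9, vdd=1.2, freq=4.0e9)

print(f"GPU-like load: {gpu:.0f} W")  # 200 W at only 1 GHz
print(f"CPU-like load: {cpu:.0f} W")  # ~58 W at 4 GHz
```

Even at a quarter of the clock speed, the GPU-like load burns more power because far more capacitance toggles per cycle, and that's the switching profile a GPU-oriented process has to be tuned for.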
https://forums.guru3d.com/data/avatars/m/248/248627.jpg
^ Intel also made a huge mistake by cheaping out on the thermal paste under the heat spreader. GPUs for the most part don't use a heat spreader, so it shouldn't be as much of an issue, though you're right that overclocking will most likely be affected.
https://forums.guru3d.com/data/avatars/m/227/227853.jpg
Hmm, I understand. Both arguments make sense. Thanks for the insight.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
> 16nm FF+ at GF will probably be the first FinFET process optimized for GPU manufacturing.
Did you mean 16nm FF+ at TSMC or 14nm FF at GloFo? My bet is on GloFo.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
Yeah, I meant TSMC. The 14nm FF at GlobalFoundries is phone-SoC-only atm, same with Samsung's (as it's their design). http://www.pcper.com/reviews/Editorial/28-nm-GPUs-Extended-Through-2015-20-nm-Planar-Bust The podcast went into more detail: 14nm FF (with its 20nm interconnect) will eventually be good for GPUs, but not until next year, same with 16nm FF+. 16nm FF+ does hit full production this year though, so I think TSMC will be first for a bit. After that I'm curious what's going to happen. 12/10nm might be achievable by these companies, but I think Intel is going to end up switching to a completely new design/material by 7nm or after it. Their 14nm process is what, 6-8 months late? What do you do when literally the best fab engineers in the world are working on your problem and they still can't solve it? It's going to be interesting.
https://forums.guru3d.com/data/avatars/m/239/239175.jpg
"Moore's Law not slowing down" Really? Show me the CPUs that doubled the processing speed every 18 months for the last 5 years. I'm waiting. If you write such claim on an article, you need something to back it up. Like facts, for example.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
"Moore's Law not slowing down" Really? Show me the CPUs that doubled the processing speed every 18 months for the last 5 years. I'm waiting. If you write such claim on an article, you need something to back it up. Like facts, for example.
If you're going to post something like this, you should probably look up what Moore's law actually is. (Note that it's also more of an observation than a law.) "The number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented."

What you're talking about is what David House said: a chip's performance would double every 18 months. Both of these were observations, and depending on what you consider "performance", it probably still comes close to holding true. iGPU performance has definitely doubled fairly regularly, and performance/W has also doubled. Who is to say that David House wasn't referring to these things?

Plus, I think it was pretty obvious that with quantum effects at the nanoscale we would start running into fabrication issues. Intel quite literally has the best engineers on the planet working on the problem, and even they are having trouble figuring it out. I don't think someone like Moore could have possibly predicted this.

Additionally, Hilbert didn't write the claim, nor did the source article found here: http://community.cadence.com/cadence_blogs_8/b/fullerview/archive/2015/04/02/moore-s-law-not-slowing-down-tsmc-executive It's something the CEO said in an interview. He doesn't have to back anything up, because it's an interview and he's literally saying they plan on doubling transistor density all the way down to 7nm.
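To make the two cadences concrete, here's the back-of-envelope compounding (just arithmetic, not figures from the article):

```python
def growth_factor(years: float, doubling_period_months: float) -> float:
    """How much a quantity grows if it doubles every N months."""
    doublings = years * 12 / doubling_period_months
    return 2 ** doublings

# Moore's density observation (commonly cited as ~24-month doubling) vs.
# David House's 18-month performance doubling, compounded over the
# 5 years mentioned above.
print(f"24-month doubling over 5 years: x{growth_factor(5, 24):.1f}")  # ~x5.7
print(f"18-month doubling over 5 years: x{growth_factor(5, 18):.1f}")  # ~x10.1
```

So the "no CPU doubled every 18 months" complaint is really aimed at House's performance version, not at what TSMC is claiming here, which is transistor density.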
https://forums.guru3d.com/data/avatars/m/230/230424.jpg
If it is slowing down, then great. Engineers will do what they do best and find a solution, most probably one that will destroy the previous tech in terms of performance. Tech needs to hit a brick wall every now and then; it's what forces improvements to be made.