TSMC reportedly to use gate-all-around transistors for 2nm node
Yes sir, we're already talking about 2nm, folks. DigiTimes has an interesting piece up claiming that the 2nm node will introduce so-called gate-all-around (GAA) transistors, replacing FinFET.
TSMC 2nm GAA process development ahead of schedule: TSMC's development of its 2nm process technology, which has already exited the pathfinding stage, is ahead of schedule, according to industry sources. From the N2 node onward, TSMC would switch from the currently common FinFET transistors to so-called GAAFET transistors, of which nanowire designs are one variant. Ultimately, continued use of FinFET transistors will pose insurmountable difficulties: with current FinFETs, further shrinking produces too much current leakage, because the gate encloses the channel on only three sides.
A GAA transistor's gate encloses the channel completely, which solves the leakage-current complications. The disadvantage of such transistors is that they are far more complex to produce, though. Meanwhile, the N3 process should be well underway by the end of 2022.
Senior Member
Posts: 1665
Joined: 2017-02-14
You will see AMD pull the APU roadmap forward in 2021

Senior Member
Posts: 252
Joined: 2020-06-12
Schoolboy question here, and I'm going to assume any problems with what I'm suggesting are going to be down to latency/bandwidth:
Is there any potential future where AMD or Intel (or even Nvidia, if they start making CPUs) can improve gaming performance through an architecture that takes advantage of both an APU and a dGPU?
So some part of the game rendering is taken care of by hardware that is fundamentally more effective/efficient when placed on the GPU portion of the APU rather than the dGPU?
Senior Member
Posts: 14043
Joined: 2004-05-16
Schoolboy question here, and I'm going to assume any problems with what I'm suggesting are going to be down to latency/bandwidth:
Is there any potential future where AMD or Intel (or even Nvidia, if they start making CPUs) can improve gaming performance through an architecture that takes advantage of both an APU and a dGPU?
So some part of the game rendering is taken care of by hardware that is fundamentally more effective/efficient when placed on the GPU portion of the APU rather than the dGPU?
It's exactly what you said - latency and bandwidth.
What you're asking is basically the direction chips are going though. I highly suggest you read this whitepaper:
https://research.nvidia.com/sites/default/files/publications/ISCA-2017-MCMGPU.pdf
Similar to how AMD started doing multi-chip modules with their CPUs, GPUs will eventually go the same way (I actually predict next-generation HPC GPUs will be semi-MCM), which isn't too far off from what you're asking. If you follow the whitepaper, GPUs are far more sensitive to latency: even if you have 3-4x the current bandwidth of interconnects like Infinity Fabric, it's not enough to overcome the penalties from MCM latency (nanoseconds), let alone APU/dGPU latency (microseconds). There may be some workloads worth sharing, but there is a ton of engineering, specifically software scheduling, required to do that correctly, for potentially not much of a performance increase. It would also probably have to be tuned for every generation of APU/dGPU, which just complicates it further.
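To put rough numbers on that nanoseconds-versus-microseconds gap, here's a quick back-of-the-envelope sketch. The clock speed and latency figures are illustrative assumptions on my part, not values from the whitepaper:

```cpp
// How many GPU clock cycles one round-trip stall costs at different
// interconnect latencies. All numbers below are assumed for illustration.
#include <cstdio>

int main() {
    const double gpu_clock_ghz = 1.8;    // assumed GPU core clock
    const double mcm_hop_ns    = 50.0;   // assumed on-package (MCM) round trip
    const double apu_dgpu_ns   = 2000.0; // assumed APU<->dGPU round trip (~2 us)

    // 1 GHz = 1 cycle per nanosecond, so cycles = latency_ns * clock_ghz
    printf("MCM hop stall:    %.0f cycles\n", mcm_hop_ns  * gpu_clock_ghz); // ~90
    printf("APU<->dGPU stall: %.0f cycles\n", apu_dgpu_ns * gpu_clock_ghz); // ~3600
    return 0;
}
```

Even with generous assumptions, every cross-device handoff burns thousands of GPU cycles, which is why the scheduling overhead tends to eat whatever the extra silicon would contribute.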
Senior Member
Posts: 1665
Joined: 2017-02-14
Schoolboy question here, and I'm going to assume any problems with what I'm suggesting are going to be down to latency/bandwidth:
Is there any potential future where AMD or Intel (or even Nvidia, if they start making CPUs) can improve gaming performance through an architecture that takes advantage of both an APU and a dGPU?
So some part of the game rendering is taken care of by hardware that is fundamentally more effective/efficient when placed on the GPU portion of the APU rather than the dGPU?
Yes. You already see the initial designs from AMD in the HPC space, but they're large and cost too much. It is very likely that, as HBM gets cheaper and is made in higher volumes, you will see APUs with a CPU, a GPU, and, say, 64GB of shared HBM. This is why HBM was invented in the first place: for when companies move to chiplet-like approaches using 3D stacking.
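To illustrate what a shared CPU+GPU memory pool means on the software side, here's a minimal sketch using CUDA managed memory as a stand-in. The API is real, but treating it as equivalent to an HBM-backed APU pool is my assumption, since on a discrete GPU the driver migrates pages behind the scenes rather than the memory being physically shared:

```cpp
// One allocation visible to both CPU and GPU, with no explicit copies.
// On a shared-HBM APU this sharing would be native; with cudaMallocManaged
// on a discrete GPU it is emulated via driver-managed page migration.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float* data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;
    cudaMallocManaged(&data, n * sizeof(float));    // single pool, both sides see it

    for (int i = 0; i < n; ++i) data[i] = 1.0f;     // CPU writes directly
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f); // GPU works on the same pointer
    cudaDeviceSynchronize();

    printf("data[0] = %.1f\n", data[0]);            // CPU reads the GPU result: 2.0
    cudaFree(data);
    return 0;
}
```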
Senior Member
Posts: 3366
Joined: 2013-03-10
Seeing how AMD hasn't been in any hurry to upgrade the APUs from the ancient GCN architecture, I don't think you are looking at it realistically. Even more so with Intel now flexing its muscles in the discrete video card market. It's simply much more profitable for a company to sell a CPU and a GPU separately than both in the same package. Furthermore, people like to upgrade the video card more often than the CPU+mobo, so it's a big business on its own, for both the GPU chip manufacturer and the video card manufacturer partners. As far as games go, CPUs are not the limit currently; GPUs, however, are. That shows how difficult it is to make powerful enough GPUs, even if you make the chip gargantuan and let it guzzle electricity like there's no tomorrow. So, no iGPU is going to be enough any time soon.