TSMC to use "gate-all-around transistors" for 2nm node
Kaarme
They certainly look more complex, based on the illustration.
Herem
I wonder how many pluses Intel will need to compete with this?
nevcairiel
Intel is also developing GAAFET technology for use in what's presumed to be called their 5nm node (note: those node names are marketing labels at this point and not representative of any actual feature sizes on the chips).
The funny part is that the "source" link in the above article actually points to the AnandTech article about Intel's GAAFET plans, and not to anything related to TSMC.
mrkuro
[youtube=1kQUXpZpLXI] Comparing TSMC nm vs Intel nm or Samsung nm is pointless and always has been. It's just pure marketing and speculation.
JamesSneed
So around 2024 we should see Ryzen CPUs on TSMC's 2nm process. That is crazy. The transistor density is going to be off the charts. I have a feeling the APU is going to take over when we get down to this density. We are talking about an expected 3.5x density improvement over 7nm. The largest Navi, at a rumored 505mm², would shrink to about 144mm², which would make a nice little chiplet to go along with a small 16-core CPU chiplet. Things are going to get weird.
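The die-shrink math in that comment checks out; here is a quick sketch (the 3.5x density figure and 505mm² Navi area are taken from the comment itself, and it assumes area scales inversely with density, i.e. a 1:1 port of the design):

```python
# Sketch: if 2nm delivers ~3.5x the transistor density of 7nm,
# a die of the same transistor count shrinks by that factor.
density_gain = 3.5        # expected 7nm -> 2nm density improvement
navi_7nm_mm2 = 505        # rumored area of the largest Navi die at 7nm
navi_2nm_mm2 = navi_7nm_mm2 / density_gain
print(f"{navi_2nm_mm2:.0f} mm^2")  # ~144 mm^2
```

In practice analog and I/O portions of a die shrink far less than logic, so the real area would land somewhat higher than this ideal figure.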
TheSissyOfFremont
Schoolboy question here, and I'm going to assume any problems with what I'm suggesting are going to be down to latency/bandwidth:
Is there any potential future where AMD or Intel (or even Nvidia, if they start making CPUs) can improve gaming performance through an architecture that takes advantage of an APU plus a dGPU?
So some part of the game rendering is taken care of by hardware that is fundamentally more effective/efficient being placed on the GPU portion of the APU rather than the dGPU?
Denial
It's exactly what you said - latency and bandwidth.
What you're asking is basically the direction chips are going, though. I highly suggest you read this whitepaper:
https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf
Similar to how AMD started doing multi-chip modules with their CPUs, eventually GPUs will be the same (I actually predict HPC GPUs of the next generation will be semi-MCM), which isn't too far off what you're asking. If you follow the whitepaper, GPUs are way more sensitive to latency - even if you have 3-4x the current bandwidth of interconnects like Infinity Fabric, it's not enough to overcome the penalties from MCM latency (nanoseconds), let alone APU/dGPU latency (microseconds). There may be some workloads worth sharing, but there is a ton of engineering, specifically software scheduling, required to do that correctly for potentially not much of a performance increase. It would also probably have to be tuned for every generation of APU/dGPU, which just complicates it further.
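The nanoseconds-vs-microseconds point can be made concrete with a back-of-the-envelope transfer-time model (all latency and bandwidth numbers below are illustrative assumptions, not measurements of any real interconnect):

```python
# Rough model: transfer time = fixed latency + bytes / bandwidth.
def transfer_us(num_bytes, latency_us, bw_gbps):
    # 1 GB/s == 1e3 bytes per microsecond
    return latency_us + num_bytes / (bw_gbps * 1e3)

msg = 4096  # a small 4 KiB message, typical of fine-grained GPU traffic
mcm = transfer_us(msg, latency_us=0.05, bw_gbps=100)   # ~50 ns on-package link
split = transfer_us(msg, latency_us=5.0, bw_gbps=32)   # ~5 us PCIe-class hop

print(f"on-package MCM: {mcm:.3f} us, APU<->dGPU: {split:.3f} us")
```

For small transfers the fixed latency term dominates completely, so even tripling the bandwidth of the APU-to-dGPU link barely moves the result - which is the whitepaper's argument in miniature.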