140 billion transistors for Nvidia Hopper GH100 GPU?
GH100, NVIDIA's next-generation data center "Hopper" GPU, will have some truly mind-blowing specs, according to recent rumors posted on the Chiphell forums.
As claimed by user zhangzhonghao, the transistor count of this GPU will be 140 billion, an astounding number that far exceeds current flagship data center GPUs such as AMD's Aldebaran (58.2 billion transistors) and NVIDIA's GA100 (54.2 billion transistors). Prior reports stated that NVIDIA's GH100 would be manufactured on a 5-nanometer process and would have a die size of close to 1000 mm², which would make it the world's largest graphics processing unit.
Furthermore, according to reports, Nvidia has developed what is called the COPA solution, based on the Hopper architecture. A COPA-GPU is a domain-specialized composable GPU architecture capable of providing high levels of GPU design reuse across the HPC and deep learning domains, while enabling products specifically optimized for each domain. With two separate designs based on the same architecture, Nvidia will supply two distinct solutions: one for high-performance computing (HPC) and another for deep learning (DL). While the HPC variant will use the standard approach, the DL variant will make use of a vast independent cache coupled to the graphics processor.
Nvidia is expected to unveil and detail the Hopper architecture GPU at GTC 2022.
Senior Member
Posts: 1665
Joined: 2017-02-14
Maybe if it's multiple dies, even if they are directly connected on silicon like Cerebras. It's not like Nvidia makes its own chips, so we would all know if TSMC or Samsung had figured out how to increase the reticle limit (they haven't).
Senior Member
Posts: 3601
Joined: 2007-05-31
Do you mean the mining one?
More seriously, the compute/AI line, despite being based on gaming GPUs, has already taken this path, so it would be logical to go even further in this direction.
Senior Member
Posts: 7379
Joined: 2020-08-03
Same as GV100 to GA100, that was 2.56x too,
and the same two years between them.
Another 60% performance increase for +100 W of power? Seems likely.
I'm not complaining; I'll settle for a "4070" that's a ~3080/Ti equivalent with a 250 W TDP, and I'll be happy if it's available at 650 EUR. I could really use that +50% performance for RT.
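The scaling ratio mentioned above can be sanity-checked with a quick calculation, using the published transistor counts for GV100 (~21.1 billion) and GA100 (~54.2 billion) together with the rumored 140 billion figure for GH100:

```python
# Generation-over-generation transistor scaling, counts in billions.
# GV100 and GA100 are published figures; the GH100 value is the rumor
# discussed in this thread, not a confirmed spec.
transistors = {"GV100": 21.1, "GA100": 54.2, "GH100 (rumor)": 140.0}

volta_to_ampere = transistors["GA100"] / transistors["GV100"]
ampere_to_hopper = transistors["GH100 (rumor)"] / transistors["GA100"]

print(f"GV100 -> GA100: {volta_to_ampere:.2f}x")   # ~2.57x
print(f"GA100 -> GH100: {ampere_to_hopper:.2f}x")  # ~2.58x
```

Both jumps land close to the ~2.56x figure quoted, so the rumored count is at least consistent with the previous generational step.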
Senior Member
Posts: 1970
Joined: 2013-06-04
3090 is 628 mm²
Who the f will afford a 1000mm² GPU? (Don't answer miners, I mean real people)
Just stop making larger and larger chips only 1% of the world can buy!
Senior Member
Posts: 14039
Joined: 2004-05-16
In datacenter it's all about performance density, so this doesn't surprise me. Really interested to see a deep dive into the architecture though - I think we typically get those around March historically so hopefully that's the same this year. Seems like Nvidia's datacenter parts are going to be much different than gaming ones from here on out.