NVIDIA Next-Gen GPU Hopper could be offered in chiplet design

data/avatar/default/avatar33.webp
So I guess AMD are onto something 😉
https://forums.guru3d.com/data/avatars/m/189/189980.jpg
Royalties from using a chiplet design could have AMD laughing all the way to the bank... just saying
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
AMD didn't patent chiplets. They patented how THEY did chiplets, but theirs is not the only possible way.
https://forums.guru3d.com/data/avatars/m/268/268248.jpg
anticupidon:

Royalties from using a chiplet design could have AMD laughing all the way to the bank... just saying
I don't think that's something that can be put in a patent. Having multiple chips, each for a specific job... it would be like, say, Ford patenting the simultaneous use of more than one wheel! So if you want to make something more than a unicycle... pay royalties!
https://forums.guru3d.com/data/avatars/m/189/189980.jpg
It was a crossbreed of sarcasm, joke and wishful thinking.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
Maybe they do have some IP in terms of techs that reduce latency etc., but I am not an electrical engineer. I wouldn't understand the details even if I had the papers in hand. That said, I'm sure Nvidia has enough R&D money to come up with some way to make it work. I don't see this arriving too soon, and not on consumer GPUs first, so it will probably be some years until we can put that stuff from the green brand into our rigs.
https://forums.guru3d.com/data/avatars/m/175/175902.jpg
What we call "chiplets" has been used by everyone for many decades; it's not from AMD at all. They have just patented how they link their own chips, and apart from them no one uses that approach (in my opinion it's not the best way; mobile chip makers do it far more effectively and economically). It's a nice move for making CPUs that better fit consumers' needs, and I hope to see results from both GPU brands soon.
data/avatar/default/avatar29.webp
Mesab67:

So I guess AMD are onto something 😉
NVIDIA has been producing white-papers on chiplets for GPUs for years, but as the original article mentions, for GPUs you have a lot of problems with scaling, otherwise they would've done it long before AMD even launched Zen. If they managed to solve that, that is a big step forwards.
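As a rough illustration of the scaling problem mentioned above, here is a back-of-the-envelope sketch in Python. The bandwidth figures and the uniform-traffic assumption are invented for illustration and are not taken from the article or from NVIDIA's papers.

```python
# Hypothetical numbers only: compare an assumed on-die fabric with an assumed
# die-to-die link to see why splitting one GPU across chiplets is hard to scale.
ON_DIE_FABRIC_GBPS = 8000   # assumed aggregate on-die interconnect bandwidth
CHIPLET_LINK_GBPS = 900     # assumed bandwidth of a single inter-chiplet link

def cross_chiplet_fraction(num_chiplets: int) -> float:
    """Fraction of uniformly distributed traffic that leaves its home chiplet."""
    return 1.0 - 1.0 / num_chiplets

ratio = ON_DIE_FABRIC_GBPS / CHIPLET_LINK_GBPS
for n in (2, 4, 8):
    print(f"{n} chiplets: ~{cross_chiplet_fraction(n):.0%} of traffic crosses "
          f"links roughly {ratio:.0f}x slower than the on-die fabric")
```

The toy model only shows that the more chiplets you add, the larger the share of traffic that has to cross the comparatively slow external links, which is the crossover problem the white-papers wrestle with.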
https://forums.guru3d.com/data/avatars/m/175/175902.jpg
fantaskarsef:

Maybe they do have some IP in terms of techs that reduce latency etc., but I am not an electrical engineer. I wouldn't understand the details even if I had the papers in hand. That said, I'm sure Nvidia has enough R&D money to come up with some way to make it work. I don't see this arriving too soon, and not on consumer GPUs first, so it will probably be some years until we can put that stuff from the green brand into our rigs.
The technique would improve the versatility of the fabrication process: with the same process you could make high-end, mainstream and low-end GPUs, since they would all share the same base, a bit like how the same PCB can carry a GT 1030, a Quadro or an RTX 2080 Ti, so production would be easier. This technique also allows "integrating" a CPU and other functions if they are compatible with the process (as is done in mobile and automotive chips, for example). Those are the pro arguments... (of course there are con arguments too 😉 )
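To make the "same base chiplet, many products" argument above concrete, here is a toy sketch; the SKU names, chiplet counts and per-chiplet SM count are made up for illustration only.

```python
from dataclasses import dataclass

@dataclass
class GpuSku:
    """A hypothetical product built from N copies of one validated chiplet."""
    name: str
    chiplets: int
    sms_per_chiplet: int = 32  # assumed streaming-multiprocessor count per chiplet

    @property
    def total_sms(self) -> int:
        return self.chiplets * self.sms_per_chiplet

# One chiplet design reused across an entire (made-up) lineup.
for sku in (GpuSku("entry", 1), GpuSku("mainstream", 2), GpuSku("high-end", 4)):
    print(f"{sku.name}: {sku.chiplets} chiplet(s) -> {sku.total_sms} SMs")
```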
https://forums.guru3d.com/data/avatars/m/175/175902.jpg
nevcairiel:

NVIDIA has been producing white-papers on chiplets for GPUs for years, but as the original article mentions, for GPUs you have a lot of problems with scaling, otherwise they would've done it long before AMD even launched Zen. If they managed to solve that, that is a big step forwards.
For CPUs, the first to do it was Intel, but it wasn't a commercial success; the technology back then wasn't as precise as it is today.
https://forums.guru3d.com/data/avatars/m/181/181063.jpg
Mesab67:

So I guess AMD are onto something 😉
Yes, on something that Intel and Nvidia have already tried before 😉. In an interview, Jonah Alben, SVP of Nvidia's GPU engineering, was asked where the crossover point is for GPU chiplets to actually become worthwhile, to which he replied, "We haven't hit it yet..."
The interview is here: https://semiengineering.com/the-future-of-gpus and it is very interesting - it gives some facts from the people involved, not rumors.
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
fantaskarsef:

Maybe they do have some IP in terms of techs that reduce latency etc., but I am not an electrical engineer. I wouldn't understand the details even if I had the papers in hand. That said, I'm sure Nvidia has enough R&D money to come up with some way to make it work. I don't see this arriving too soon, and not on consumer GPUs first, so it will probably be some years until we can put that stuff from the green brand into our rigs.
Nvidia was working on MCM communications long before AMD started.
https://forums.guru3d.com/data/avatars/m/181/181063.jpg
Also, on a funny note, Nvidia's CEO, Jen-Hsun Huang - the leather jacket king - claims that Nvidia has just "created a brand-new game platform: notebook PC gaming." So Nvidia just invented notebook PC gaming - these are historic times!
data/avatar/default/avatar30.webp
If Intel could follow suit and stop making tiny chips with 200 W to dissipate, that would be great too. My 9900K goes from 90°C to 37°C in like 3 seconds with an AIO when I stop benchmarking, so clearly it's not that the loop doesn't work, just that the surface is too small to dissipate that heat properly.
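The complaint above is really about power density. A quick worked example in Python; the die areas below are rough guesses for illustration, not figures for any specific chip.

```python
# Same 200 W, different silicon area: the heat flux (W/mm^2) the cooler has to
# pull out drops when the power is spread over several chiplets.
cases = {
    "small monolithic die": {"power_w": 200, "area_mm2": 180},
    "four spread-out chiplets": {"power_w": 200, "area_mm2": 4 * 80},
}
for name, c in cases.items():
    print(f"{name}: {c['power_w'] / c['area_mm2']:.2f} W/mm^2 at the die surface")
```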
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
Astyanax:

Nvidia was working on MCM communications long before AMD started.
Yeah, I was reminded of that by fellow gurus and their posts. Also, they put another $100 million into R&D, as I've heard lately. Like I said, they have the money to do this, I don't doubt it. I guess it's more a question of what advantages it brings than whether they could do it.
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
Wonder what the power and cooling requirements will be for MCM designs. It may not be an issue for HPC and data centers, where it can save space or improve efficiency, but for gaming cards, that may be something else.
data/avatar/default/avatar34.webp
alanm:

Wonder what the power and cooling requirements will be for MCM designs. It may not be an issue for HPC and data centers, where it can save space or improve efficiency, but for gaming cards, that may be something else.
Like AMD's chiplets, it should actually reduce the cooling requirements, as the heat is spread out over a larger area, making it easier to cool.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
Dragam1337:

Like AMD's chiplets, it should actually reduce the cooling requirements, as the heat is spread out over a larger area, making it easier to cool.
Or... you want 2x xx80 Ti on one card to skip two-card SLI... like me 😀 But you are right, of course, if they go that route while keeping the same TDP.
data/avatar/default/avatar36.webp
fantaskarsef:

Or... you want 2x xx80 Ti on one card to skip two-card SLI... like me 😀 But you are right, of course, if they go that route while keeping the same TDP.
That is certainly my hope, that we will see 4 high-end GPU chiplets interconnected in one package - it would allow us to finally get worthwhile GPU performance gains again 🙂
https://forums.guru3d.com/data/avatars/m/229/229509.jpg
Mesab67:

So I guess AMD are onto something 😉
"It's just a load of GPUs bolted together, it'll never work..." xD