Nvidia might be moving to Multi-Chip-Module GPU design
Spets
Thanks for the link, it's a great read. Looking forward to seeing what they achieve with their first commercial Multi-Chip-Module GPU.
volkov956
Both companies need to mask the number of GPUs at the OS/driver level so the system only sees one, with the GPU's onboard BIOS deciding how to dish out the utilization; otherwise we will be stuck waiting and hoping the developers figure it out.
The same goes for CPUs. I really want to find the documents on this; it was discussed way back in the mid-2000s how it's possible, but no one wants to do it.
And from what I can find, it has been done back in the earlier days, e.g. the Voodoo cards (and some other company, I forget which), where the OS and drivers only saw it as one GPU.
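For contrast, here is a minimal sketch of the status quo being complained about, using the standard CUDA runtime API (the kernel and sizes are hypothetical stand-ins): today every physical GPU is visible to software, and the application, not the driver, has to split the work.

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical stand-in for any real per-element workload.
__global__ void work(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

int main(void)
{
    int count = 0;
    cudaGetDeviceCount(&count);      // the OS/driver exposes every GPU separately
    if (count < 1) return 1;

    const int n = 1 << 20;
    const int chunk = n / count;     // the app, not the driver, partitions the work

    for (int d = 0; d < count; ++d) {
        cudaSetDevice(d);            // explicit per-device bookkeeping
        float *buf = NULL;
        cudaMalloc(&buf, chunk * sizeof(float));
        work<<<(chunk + 255) / 256, 256>>>(buf, chunk);
        cudaDeviceSynchronize();     // wait for this device's share
        cudaFree(buf);
    }
    printf("Split %d elements across %d visible GPUs\n", n, count);
    return 0;
}
```

An MCM GPU that presents itself as one logical device would make that loop disappear: cudaGetDeviceCount would simply report 1 and the hardware would distribute the work itself, which is exactly what this comment is asking for.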
LesserHellspawn
Hello, Voodoo 5?
drac
Good stuff, no more masses of video cards crammed into cases overheating all over the place.
This could be so good: SLI on a single card done properly. I guess it hasn't happened due to current tech limitations.
Evildead666
yay for a Voodoo 5 🙂
This is understandable, especially with the fab processes being late, and spaced farther and farther apart.
If you can't rely on reducing the size of the chips/transistors, then you have to go MCM, or multi-chip-on-board à la Voodoo, for the high-end cards.
Even if it's only for the datacenter GPU cards at first (they can pay for the R&D with the prices those cards sell for), the consumer will get a trickle-down effect later on.
I suspect it hasn't been done lately because they didn't plan to, officially.
It also means dedicating on-die space to something that won't be used on single-chip boards, which would be wasted.
I could see a medium-sized GPU with a 128-bit GDDR/X bus, or a single HBM stack (either 512-bit or 1024-bit wide), and adding up to four of them together on a single card.
The HBCC might be a good fit for this, since there would be one per chip, and one of them could become 'master' to the others and give them orders.
NVLink is probably set up for this as well.
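Back-of-the-envelope numbers for that layout, assuming typical HBM2 figures (my assumption, not from the article), with a 1024-bit stack running 2 Gb/s per pin:

$$\mathrm{BW}_{\text{stack}} = \frac{1024 \times 2\ \text{Gb/s}}{8} = 256\ \text{GB/s}, \qquad 4 \times 256\ \text{GB/s} = 1\ \text{TB/s aggregate}$$

That would put four modest modules in roughly the same bandwidth class as today's biggest monolithic HBM2 GPUs.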
Exascale
https://www.nextplatform.com/2015/08/03/future-systems-intel-ponders-breaking-up-the-cpu/
This is the next step in 2.5D architectures. Nvidia's approach discusses how to solve data-locality issues and reduce the pJ/bit cost of moving data with their L1.5 cache. I need to read up on Infinity Fabric and HBCC to see if they have any similar provisions. If they don't now, they certainly will need them for large-scale systems with hundreds of thousands or millions of cores.
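A quick worked example of why the pJ/bit cost dominates at this scale (the 10 pJ/bit figure is an illustrative assumption, not a number from the paper):

$$P = 1\ \text{TB/s} \times 10\ \tfrac{\text{pJ}}{\text{bit}} = 8\times 10^{12}\ \tfrac{\text{bit}}{\text{s}} \times 10^{-11}\ \tfrac{\text{J}}{\text{bit}} = 80\ \text{W}$$

Moving 1 TB/s between dies at that energy cost burns 80 W on data movement alone, which is why a cache that keeps traffic on-die can pay for its SRAM.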
This isn't a replacement for SLI, and it isn't even something for consumer GPUs for a long time (not until performance requirements dictate that a monolithic die is too expensive). That should be a ways off still, and considering that Nvidia's monolithic V100 sells for $13,000, don't expect these to be cheap. It may reduce the cost of the individual dies and make binning easier, but the addition of all the interconnects and SRAM for the L1.5 cache will still make these expensive.
It's a small NUMA setup for a GPU that uses the L1.5 cache to get around some of the issues involved with NUMA architectures.
This GPM (graphics processing module) approach is destined to be used in Nvidia's exascale architecture, and the Volta V100 successor chip will likely be such an MCM.
Intel discussed a similar idea a year or two ago regarding the Knights Hill architecture, which follows the 72-core, HPC-focused Knights Landing x86 CPU.
Craigpd
OK, so nVidia published a paper outlining the theory and application behind the use of MCM in a GPU.
However, having an interconnect that can supply enough bandwidth without large latency hits is a different matter. AMD got very lucky with IF, but will Intel and nVidia be able to replicate those results without hitting AMD's patents related to IF? If they can't, their only option could be to license the technology from AMD, assuming AMD are game to give up the ace up their sleeve.
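Bandwidth is arguably the easier half; latency is what has to be hidden. With illustrative numbers (a 100 GB/s inter-die link and a 500 ns round trip, both assumptions of mine), keeping the link busy requires the bandwidth-delay product to be in flight at all times:

$$100\ \text{GB/s} \times 500\ \text{ns} = 50\ \text{KB in flight}$$

If the dies can't keep roughly 50 KB of requests outstanding over that round trip, the link idles regardless of its peak bandwidth, and that latency-hiding is exactly the job the paper hands to the L1.5 cache.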
DeskStar
Pretty interesting... Couldn't resist the 3dfx Voodoo 5 5500 AGP picture, as I had that card years ago! I still remember my ATI X1800 PE dying on me and having to slap in that good ol' Voodoo 5 5500 AGP just to have a display adapter.
Damn thing ran Half-Life 2 at 800p with most settings at max...
As long as the tech doesn't take too long to come around, I'm on board...
JamesSneed
I assume Nvidia thinks AMD's Navi will be a big hit, since they are also moving in the same direction. To me, Vega is pretty boring, but Navi using IF along with a die shrink looks pretty interesting.
PrMinisterGR
It's a physics question, not just a business one.