Nvidia might be moving to Multi-Chip-Module GPU design
Han2K
They were right!
http://www.3dfx.ch/gallery/d/16281-1/3dfx+Voodoo+5+6000+AGP+128MB+Rev_A1+1500+Octa+fan+card+a.JPG
Denial
http://kotaku.com/crunch-time-why-game-developers-work-such-insane-hours-1704744577
That's not to mention that there are only a handful of developers who know how to develop low-level APIs, low-level engine systems, network code that can scale across thousands of servers and clients, etc. They don't teach any of that in game design school - they teach you Lua scripting and some C++/Java. The really good work comes from people specialized in certain fields, for example network engineering, who happen to take an interest in gaming.
So when Unreal developers, or Nvidia with GameWorks, or AMD with GPUOpen come in and build out a bunch of libraries for developers, it's extremely helpful. It shouldn't reflect poorly on the developers who use them.
Honestly the series of videos that Star Citizen has been putting out lately provides excellent insight into what it takes to build and scale a game out over multiple studios. They show you how they have to build a production pipeline with a few extremely talented people before they think about hiring a mass of artists and designers for content check-in. Just scheduling and bringing new hires up to speed on the engine, scripting, design of races/ships/etc takes months.
I would argue that the level of production/talent/work in modern AAA games probably exceeds what most big budget movie studios are doing.
Game developers are not lazy. During crunch time on a game, they often work 6-7 days a week for 12+ hours a day for several months.
Prince Valiant
Edit: On second thought, best to not get too far off track.
Denial
https://www.cadence.com/content/cadence-www/global/en_US/home/tools/system-design-and-verification.html
As for your second paragraph, it's also answered in the PDF that apparently no one is reading but feels the need to comment on:
So it's essentially a best-case multi-GPU setup, an optimized version of that setup which they also simulated, and a simulated MCM design. They don't test games, so SLI scaling issues caused by memory limits or previous-frame data dependencies don't apply here.
I definitely agree that games either need longer development cycles or need to do what the Hellblade dev is doing and cut the content down so they don't have to sacrifice quality for it.
Nvidia has a Cadence Palladium system that allows them to design, prototype, and validate virtual GPUs without having to fabricate them. They can simulate performance with a high degree of accuracy across a number of different benchmarks. They've designed and prototyped every GPU since Kepler on Cadence EDA tools/hardware.
Exascale
It's kind of weird that they talk about the next-generation board-level links being 256GB/s when NVLink 2.0 is basically out and has 300GB/s link speed. It's also crazy how far ahead of everyone else Fujitsu is. In 2015 they started shipping SPARC64 XIfx systems with 250GB/s link speeds using optical links. I can't wait to see their next generation.
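For reference, the 300GB/s NVLink 2.0 figure falls out of the per-link numbers Nvidia published for Tesla V100 (6 links per GPU, 25 GB/s per direction per link) - a quick sanity check:

```python
# NVLink 2.0 as shipped on Tesla V100: 6 links per GPU,
# each carrying 25 GB/s in each direction.
links = 6
gbs_per_link_per_direction = 25

# Aggregate bidirectional bandwidth per GPU.
aggregate_gbs = links * gbs_per_link_per_direction * 2
print(aggregate_gbs)  # 300
```

So the 256GB/s board-level figure in the paper is indeed below what shipping NVLink 2.0 hardware already advertises.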
chronek
A Multi-Chip-Module GPU will be cheaper to produce; I hope it will be easier to cool, too.
robintson
They will just adopt the "CPU model" on GPUs: one GPU with many cores and threads, functioning similarly to an Intel i9 or AMD Ryzen CPU, for example. Sooner or later GPU manufacturers will be forced to go with "GPU multi-core" and "GPU multi-threading" as well - no surprise here.
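As a loose analogy for that "CPU model", here is a minimal sketch (purely illustrative - nothing here reflects any real driver or hardware API) of how a scheduler might partition a data-parallel workload across GPU modules the way an OS spreads threads across CPU cores:

```python
# Hypothetical sketch: split work across GPU modules (chiplets),
# analogous to spreading threads across CPU cores.
def partition_work(work_items, num_modules):
    """Split work_items into num_modules near-equal contiguous chunks."""
    base, extra = divmod(len(work_items), num_modules)
    chunks, start = [], 0
    for m in range(num_modules):
        size = base + (1 if m < extra else 0)  # first `extra` modules get one more
        chunks.append(work_items[start:start + size])
        start += size
    return chunks

# Example: 10 tiles across 4 modules -> chunk sizes 3, 3, 2, 2.
chunks = partition_work(list(range(10)), 4)
print([len(c) for c in chunks])  # [3, 3, 2, 2]
```

The hard part the paper is actually about, of course, is keeping such modules fed over a fast enough inter-module link that the split stays invisible to software.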
BlazeInterior
Thanks for sharing, it's a great answer. Looking forward to seeing what they achieve with their first commercial Multi-Chip-Module GPU.