First AMD Radeon MCM GPU later this year as Instinct MI200 (multi-chip module)

It's coming. Now we just need a gaming version.
AMD keeps being first and making tech exciting again; can't wait for next year's RDNA3 to bring MCM to consumers! This might help alleviate prices across the range. We could use lower prices, as €1000+ mid-range GPUs is asking too much when a console costs €300-400.
I too am curious to see how this performs under gaming tasks. I think the first generation won't be much to game on, but they'll eventually figure out the bugs in the second iteration. I hope somebody can hack that driver and see what's happening under gaming benches.
This should greatly improve yields across the board, and hopefully prices too. I don't care how they do it, but prices have to drop or gaming on PC is done, for me at least.
Worst-case scenario: if MCM doesn't work well for desktop/gaming use, it is still valuable for servers. Servers don't care as much about sharing resources between processors. In a lot of servers, a bridge to each GPU isn't necessary, since they're not typically working on the exact same workload as they would in a game.
Silva:

This might help alleviate prices across the range. We could use lower prices, as €1000+ mid-range GPUs is asking too much when a console costs €300-400.
That's the part I'm most excited about. Not only is MCM cheaper because they don't have to make such expensive gigantic dies, but the total product per wafer increases, which will help keep up with demand and therefore lower prices. This is a big deal. Depending on how MCM is approached, part of me wonders if we might start seeing multi-GPU setups again, except you can mix and match whatever you want and not face a major performance penalty.
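The yield argument here can be sketched with the classic Poisson defect model: the fraction of defect-free dice falls off exponentially with die area, so several small chiplets beat one giant die. This is only a rough illustration; the defect density, wafer utilization, and die sizes below are assumed illustrative numbers, not AMD figures.

```python
import math

def poisson_yield(die_area_mm2, defect_density_per_mm2):
    # Poisson yield model: probability a die has zero defects
    return math.exp(-die_area_mm2 * defect_density_per_mm2)

def good_dies_per_wafer(die_area_mm2, defect_density_per_mm2,
                        wafer_diameter_mm=300, edge_utilization=0.9):
    # Crude dies-per-wafer estimate: usable wafer area divided by die area
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    candidates = int(wafer_area * edge_utilization // die_area_mm2)
    return int(candidates * poisson_yield(die_area_mm2, defect_density_per_mm2))

D = 0.001  # assumed defect density: 0.1 defects per cm^2
mono = good_dies_per_wafer(600, D)       # one big 600 mm^2 monolithic GPU die
chiplet = good_dies_per_wafer(150, D)    # small 150 mm^2 chiplets
print("monolithic GPUs per wafer:", mono)
print("4-chiplet MCM GPUs per wafer:", chiplet // 4)
```

With these assumed numbers, four small chiplets yield noticeably more complete GPUs per wafer than one monolithic die of the same total area, which is the whole economic case for MCM.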
schmidtbag:

Depending on how MCM is approached, part of me wonders if we might start seeing multi-GPU setups again, except you can mix and match whatever you want and not face a major performance penalty.
I think that is dead: the basis of MCM is that each chip talks directly to the others; no bridge chip is used, and the path is tiny, so the latency penalty is small. Adding another card would have the same problems as in the past, so I believe both SLI and Crossfire are dead.
Silva:

I think that is dead: the basis of MCM is that each chip talks directly to the others; no bridge chip is used, and the path is tiny, so the latency penalty is small. Adding another card would have the same problems as in the past, so I believe both SLI and Crossfire are dead.
Absolutely. At first it will come at familiar price points, as the easiest product to make and sell for gaming will be the basic "chiplet" on its own card. That will answer any question of availability, and you can have the familiar full or partially disabled shader counts, making for at least two versions at the lower end. For the enthusiast level they will use MCM, which will easily surpass Crossfire/SLI without any of the problems, including drivers. For the really high end you can have multiple chiplets and a large socket to radically dial up whatever performance level you want Nvidia to struggle to hit for a few years.
We need a new platform; motherboards with cards the way we have them now is too old school. Imagine a computer that is built like a car engine, in 3D, where you can integrate cooling systems right inside the hardware and add GPU or CPU modules like cylinder heads on a gas engine.
Death_Lord:

We need a new platform; motherboards with cards the way we have them now is too old school. Imagine a computer that is built like a car engine, in 3D, where you can integrate cooling systems right inside the hardware and add GPU or CPU modules like cylinder heads on a gas engine.
It can be done, except for the IP. Plus, the more complex a system, the greater the chance of breakdown. There is no way this can be done legally because of patent laws, and if someone licenses the patent, great: now you have a console, trapped like a fly in amber.
tunejunky:

absolutely. at first it will come at familiar price points as the easiest to make and sell for gaming will be the basic "chiplet" on it's own card. that will answer any question of availability and you can have the familiar disabled or not shader count making for at least two versions at the lower end. for the enthusiast level they will use MCM, it will easily surpass Xfire/SLI without any of the problems including drivers. for the really high end you can have multiple chiplets and a large socket to radically dial-up whatever performance level you want Nvidia to struggle to hit for a few years.
As long as the "bridge chip" is the one pretending to be the GPU and schedules the two or more MCMs to do the rendering tasks without the graphics engine being aware there are two GPUs working on it, and as long as it does a good job, it will be the holy grail of multi-chip GPUs! I am still waiting to see results, since a lot of things sound awesome on paper and are sometimes underwhelming in execution; that said, this looks very promising!
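The "bridge chip" idea described here, one front-end presenting a single logical GPU and splitting work across chiplets behind the application's back, can be sketched in miniature. Everything below is hypothetical toy code to illustrate the concept; real hardware scheduling is vastly more complex, and none of these names come from AMD.

```python
from dataclasses import dataclass, field

@dataclass
class Chiplet:
    cid: int
    completed: list = field(default_factory=list)

    def render(self, tile):
        # Pretend to rasterize one screen tile
        self.completed.append(tile)

class LogicalGPU:
    """What the application sees: a single device with one submit call."""
    def __init__(self, chiplets):
        self.chiplets = chiplets

    def submit_frame(self, tiles):
        # Round-robin tiles across chiplets; the application never learns
        # how many physical chips are actually doing the work.
        for i, tile in enumerate(tiles):
            self.chiplets[i % len(self.chiplets)].render(tile)

gpu = LogicalGPU([Chiplet(0), Chiplet(1)])
gpu.submit_frame(range(8))  # the app submits one frame of 8 tiles
print([c.completed for c in gpu.chiplets])  # work split evenly behind the scenes
```

The point of the sketch is the interface: the game only ever talks to `LogicalGPU`, so whether the work lands on one chiplet or four is invisible to it, which is exactly why this approach would sidestep the old SLI/Crossfire driver problems.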
Venix:

As long as the "bridge chip" is the one pretending to be the GPU and schedules the two or more MCMs to do the rendering tasks without the graphics engine being aware there are two GPUs working on it, and as long as it does a good job, it will be the holy grail of multi-chip GPUs! I am still waiting to see results, since a lot of things sound awesome on paper and are sometimes underwhelming in execution; that said, this looks very promising!
Five years ago I was lucky enough to be in the audience for an AMD industry press conference (Epyc/Instinct), and they were still working out the scheduling issues but were excited about gen 2 Infinity Fabric (IF). They said then that the compute performance was off the charts on their MCM prototypes for Instinct, and that the "last mile of the road" was the most difficult. They said gen 3 would see the MCM in the market and "shortly" after that it would reach the gaming market. Well, we all know that "shortly" at a roadmap conference can end up being a long time to us, but I was as stoked as a teenage boy after a sports victory.

For all the folks crazy for RT: they can actually have RT cores, as each chiplet is small enough that the real estate on the physical chip would easily allow for it, which would then increase overall performance without the need to spread the task around like they do now (RX 6xxx). But that all depends on how they work out the scheduling. I wouldn't be surprised if gen 1 used a specific chiplet to control it, or rather, I'd be surprised if they didn't.