Samsung Develops Industry's First GDDR7 DRAM for Enhanced Graphics Performance

https://forums.guru3d.com/data/avatars/m/266/266074.jpg
Really curious how these will turn out; I'm guessing they're for the release of the upcoming gen of GPUs in 2024-2025. I may be going tunnel vision on the article, but the thing that stood out for me was them using epoxy molding compound for packaging: "These improvements result in a remarkable 70% reduction in thermal resistance compared to GDDR6." Not that memory temps ever stood out as an issue in the first place, but still, I am impressed by this aspect alone.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
Nvidia and AMD would really appreciate more than 16Gb per chip, so that they could offer a larger memory capacity with fewer chips and an even narrower bus, while still keeping it single-sided. 16Gb chips take 8 whole units for 16GB of VRAM, making a whopping (haha, right) 256-bit bus necessary. Of course with 24Gb chips it would be kind of difficult to have 16GB, but you could have 15GB or 18GB, allowing a more pleasant (for the manufacturers, not for the customers) 160-bit bus for 15GB, or for 18GB the more familiar 192-bit bus, which would still make manufacturers salivate compared to 256-bit.
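The chip-count arithmetic above follows from each GDDR chip contributing a 32-bit channel, so the total bus width is just chip count times 32. A minimal sketch of that relationship (the densities and capacities are the examples from the comment, not product specs):

```python
CHANNEL_BITS = 32  # bus width contributed by one GDDR chip

def config(chip_gbit: int, total_gbyte: int):
    """Return (chip_count, bus_width_bits) for a single-sided layout."""
    chip_gbyte = chip_gbit / 8            # 16 Gbit -> 2 GByte per chip
    chips = int(total_gbyte / chip_gbyte)
    return chips, chips * CHANNEL_BITS

print(config(16, 16))  # 16 Gbit chips, 16 GB -> (8, 256)
print(config(24, 15))  # 24 Gbit chips, 15 GB -> (5, 160)
print(config(24, 18))  # 24 Gbit chips, 18 GB -> (6, 192)
```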
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
32 Gbps is crazy fast. Next gen gonna be interesting.
https://forums.guru3d.com/data/avatars/m/260/260828.jpg
Kaarme:

Nvidia and AMD would really appreciate more than 16Gb per chip, so that they could offer a larger memory capacity with fewer chips and an even narrower bus, while still keeping it single-sided. 16Gb chips take 8 whole units for 16GB of VRAM, making a whopping (haha, right) 256-bit bus necessary. Of course with 24Gb chips it would be kind of difficult to have 16GB, but you could have 15GB or 18GB, allowing a more pleasant (for the manufacturers, not for the customers) 160-bit bus for 15GB, or for 18GB the more familiar 192-bit bus, which would still make manufacturers salivate compared to 256-bit.
While I hate that GPU manufacturers keep narrowing the bus, this is great for mobile, where there isn't much space
https://forums.guru3d.com/data/avatars/m/79/79740.jpg
Micron and Hynix also have GDDR7 incoming in 2024. Hope competition keeps prices in check.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Picolete:

While I hate that GPU manufacturers keep narrowing the bus, this is great for mobile, where there isn't much space
All that should matter is the efficiency, price, and total bandwidth. So long as none of those are made worse, it doesn't matter what the bus width is.
https://forums.guru3d.com/data/avatars/m/248/248627.jpg
This is what the garbage 4060 Ti desperately needed. It's almost as if it was originally designed for GDDR7, but we know it's just Nvidia being greedy; these were nowhere near ready. This is the most exciting thing for the GPU market in a while, if the price-to-performance gains actually get passed along rather than the greed kicking in.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
schmidtbag:

All that should matter is the efficiency, price, and total bandwidth. So long as none of those are made worse, it doesn't matter what the bus width is.
Too bad the manufacturers don't care at all about that principle. They will gladly sacrifice bandwidth if given a chance. AMD innovating a way to partially compensate with an extra large cache was an unholy grail that allowed cutting bandwidth like there's no tomorrow. Between the 3080 12GB and my 4070 12GB, the bandwidth I have is over 40% less. A bit more merciful compared to the original 3080, but that one only has 10GB. Then again, the 4070 barely manages to reach 3080 level of performance, so maybe it's still the more relevant comparison. Price comparisons are kind of meaningless since the 3070's price got bloated by the crypto scheme (but I never heard anyone praising RTX 4000 prices). Efficiency at least is there, thanks to TSMC's more advanced process tech.
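The "over 40% less" figure checks out as back-of-envelope math. Assuming the commonly cited specs (not stated in the article): the RTX 3080 12GB runs a 384-bit bus at 19 Gbps per pin, the RTX 4070 a 192-bit bus at 21 Gbps:

```python
def bandwidth_gbs(bus_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s: pin count * per-pin rate / 8 bits per byte."""
    return bus_bits * gbps_per_pin / 8

gb3080 = bandwidth_gbs(384, 19)  # RTX 3080 12GB -> 912.0 GB/s
gb4070 = bandwidth_gbs(192, 21)  # RTX 4070      -> 504.0 GB/s
drop = 1 - gb4070 / gb3080
print(f"{drop:.1%} less bandwidth")  # ~44.7% less
```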
https://forums.guru3d.com/data/avatars/m/274/274577.jpg
here come 32-bit memory buses lol
https://forums.guru3d.com/data/avatars/m/250/250418.jpg
128-bit bus GPUs suck. Even my old RX580 has a 256-bit bus! I had a 128-bit GPU, the RX560, and oh boy did it suck in Unreal Engine titles: even with everything set to low it would stutter. The GPU performance wasn't bad, but the memory just couldn't handle the games.
https://forums.guru3d.com/data/avatars/m/115/115616.jpg
I'd be more concerned with PAM signaling than with the potential bus width. You'll need a very stable electrical design: good GPU, good VRMs, good PCB. This sounds like a price increase... which can be offset on the cheaper cards (how can the 4070 and its successors be considered cheap?) by the fact that the modules are 2GB each, and with this much bandwidth you can cut the bus width...
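For context on the PAM concern: GDDR7 moves from GDDR6's two-level NRZ signaling to three-level PAM3, which packs 3 bits into 2 symbol intervals (1.5 bits/symbol), so the line can run a lower symbol rate for the same data rate. A rough sketch of what that ratio implies for a 32 Gbps per-pin rate (the 1.5 bits/symbol encoding ratio is the PAM3 figure, assumed here):

```python
PAM3_BITS_PER_SYMBOL = 3 / 2  # 3 bits carried across 2 PAM3 symbols
NRZ_BITS_PER_SYMBOL = 1.0     # 1 bit per NRZ symbol

data_rate_gbps = 32
print(data_rate_gbps / PAM3_BITS_PER_SYMBOL)  # ~21.3 Gbaud symbol rate with PAM3
print(data_rate_gbps / NRZ_BITS_PER_SYMBOL)   # 32 Gbaud if NRZ were used instead
```

The trade-off is exactly the electrical one raised above: fewer symbol transitions per second, but three voltage levels to discriminate instead of two, which demands cleaner signal integrity.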