SK hynix Announces HBM3 DRAM Development

HBM holds such high promise, but sadly it's just too expensive.
So, does this imply they'll be on DIMMs? Also, with upcoming servers having DDR5 and 12 memory channels, how much more bandwidth is really necessary to justify the cost of HBM3? Keep in mind, too, that Samsung will be releasing 512 GB DDR5 modules, so HBM3 capping out at 24 GB seems rather... unimpressive for a server. HBM makes a whole lot more sense to me for APUs, where there aren't many memory channels and you don't need terabytes of memory, but there is a desperate need for more bandwidth.
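As a rough sanity check on the bandwidth side of this comparison, here is a back-of-envelope sketch. The figures are assumptions for illustration: DDR5-4800 at 38.4 GB/s per 64-bit channel, and HBM3 at the announced 6.4 Gb/s per pin over a 1024-bit stack interface.

```python
# Back-of-envelope bandwidth comparison (assumed figures:
# DDR5-4800 per channel vs. one HBM3 stack at 6.4 Gb/s per pin).
ddr5_channel = 4800e6 * 8 / 1e9      # 64-bit channel -> 38.4 GB/s
server_ddr5 = 12 * ddr5_channel      # 12 channels -> 460.8 GB/s

hbm3_stack = 6.4e9 * 1024 / 8 / 1e9  # 1024-bit interface -> 819.2 GB/s

print(f"12-ch DDR5-4800: {server_ddr5:.1f} GB/s")
print(f"1 HBM3 stack:    {hbm3_stack:.1f} GB/s")
```

On these assumptions a single HBM3 stack already out-runs a fully populated 12-channel DDR5 server, which is why the trade-off comes down to capacity and cost rather than raw bandwidth.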
schmidtbag:

So, does this imply they'll be on DIMMs? Also, with upcoming servers having DDR5 and 12 memory channels, how much more bandwidth is really necessary to justify the cost of HBM3? Keep in mind too Samsung will be releasing 512GB modules for DDR5, so HBM3 capping at 24GB seems rather... unimpressive for a server. HBM to me makes a whole lot more sense for APUs, where there aren't many memory channels and you don't need TB of memory, but there is a desperate need for more bandwidth.
HBM3 is probably aimed at the high-end GPU market and not intended as general-purpose system RAM. If I'm not mistaken, the 24 GB cap is per stack, and most solutions that use HBM3 will use multiple stacks. For instance, the A100 cards we use here have 5 stacks of 8 GB HBM2 (technically 6, but one is disabled for yield reasons) for a total of 40 GB. There is now a variant of the A100 that uses 16 GB stacks of HBM2E memory for a total of 80 GB. So future high-end GPUs might ship with 5 stacks of 24 GB HBM3 memory, or 120 GB of VRAM!
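The stacks-times-capacity arithmetic above can be tabulated in a few lines; the A100 rows reflect the shipping configurations mentioned, while the HBM3 row is purely speculative.

```python
# Total VRAM = active stacks x per-stack capacity.
# A100 rows are real configurations; the HBM3 row is hypothetical.
configs = {
    "A100 40GB (5 x 8GB HBM2)":       (5, 8),
    "A100 80GB (5 x 16GB HBM2E)":     (5, 16),
    "Speculative 5 x 24GB HBM3 card": (5, 24),
}
for name, (stacks, gb_per_stack) in configs.items():
    print(f"{name}: {stacks * gb_per_stack} GB")
```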
Crazy Joe:

HBM3 is probably aimed at the high end GPU market and not intended as a RAM type for general purpose RAM. If I'm not mistaken the 24GB cap is per stack and most solutions that would use HBM3 will use multiple stacks. For instance the A100 cards we use here have 5 stacks of 8GB HBM2 RAM (technically 6, but one is disabled for yield reasons) for a total of 40GB. There is now a variant of the A100 that uses 16 GB stacks of HBM2E memory for a total of 80 GB. So future high end GPU's might deploy with 5 stacks of 24 GB HBM3 memory or 120 GB of VRAM!
Yeah, I suppose you're right. Re-reading the article, I'm not sure where I got the impression that this was just going to be for desktop CPUs. But for GPUs, this could be great.
@Crazy Joe I agree, HBM on an APU would be insanely good. The problem is that it's too expensive to, say, stick a stack onto a 5600G within that CPU's price range, which is a shame 🙁