SK Hynix Presents HBM3 DRAM

Fastest DRAM, highest performance... sounds expensive. Is this sort of DRAM used at all in gaming/workstation PC components yet? I thought it was supposed to improve GPUs by taking up far less space on the board. Is HBM used in any current GPUs, or planned for future ones?
geogan:

Fastest DRAM, highest performance... sounds expensive. Is this sort of DRAM used at all in gaming/workstation PC components yet? I thought it was supposed to improve GPUs by taking up far less space on the board. Is HBM used in any current GPUs, or planned for future ones?
Remember the Radeon VII? It had HBM for consumers. But HBM is mostly found in workstation and server applications, as in Nvidia Quadro/A-series GPUs and AMD's FirePro cards. Cheers! 🙂
VMax Tuner:

Remember the Radeon VII? It had HBM for consumers. But HBM is mostly found in workstation and server applications, as in Nvidia Quadro/A-series GPUs and AMD's FirePro cards. Cheers! 🙂
It's not just the eccentric Radeon VII. The Fury (X) and Vega (56/64) used HBM and HBM2 as well, and those were in quite widespread use among gamers, unlike the Radeon VII. In fact, unless I'm entirely wrong, the Fury X was the first card with HBM (or the first anything with HBM); the professional cards only came afterwards. AMD was among the developers of HBM, even though these days Nvidia makes far more money from it by using it in expensive professional cards, which outsell AMD's pro cards.
It's really too bad HBM didn't take off in the consumer market. I thought for sure AMD was onto something with HBM. Capacity was the only limit in the beginning, and once that was overcome, the cost was.
icedman:

It's really too bad HBM didn't take off in the consumer market. I thought for sure AMD was onto something with HBM. Capacity was the only limit in the beginning, and once that was overcome, the cost was.
The cost is what prevented it. It's still way too expensive to put in consumer devices. AMD did it on Vega, but the margins were not good at all. If the costs can come down, it would be great on APUs and GPUs.
I can imagine a future gaming console with a very powerful APU running 24 GB of HBM3 and all the latest PCIe hardware, including Gen5 M.2 SSDs and some very fast DDR5/6. It'll cost you $1200 - $1500 depending on which spec you buy. This next console will be on par with very high-end PCs, not quite enthusiast level but high enough to maybe do 8K 30 Hz or 4K 120+ Hz. The next-gen Xbox may even have an Arc GPU with HBM in it. I think it's still 3 or 4 years before we get this new console war, but it is coming, as both MS and Sony are probably already working on them in secret.
I think they need to figure out a way to make HBM work without the interposer, so it can use a BGA package and be put directly onto the PCB. That might be the only way to make it viable, both financially and complexity-wise. A lot of the yield issues could be solved this way, since the interposer and the process of connecting it all together is, I think, what makes it all so expensive: many good GPUs probably get thrown out because a bad HBM stack, a bad GPU die, or a bad interposer makes the entire unit bad.
Reddoguk:

I can imagine a future gaming console with a very powerful APU running 24 GB of HBM3 and all the latest PCIe hardware, including Gen5 M.2 SSDs and some very fast DDR5/6.
Not sure what the point of using both HBM3 and DDR5/6 would be, other than as a cost-saving measure. Current consoles already don't use DDR, except for a very small amount on the PS5; the Xbox Series doesn't use it at all.
icedman:

I think they need to figure out a way to make HBM work without the interposer, so it can use a BGA package and be put directly onto the PCB. That might be the only way to make it viable, both financially and complexity-wise. A lot of the yield issues could be solved this way, since the interposer and the process of connecting it all together is, I think, what makes it all so expensive: many good GPUs probably get thrown out because a bad HBM stack, a bad GPU die, or a bad interposer makes the entire unit bad.
The main point of HBM is its extremely wide memory bus (compared to DDR), so it would be kind of illogical to try to get rid of that and place the chips on the PCB traditionally. As you have seen, over the years Nvidia and AMD have in fact tried to move away from wide buses with VRAM (GDDR): you don't see 512-bit buses around anymore, and even the 3090 has only a 384-bit bus. HBM was attractive because of that huge bus width, but it didn't turn out to be economically very viable for gaming cards. It still is for professional cards, where price isn't as critical a factor compared to productivity. I reckon it would be more realistic to try to develop the interposer technology to be more reliable yet cheaper, if there indeed are issues there.
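To put that bus-width argument in rough numbers, here is a minimal back-of-the-envelope sketch in Python. The figures are assumptions from publicly quoted specs rather than anything in this thread: 1024 bits and 6.4 Gbps per pin for an HBM3 stack (SK Hynix's announced speed), 4096 bits at 2.0 Gbps for the Radeon VII's four HBM2 stacks, and a 384-bit GDDR6X bus at roughly 19.5 Gbps for the 3090.

```python
# Peak theoretical bandwidth: bus width (bits) * per-pin rate (Gbps) / 8 -> GB/s
def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    return bus_width_bits * pin_rate_gbps / 8

# Assumed figures (public specs, not taken from the thread)
configs = {
    "HBM3, single stack (1024-bit @ 6.4 Gbps)":           (1024, 6.4),   # ~819 GB/s
    "HBM2, four stacks / Radeon VII (4096-bit @ 2 Gbps)":  (4096, 2.0),   # ~1024 GB/s
    "GDDR6X / RTX 3090 (384-bit @ 19.5 Gbps)":             (384, 19.5),   # ~936 GB/s
}

for name, (width_bits, rate_gbps) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gb_s(width_bits, rate_gbps):.0f} GB/s")
```

On those assumed numbers, a single HBM3 stack already lands in the same ballpark as the 3090's entire 384-bit GDDR6X setup, which is exactly the wide-bus appeal being described, interposer cost aside.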