Samsung Launches 3rd-generation (16GB) HBM2E

Any chance big navi could use this?
https://forums.guru3d.com/data/avatars/m/209/209146.jpg
Later on, perhaps, unless an earlier deal was already in place. Launching these now would be a bit too late for the engineering stage of Navi 20 short of a delay, though who knows what deals might exist between AMD and Samsung as one of their primary GPU RAM suppliers. So there's a chance, but I'm thinking it's a bit too late given how much time it takes from planning to fabrication and assembly, and how changes to any step in that process add further delays, re-testing and redone work. 🙂 Late 2020 or early 2021 perhaps, but if mid-2020 launches are what's planned for the next Navi GPUs (nothing is confirmed as far as I know), I guess we'll see. There are other HBM2 variants that would still be an improvement over what Vega had, even if it's not these gen-3 chips just yet. 🙂

EDIT: Not that I would know, but in my opinion at least, the complications of HBM integration and parts like the interposer and the bridge to the GPU core wouldn't be a small matter either. I think there were even plans for a low-cost HBM version that skipped some of this, with trade-offs as a result but lower cost too. I assume that's also a major consideration for HBM versus GDDR: how much of the total GPU cost these memory modules actually make up, and thus the overall pricing of the card to cover it. (So mid-range cards on HBM might still be a bit too early due to the costs and the resulting price tag.)

EDIT: That, and gaming-wise, since Navi is more gaming-oriented, HBM isn't an immediate benefit over GDDR6, although the GPU design itself could be more or less memory-bottlenecked. It would be interesting to see numbers for an HBM Navi GPU, though it wouldn't be a direct comparison if it's high-end/enthusiast cards only, like the rumored 5900 series, if that's reliable. (But who knows, maybe Big Navi could be a bit compute-oriented too, though it seems AMD still has plans for GCN there.)

Price-wise, perhaps, rather than HBM vs. GDDR, since HBM has some nice improvements but GDDR is probably not going anywhere anytime soon. (Just speculation, given how costly HBM is and the other complications it involves, even if there are benefits to using it instead of GDDR.)
Undying:

Any chance big navi could use this?
You'd think AMD would stick to HBM, seeing how AMD was one of the developers of HBM in the first place. Whether it's only for professional cards or also gaming is a different thing. But for sure these will appear in Nvidia's best-of-the-best professional products. I reckon Nvidia would use HBM in private consumer products only if they simply couldn't make GDDR work with a Titan/x080 Ti (taking the power consumption into account as well). Ever since Maxwell, though, Nvidia has been better at handling GDDR and a narrower memory interface than AMD.
OnnA:
Do we really need 32GB HBM2? IMHO 16GB will be plenty, even for 4K gaming. I would prefer a 4x stack over a 2x stack any time... (4x4GB or 2x8GB or 4x8GB).
https://forums.guru3d.com/data/avatars/m/273/273678.jpg
This will be used on Ampere
OnnA:

Do we really need 32GB HBM2?
Yes
sbacchetta:
It will probably be used on Nvidia's next Tesla product, the one rumored to be 70% faster than the Tesla V100, probably with 4 stacks. I don't think we will see it on a consumer product (from either Nvidia or AMD).
sbacchetta:

It will probably be used on Nvidia's next Tesla product, the one rumored to be 70% faster than the Tesla V100, probably with 4 stacks. I don't think we will see it on a consumer product (from either Nvidia or AMD).
Yeah, I don't believe Radeon VII was a mega success for AMD, and the profit margin was likely too slim, so it would be kind of strange to get another professional card turned into a consumer card like that.
Kaarme:

Yeah, I don't believe Radeon VII was a mega success for AMD, and the profit margin was likely too slim, so it would be kind of strange to get another professional card turned into a consumer card like that.
Radeon VII only exists due to the excessive cost of Nvidia's competition. Had Nvidia not done its price hike, the Radeon VII wouldn't have been made.
OnnA:

Do we really need 32GB HBM2? IMHO 16GB will be plenty, even for 4K gaming. I would prefer a 4x stack over a 2x stack any time... (4x4GB or 2x8GB or 4x8GB).
In gaming, no. In any deep learning / AI / neural network environment, they will literally chew up as much as you can give them, every last bit.
Undying:

Any chance big navi could use this?
This is server stuff; not even Nvidia's flagship will use HBM any time soon.
Silva:

This is server stuff; not even Nvidia's flagship will use HBM any time soon.
Well, high resolutions like sweet 8K do need this kind of insane VRAM buffer and bandwidth for all that data. AMD has been using HBM since Fury, so it's not impossible.
Undying:

Any chance big navi could use this?
HBM is possible, since they used it in the Vega cards, although it is expensive.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
I figure AMD will use HBM for their x900 series GPUs, maybe x800. But it isn't cost effective for parts lower than that.
OnnA:

Do we really need 32GB HBM2? IMHO 16GB will be plenty, even for 4K gaming. I would prefer a 4x stack over a 2x stack any time... (4x4GB or 2x8GB or 4x8GB).
I'd say HBM is most suited for 12GB+ configurations. When it comes to GPUs, bandwidth needs go up with memory capacity needs, and memory capacity needs go up as you add more cores. For professionals working in 4K and higher, I could see 32GB being a necessity. For certain server compute workloads, I could also see 32GB being useful. For gamers, no - 32GB isn't necessary any time soon. We can barely run games at max detail in 4K above 60 FPS (let alone at a reasonable price), and a lot of games can make do with 8GB.
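The capacity-tracks-bandwidth point above can be sketched with some quick peak-bandwidth arithmetic. The figures below are Samsung's announced HBM2E numbers (1024-bit stack interface, 3.2 Gbit/s per pin) and a typical 14 Gbit/s GDDR6 part; treat this as a rough illustration, not a benchmark:

```python
# Rough peak-bandwidth math for the memory types discussed in the thread.
# Figures are approximate public spec numbers, not measured results.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbit/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# One HBM2E stack: 1024-bit interface at 3.2 Gbit/s per pin (Samsung's figure).
hbm2e_stack = bandwidth_gbs(1024, 3.2)   # ~410 GB/s per stack
# One GDDR6 chip: 32-bit interface at 14 Gbit/s per pin (common part of the era).
gddr6_chip = bandwidth_gbs(32, 14.0)     # 56 GB/s per chip

print(f"HBM2E stack: {hbm2e_stack:.1f} GB/s")
print(f"GDDR6 chip:  {gddr6_chip:.1f} GB/s")
# Four 16GB HBM2E stacks -> 64GB at roughly 1.6 TB/s; eight GDDR6 chips on a
# 256-bit bus -> 448 GB/s. Growing capacity on HBM brings bandwidth with it.
print(f"4x HBM2E: {4 * hbm2e_stack:.0f} GB/s vs 8x GDDR6: {8 * gddr6_chip:.0f} GB/s")
```

So a 32GB HBM2E card would come with enormous bandwidth almost as a side effect, which is exactly why it fits compute cards better than gaming ones.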
tsunami231:
HBM I hear a lot about, but it's not really used by much of anything. Is this gonna become the new Rambus RAM?
tsunami231:

HBM I hear a lot about, but it's not really used by much of anything. Is this gonna become the new Rambus RAM?
Haha. Nvidia's P100 used it, the V100 uses it, and the A/H100 will use it. Those are the goldmines for Nvidia. AMD also used it, and is still using it, with varying levels of success. Nobody liked Rambus; I don't know why you'd compare HBM to that failure. HBM never had any real problems and works up to its specs and expectations. It's just not as cost-efficient for private consumer use as GDDR, plus the supply was more limited, which is why Nvidia never bothered with it outside of the professional cards. However, it's serving Nvidia excellently in expensive data center and similar solutions.
tsunami231:
Till such time as all that happens, it's just much more expensive RAM that is currently being passed over in favor of GDDR6 for the majority of GPUs.
Undying:

Any chance big navi could use this?
I don't think so... because then it would be "Big Expensive Navi". HBM2 is ideal in situations such as AI and advanced computing because of the high bandwidth. GDDR6 offers a lot of the same performance, albeit with higher power requirements, but at a lower price, thus making it more suitable for everyday consumer graphics. There is a place on the market for both of them, but I think that for us "normal consumers" HBM2 is still too expensive and maybe unnecessary compared with the performance needed in everyday tasks, which can still be decently served by cheaper GDDR6.
Moderator
Undying:

Any chance big navi could use this?
Doubt it. AMD only went with HBM because the combined power draw of their GPU and GDDR5X or GDDR6 would have been way too high. HBM also isn't cheap.
vbetts:

Doubt it. AMD only went with HBM because the combined power draw of their GPU and GDDR5X or GDDR6 would have been way too high. HBM also isn't cheap.
AMD was one of the original developers of HBM. I bet they felt like they had to use it after that, and no doubt they also believed it would make a real difference. Unfortunately, sticking to the old GCN architecture made the change in memory insufficient while Nvidia moved forward with Maxwell and Pascal.