SK Hynix showcases first GDDR6 - Double The Bandwidth

Nice! Is it only me, or is HBM(2) starting to look more and more like the infamous RDRAM of modern days?
GDDR6 is Hynix-only?! 😕 I don't like this brand; it seems "bugged" to me: my EVGA GTX 580 3GB froze in some games, while my EVGA GTX 780 Ti Classified failed to render every few minutes or hours.
"uneiled" face-palm. Anyway, it's looking great.
> Nice! Is it only me, or is HBM(2) starting to look more and more like the infamous RDRAM of modern days?
Nope. HBM is much more useful. I'm willing to bet it's THE future.
> Nope. HBM is much more useful. I'm willing to bet it's THE future.
I don't see it this way. It's already been two years since HBM became a reality, and it is still VERY limited, with multiple issues: price, availability, performance, implementation difficulties. Yes, it has some advantages over GDDR5/5X/6, but those are all steadily being negated. If you remember, RDRAM had some advantages over DDR back in the day, along with virtually the same issues... we all know how that turned out in the end. In my view, if HBM survives, it will remain an ultra-niche type of memory for some non-end-user computing devices.
> Nice! Is it only me, or is HBM(2) starting to look more and more like the infamous RDRAM of modern days?
> I don't see it this way. It's already been two years since HBM became a reality, and it is still VERY limited. [...]
It's only you. Even NVIDIA (despite not having any patents on it, and having to pay AMD for the interposer patents) is using HBM in products that require high performance in the memory subsystem. The actual latencies and the effective bandwidth you get out of HBM are much better than anything traditional memory can offer. I find it interesting that this news comes from Hynix, who seem to have a big hand in HBM too.
HBM looks really impressive to me. This video shows the true benefits of it: youtube.com/watch?v=85ProuqAof0
> It's only you. Even NVIDIA (despite not having any patents on it, and having to pay AMD for the interposer patents) is using HBM in products that require high performance in the memory subsystem. [...]
Yep, latency is a huge factor, especially with deep learning/compute workloads. One thing people forget is size, too: http://i.imgur.com/ffXpyBf.jpg Being able to have a small, contained GPU core with memory onboard allows it to be stacked/configured far better than a giant PCB with GDDR. HBM/stacked RAM is definitely the future. It may take a while for all consumer boards to have it, but it will come eventually.
> Nope. HBM is much more useful. I'm willing to bet it's THE future.
AMD hinted that Ryzen has been designed to allow HBM to be added to the die in future Ryzen generations. HBM3 is already on the roadmap, and I do not foresee HBM dying a slow death. Cost is the foot-dragger right now; as it becomes mass-produced and consumed, the price should fall, much like the SSD pricing scenario.
HBM is the future, but until it is more mature, GDDR will be a mainstay at the consumer level for a few more years, especially with the bandwidth improvements we see here. Now, I understand latency is higher on GDDR6 than HBM, but considering we tend to focus more on gaming than deep learning, it is almost a non-factor. HBM will be on professional and enthusiast products exclusively for, I'd say, about 3-4 more years before we see a mid-range product with it ($300 and lower).
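For a sense of scale on the headline's "double the bandwidth" claim, here is a quick back-of-the-envelope sketch. The 16 Gbps per-pin figure is SK Hynix's announced GDDR6 target and 8 Gbps is mature GDDR5; the 256-bit bus is just an assumed example configuration, not any particular card:

```python
# Peak bandwidth (GB/s) = bus width (bits) x per-pin rate (Gbps) / 8 bits per byte.
# 16 Gbps is SK Hynix's announced GDDR6 target; the 256-bit bus is an assumed example.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

print(peak_bandwidth_gbs(256, 8))   # GDDR5 @ 8 Gbps  -> 256.0 GB/s
print(peak_bandwidth_gbs(256, 16))  # GDDR6 @ 16 Gbps -> 512.0 GB/s
```

Same bus, double the per-pin rate, double the peak bandwidth; that is the whole headline.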
I would think that the design concept of HBM would allow it to be implemented on CPUs in the future (not just GPUs) and thus change the landscape drastically, as it would integrate system RAM onto the actual CPU package itself...
Hynix's GDDR is to HBM as Western Digital's HDDs are to SSDs. GDDR6 is appealing because it [probably] is a cheaper option than HBM without being inadequate, just as HDDs are a cheaper option for extra capacity compared to SSDs. Both GDDR6 and HDDs are technically worse, but they still have good reasons to exist. It'd be interesting to see two models of the same GPU with GDDR6 vs HBM, though I don't expect we'll ever see that. The fact that Nvidia wants access to HBM shows promise in the technology.
> It's only you. Even NVIDIA (despite not having any patents on it, and having to pay AMD for the interposer patents) is using HBM in products that require high performance in the memory subsystem. [...]
My intuition is this: HBM is a "leap" in performance, and companies don't like "leaps"; they want to "milk" GDDR until the last drop, similar to Intel, which didn't bring a new architecture for more than 5 years. (I'm not saying these two are exactly the same thing, but you get my point.)
> Hynix's GDDR is to HBM as Western Digital's HDDs are to SSDs. [...]
I would like to know how much more expensive it actually is, and whether that cost is inherent to the HBM design or a consequence of low investment in the manufacturing process, maybe to keep selling GDDR for as long as they can before switching. As a side note, I get your point, and you probably don't mean it in a literal/out-of-context way, but "HDDs are technically worse (than SSDs)" is not true. HDD density won't be beaten for some years, and density/cost probably never will be. HDDs also have better thermal and "unpowered time" endurance.
> ...similar to Intel, which didn't bring a new architecture for more than 5 years.
You mean 9 years? The "Core i#" series was first released in 2008, and though Kaby Lake is tremendously better than Nehalem, it's still fundamentally the same architecture.
> I would like to know how much more expensive it actually is, and whether that cost is inherent to the HBM design or a consequence of low investment in the manufacturing process. [...]
Due to HBM being more "3D", I'm guessing it is inherently more expensive to make than GDDR. By how much, I have no idea. It's the same idea as DVD vs CD: the materials and basic functionality aren't really any different, but a DVD has multiple layers that data is written on. Buying a pack of DVDs is pretty much always more expensive than CDs, despite the low demand and abundance. And yes, HDDs definitely win in terms of capacity-per-dollar (or just capacity in general) and will continue to do so for a while. As stated before, there are good reasons for them to still exist, which is why companies like Western Digital still make them despite their technical inferiority. But once you look at sub-terabyte sizes, SSDs are usually the better way to go.
> My intuition is this: HBM is a "leap" in performance, and companies don't like "leaps"; they want to "milk" GDDR until the last drop. [...]
GDDR5X is just about as fast as first-generation HBM. If the companies behind GDDR can keep up with HBM, and GDDR6 is close to HBM2 in performance or faster, I can't see Nvidia making the switch.
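The peak-bandwidth arithmetic roughly bears that out, at least on paper. A small sketch using the public specs of shipping parts of the day (GTX 1080/1080 Ti with GDDR5X, Fury X with HBM1, GP100 with HBM2); effective real-world bandwidth will differ:

```python
# Peak bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits per byte.
configs = {
    "GTX 1080 (GDDR5X, 256-bit @ 10 Gbps)":    (256, 10.0),
    "GTX 1080 Ti (GDDR5X, 352-bit @ 11 Gbps)": (352, 11.0),
    "Fury X (HBM1, 4096-bit @ 1 Gbps)":        (4096, 1.0),
    "GP100 (HBM2, 4096-bit @ 1.4 Gbps)":       (4096, 1.4),
}
for name, (width_bits, gbps) in configs.items():
    print(f"{name}: {width_bits * gbps / 8:.0f} GB/s")
# -> 320, 484, 512, 717 GB/s respectively
```

So a wide-bus GDDR5X card really does land in the same neighborhood as a 4-stack HBM1 part; HBM makes up for a very slow per-pin rate with an extremely wide bus.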
> My intuition is this: HBM is a "leap" in performance, and companies don't like "leaps"; they want to "milk" GDDR until the last drop. [...]
http://electroiq.com/insights-from-leading-edge/wp-content/uploads/sites/4/2016/03/Sys-plus-1.jpg This kind of shows the cost breakdown, but it doesn't break out the cost of the HBM module itself or the manufacturing process. Vega, at least in the versions we've seen, only has 2 stacks, so manufacturing should be a little cheaper than the 4 stacks on Fury/GP100. But it's HBM2, and we don't know whether the cost of those modules is higher or lower.

AFAIK the most expensive part is the TSV mounting process. They essentially grow tens of thousands of vias through each stack, and if a single one fails they lose the entire stack or bin it as a lower-capacity memory chip. The process itself is far more complex than mounting GDDR, so no matter what, it's going to be more expensive.

HBM is a leap in performance for specific workloads, but it doesn't really do much for gaming as far as we can measure, at least as long as there is sufficient memory in the first place. Vega has the cache controller, which uses its HBM a little differently and can boost minimums by a pretty awesome amount, but it only does that when it's out of memory in the first place. There could be future gaming implementations that utilize the latency reductions, though. Bandwidth, given Vega's stack limit, isn't that much higher than what GDDR5X offers; compared to the Ti it's not that much more.

Power consumption of HBM2 also scales differently based on bandwidth: https://www.extremetech.com/wp-content/uploads/2016/02/NV-HB.png This is a little old, but depending on the implementation, HBM can actually use more power overall. So there are some design/cost tradeoffs.
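A toy model of that power tradeoff: treat memory power as roughly energy-per-bit times bandwidth. The pJ/bit constants below are assumed, illustrative values in the ballpark of commonly quoted marketing figures (HBM is usually quoted at a few pJ/bit, GDDR at roughly three to four times that), not measurements:

```python
# Toy memory-power model: watts = (pJ per bit) x (bits per second) x 1e-12.
# The energy-per-bit constants are assumptions for illustration only.
def mem_power_watts(bandwidth_gbs: float, pj_per_bit: float) -> float:
    bits_per_second = bandwidth_gbs * 8e9  # GB/s -> bits/s
    return bits_per_second * pj_per_bit * 1e-12

print(mem_power_watts(320, 15.0))   # GDDR5X-class, 320 GB/s  -> ~38 W
print(mem_power_watts(512, 5.0))    # HBM-class, 512 GB/s     -> ~20 W
print(mem_power_watts(1000, 5.0))   # HBM2 pushed to 1 TB/s   -> ~40 W
```

Per bit, HBM stays far cheaper, but once an implementation chases much higher total bandwidth, the absolute memory power can end up above a lower-bandwidth GDDR setup, which matches the chart's takeaway.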
Interesting info. I didn't get the point about power/bandwidth; HBM seems to scale a lot better, so if you use more power with it, it's because you're getting insane bandwidth (unless GDDR has changed its projected curve since that graph was made). The cost breakdown is a little weird (but that might be ignorance on my side). All that extra PCB, cabling (lanes/whatever) and mounting work costs only $10? I would think just the extra copper for memory heatpipes would cost at least that :P Any idea why HBM is not being used in phones (premium ones, at least)? My whole point is just: "I think companies are not investing enough in HBM because they are comfortable with this (really) slow roadmap." (As a side note, I think the same is true for VR headsets, which still cost double what a whole console does.)
> Any idea why HBM is not being used in phones (premium ones, at least)?
I don't think most phones would benefit from it. Neither their CPUs nor GPUs are powerful enough to need anything more than GDDR5. I think some tablets (particularly Tegra ones) would benefit from HBM.
> (As a side note, I think the same is true for VR headsets, which still cost double what a whole console does.)
There are headsets that are affordable, if you're willing to make some sacrifices. I own an OSVR, which was $300. The HDK2 was recently released and that's $400 with a 2K display. You don't get the hand controllers, but you can get a LEAP Motion as a substitute. OSVR isn't as good of a platform and requires a lot more patience and technical knowledge, but it gets the job done.
> I would think that the design concept of HBM would allow it to be implemented on CPUs in the future (not just GPUs) and thus change the landscape drastically. [...]
That would be great from a performance standpoint for both GPUs and CPUs, but in the case of CPUs it would raise prices. Then again, that would be offset by not having to buy RAM separately, though it would mean multiple SKUs for different RAM amounts.
> I don't think most phones would benefit from it. Neither their CPUs nor GPUs are powerful enough to need anything more than GDDR5. [...]
Just the reduced power consumption and size seem like good enough reasons. But then, RAM power drain probably accounts for about 3% of the total (screens already account for 50% or more).