SK Hynix showcases first GDDR6 - Double The Bandwidth

Just the reduced power consumption and size seem like good enough reasons. But then, RAM power drain probably accounts for only ~3% of the total (the screen already accounts for 50% or more).
Agreed - battery life is important and somehow brands are like "hurr durr everyone wants the phone to be needlessly thin". HBM would also be good since it tends to take up less physical space, which HOPEFULLY means more room for a bigger battery.
HBM is the future, but until it matures GDDR will remain the mainstay at the consumer level for a few more years, especially with the bandwidth improvements we see here. Now, I understand latency is higher on GDDR6 than on HBM, but since we tend to focus more on gaming than on deep learning, it's almost a non-factor.
Well, if you look at the whole timeline, HBM first appeared in ultra-high-end consumer products something like two years ago. Now with Vega, the HBM threshold moves down to the mid-high end, ~$300-400 (whatever AMD's Vega answer to the 1070 turns out to be). In a couple of years I could see it moving down to where Polaris is now, so it will become truly mainstream. Judging by Vega's memory controller I could even see mixed configurations, although that would depend on implementation costs.
GDDR5X is just as fast as first-generation HBM. If the companies behind GDDR can keep pace with HBM, and GDDR6 ends up close to HBM2 in performance or faster, I can't see Nvidia making the switch.
NVIDIA has already made the switch in the performance segments that matter; they just won't use it in the mainstream yet because there is no reason to. There are very compelling reasons to use HBM in high-end products, the main ones being a huge reduction in latency and much more actually usable bandwidth.
http://electroiq.com/insights-from-leading-edge/wp-content/uploads/sites/4/2016/03/Sys-plus-1.jpg This kind of shows the cost breakdown - but it doesn't break out the cost of the HBM module itself or the manufacturing process. Vega, at least the versions we've seen, only has 2 stacks, so manufacturing should be a little cheaper than the 4 stacks on Fury/GP100. But it's HBM2, and we don't know whether those modules cost more or less.
This looks much better than I expected, actually. Isn't the whole point of HBM2 that it's easier to implement due to higher capacities?
AFAIK the most expensive part is the TSV mounting process. They essentially grow tens of thousands of vias through each stack, and if a single one fails they lose the entire stack or bin it as a lower-capacity memory chip. The process is far more complex than mounting GDDR, so no matter what it's going to be more expensive.
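The yield claim above can be made concrete with a little arithmetic. A minimal sketch in Python, assuming (purely for illustration - these are not Hynix's real figures) a fixed, independent success probability per via:

```python
def stack_yield(tsv_count: int, per_tsv_success: float) -> float:
    """Probability that every TSV in a stack is good, assuming
    independent failures at a fixed per-via success rate."""
    return per_tsv_success ** tsv_count

# Even a 99.999% per-via success rate bites once there are
# tens of thousands of vias in a single stack:
print(f"{stack_yield(5_000, 0.99999):.3f}")   # ~0.951
print(f"{stack_yield(40_000, 0.99999):.3f}")  # ~0.670
```

Which is exactly why a single dead via can force binning a whole stack as a lower-capacity part, and why testing at every assembly step matters so much.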
I'm reading in ExtremeTech that although we don't know the exact costs of the whole process, Hynix seems to have it down to a T, and that they are able to test every component required at each step of the assembly process, so bad assemblies rarely happen.
The report also details how Hynix manufactured the TSVs and the process it used for creating them. One thing the authors note is that while they expected to see “scallops” in the images (scallops are ridges formed in the sidewall during the etching process), Hynix apparently did an excellent job avoiding the problem. Hynix, the author concludes, “has got a great etch recipe.” The arrangement of the dies on the stack suggests that the first three DRAM dies were diced (cut from the wafer) as a group, while the top DRAM chip was cut separately, tested, and then attached to the stack. The entire four-die stack would then have been attached to the logic die. The advantage of this kind of configuration is that it offers Hynix ample opportunity to confirm that it’s building good die before attaching them in the final product.
I also wonder about the cost of a larger PCB with traces to every GDDR memory module, versus printing what is effectively a PCB on a dirt-cheap 65nm process. If Hynix really does have that good of an "etch recipe", then there is hope that with larger manufacturing scale the cost might come very close to GDDR's.
HBM is a leap in performance for specific workloads, but it doesn't really do much for gaming as far as we can measure - at least as long as there is sufficient memory in the first place. Vega has the cache controller, which uses its HBM a little differently and can boost minimums by a pretty awesome amount - but it only does that when the card is out of memory in the first place. There could be future gaming implementations that utilize the latency reductions, though. Bandwidth, given Vega's stack limit, isn't that much higher than what GDDR5X offers. Compared to the Ti it's not that much more.
David Kanter had a lot of things to say about why GDDR isn't really what most of us see in a simple spec sheet.
GDDR5x is not particularly great. You do crank up the interface speed but you're not able to get full mileage out of it
Unfortunately they haven't uploaded the podcast to YouTube, but here's the mp3 link. The part about GDDR5/6 vs HBM starts at around 39:30.
David Kanter had a lot of things to say about why GDDR isn't really what most of us see in a simple spec sheet. Unfortunately they haven't uploaded the podcast to YouTube, but here's the mp3 link. The part about GDDR5/6 vs HBM starts at around 39:30.
I love David Kanter. I'll check it out when I get home. And yeah, I think one of the design goals was to reduce the number of stacks required for decent bandwidth/capacity in order to help alleviate the manufacturing cost. Pretty sure it was in one of the SK Hynix slides/brochure release thingy for it.
I highly doubt Nvidia will ever use HBM in a GeForce card. Guess we'll have to wait and see.
Thank you! Personally I think this might happen only with some GeForce Titan Uber-Crazy edition. And, like I said in my first post, HBM will remain an "ultra niche type of memory for some non-end-user computing devices". To all others - you can of course believe GDDR is on its way out, but you are mistaken. Time will show.
My intuition is this: HBM is a "leap" in performance, and companies don't like "leaps" - they want to "milk" GDDR until the last drop, similarly to Intel, which didn't bring a new architecture for more than 5 years. (I'm not saying these two are the exact same thing, but you get my point.)
That whole sentence doesn't make sense. Companies love leaps, but they love profitability more. By your logic, if HBM cost half as much as GDDR and performed four times better, companies wouldn't like it, because they would want to... milk GDDR5 as much as possible? What? Companies will always balance whatever gets them to their performance goals against costs, profits, and availability. To claim there is anything else going on is... nonsense.
To all others - you can of course believe GDDR is on its way out, but you are mistaken. Time will show.
Lol, a bit full of yourself, eh? Gotta love it when people purposefully put their foot in their mouth. It happens time and time again throughout history: people state it won't happen, can't happen, etc., and declare that others are "mistaken". And time and time again, it happens, whatever it may be. Look at politics and science. Tell me, how can you state that others are mistaken about what will happen in the future? Are you a genie? Do you have a magic globe? Have you traveled in time? No to all of the above? Well, you might want to get off your high horse then.
GDDR5X is just as fast as first-generation HBM. If the companies behind GDDR can keep pace with HBM, and GDDR6 ends up close to HBM2 in performance or faster, I can't see Nvidia making the switch.
What are you smoking? There's almost a 100 GB/s difference between GDDR5X and first-gen HBM, and GDDR6 will trail HBM2 by quite a bit, let alone HBM3.
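For anyone who wants to check the spec-sheet math behind these comparisons: peak bandwidth is just bus width (in bits, divided by 8 to get bytes) times the per-pin data rate. A quick sketch using the publicly listed launch configurations:

```python
def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer across the
    bus (width / 8) times the per-pin data rate in Gbps."""
    return bus_width_bits / 8 * pin_rate_gbps

print(bandwidth_gbs(256, 10.0))   # GTX 1080, GDDR5X -> 320.0 GB/s
print(bandwidth_gbs(4096, 1.0))   # Fury X, HBM1     -> 512.0 GB/s
print(bandwidth_gbs(2048, 1.89))  # Vega 64, HBM2    -> ~484 GB/s
```

At those launch configurations the GDDR5X-vs-HBM1 gap was actually closer to 190 GB/s; the later 1080 Ti narrowed it by pairing 11 Gbps GDDR5X with a 352-bit bus (~484 GB/s), which is why "compared to the Ti it's not that much more" also holds.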