Samsung GDDR6 memory announcement spotted - going for 16 Gbps

Well, it's not solely used by AMD but they were largely the reason it exists to begin with. "Though it didn’t garner much attention at the time, in 2011 AMD and memory manufacturer Hynix (now SK Hynix) publicly announced plans to work together on the development and deployment of a next generation memory standard: High Bandwidth Memory (HBM). Essentially pitched as the successor to GDDR, HBM would implement some very significant changes in the working of memory in order to further improve memory bandwidth and turn back the dial on memory power consumption." https://www.anandtech.com/show/9266/amd-hbm-deep-dive
How does GDDR6 compare to HBMv2?
Waiting for Hynix GDDR6, as Nvidia's upcoming GPUs "maybe" will be using it.
schmidtbag:

I'm not; I don't care if Vega doesn't really take advantage of it, because that wasn't relevant to my point. HBM isn't an AMD technology (after all, Nvidia is considering using it too), but you're the one who decided to bring them up. I'm talking strictly about numbers, and I wasn't even referring to real-world performance.
Communism works great in theory. BTW Nvidia is already using HBM in their pro cards.
Loophole35:

Communism works great in theory.
Actually, it's pretty obviously terrible even in theory. Regardless, you are trying WAY too hard to argue with me, and for what? What part of "I'm strictly talking about numbers" do you not get? I'm not saying what's actually better, or more practical. I'm not referring to price, or whoever is using it. I'm merely pointing out how stark the difference is between HBM's bandwidth and GDDR6's. Y'know what does work great in reality? Not assuming everyone is a complete idiot.
It's off topic, but Marx is more argued about than read, which is also why he said, "If this is Marxism, then I am not a Marxist." Orwell tried to destroy the myth of USSR socialism with Animal Farm. To get back on topic, I hope AMD jumps on this if it is cheaper and has better availability than HBM2. So memory speeds are going up again; what about the GPUs?
schmidtbag:

Actually, it's pretty obviously terrible even in theory. Regardless, you are trying WAY too hard to argue with me, and for what? What part of "I'm strictly talking about numbers" do you not get? I'm not saying what's actually better, or more practical. I'm not referring to price, or whoever is using it. I'm merely pointing out how stark the difference is between HBM's bandwidth and GDDR6's. Y'know what does work great in reality? Not assuming everyone is a complete idiot.
Now you are claiming I'm calling you an idiot. Please show me where I said that. I think you need to step back and chill. You said HBM should be used in the high end. I simply pointed out that GDDR is still more viable in the high end by using, "GASP", examples. I'm not the one trying way too hard here. In fact, it was pretty easy to provide the info to substantiate my claim. Now you intend to backtrack and ignore that HBM in gaming has not translated to any advantage over GDDR. Here, I will remind you of your original statement.
schmidtbag:

Seems so slow compared to HBM2, but still a major improvement. Might be worth using for mid-range hardware, whereas HBM is best used in high-end and GDDR5 for everything else.
AGAIN, I compared the only available consumer-level HBM2-equipped card (Vega) to a card with GDDR5X (the 1080 Ti) that has the same memory bandwidth using only 11 Gbps memory. So please explain why something that is 5 Gbps faster than memory that already equals the HBM2-equipped card's performance is slower.
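As a side note, that bandwidth parity falls straight out of the standard formula: peak bandwidth = per-pin data rate x bus width / 8. A minimal sketch in Python, assuming the 1080 Ti's published 11 Gbps / 352-bit configuration and Vega 64's ~1.89 Gbps effective pin rate across two 1024-bit HBM2 stacks:

def bandwidth_gbs(pin_rate_gbps, bus_width_bits):
    # Peak bandwidth in GB/s: bits/s per pin, times pins, divided by 8 bits per byte.
    return pin_rate_gbps * bus_width_bits / 8

print(bandwidth_gbs(11, 352))     # GTX 1080 Ti, GDDR5X: 484.0 GB/s
print(bandwidth_gbs(1.89, 2048))  # Vega 64, 2 HBM2 stacks: ~483.8 GB/s

The same total can come from a fast-and-narrow bus or a slow-and-wide one, which is exactly the GDDR-versus-HBM tradeoff being argued here.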
Loophole35:

Now you are claiming I'm calling you an idiot. Please show me where I said that. I think you need to step back and chill.
I didn't say you said that, but every comment you've made against me so far has had some condescension in it. If that wasn't your intention, well, you have a knack for being needlessly antagonistic in general (toward anyone), so I'm going to assume the worst of intentions when something comes from you specifically. The fact that I don't even disagree with you and you're still trying to argue with me is proof enough of that.
You said HBM should be used in the high end. I simply pointed out that GDDR is still more viable in the high end by using, "GASP", examples. I'm not the one trying way too hard here. In fact, it was pretty easy to provide the info to substantiate my claim. Now you intend to backtrack and ignore that HBM in gaming has not translated to any advantage over GDDR.
Despite how many times I have to tell you I don't care, you keep pointing out examples as though it matters to me or somehow disproves my point. To remind you, my point ultimately comes down to nothing more than "these have very different numbers". You are obsessing over something I don't even disagree with. You're right - many things don't have a need for HBM2 [right now]. I have no problems with your examples. You're the one making a big deal out of this. I'm merely making speculations about future products. Why is that so offensive and objectionable to you?
AGAIN, I compared the only available consumer-level HBM2-equipped card (Vega) to a card with GDDR5X (the 1080 Ti) that has the same memory bandwidth using only 11 Gbps memory. So please explain why something that is 5 Gbps faster than memory that already equals the HBM2-equipped card's performance is slower.
Gee, I don't know, maybe because I wasn't referring to what's currently available to the consumer? Perhaps you should realize Vega isn't a perfect example of how to properly use HBM, and maybe use the Quadros as an example instead? Note that GDDR6 isn't available yet, so since the context of its usage implies future products, I was also considering the use of HBM in future products; anyone's products, not just AMD's (which you are oddly focused on). Volta, for example, is supposed to be a pretty big deal. There are already rumors that the Titans will use HBM2. That's a high-end product, hence making my original statement a [possible] reality. Why are you fighting something so petty?
Heh, no one is mentioning the power consumption numbers? HBM uses maybe 1/4 the wattage of current GDDR5. Had Vega used GDDR5, the addition of another 60 W to its power consumption would make it look even less appetizing. HBM is also easier to cool, as it sits right next to the core.
Krizby:

Heh, no one is mentioning the power consumption numbers? HBM uses maybe 1/4 the wattage of current GDDR5. Had Vega used GDDR5, the addition of another 60 W to its power consumption would make it look even less appetizing. HBM is also easier to cool, as it sits right next to the core.
Actually, HBM2 does not use that much less than GDDR5, and GDDR5 does not, as you imply, use 90 W.
Loophole35:

Um????? This 16 Gbps would equal 512 GB/s of bandwidth on a 256-bit bus. That is slightly more than Vega's 2 stacks of HBM2. I would venture to say that 16 Gbps is to be the standard, and I would not be surprised if we see variants around 18 Gbps. 16 Gbps would be 768 GB/s of memory bandwidth on a 384-bit interface. Like it or not, supply and cost will keep GDDR very relevant for years to come. Plus, the only perceivable advantage of HBM was not even on display with Vega; that card was damn near as big as the 1080 Ti. In short, HBM as of now should be reserved for professional cards, with GDDR continuing on gamer cards, which Nvidia will likely do but AMD likely will not.
Because GDDR6/6X is going to be going up against HBM3? Neither GDDR6 nor HBM3 is available or in use yet. HBM2 equals the best GDDR5/5X there is, currently.
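The figures in the quoted post come from the same peak-bandwidth formula. A rough Python sketch, treating the bus widths and the 18 Gbps bin as hypothetical configurations rather than announced products:

def bandwidth_gbs(pin_rate_gbps, bus_width_bits):
    # per-pin rate (Gbps) x bus width (bits) / 8 bits per byte
    return pin_rate_gbps * bus_width_bits / 8

for rate in (16, 18):       # Samsung's announced 16 Gbps, plus a speculative 18 Gbps variant
    for bus in (256, 384):  # common high-end GDDR bus widths
        print(f"{rate} Gbps x {bus}-bit = {bandwidth_gbs(rate, bus):.0f} GB/s")
# 16 Gbps: 512 GB/s (256-bit), 768 GB/s (384-bit); 18 Gbps: 576 and 864 GB/s

So even a 256-bit GDDR6 card would edge past Vega 64's 483.8 GB/s on paper.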
Evildead666:

Because GDDR6/6X is going to be going up against HBM3? Neither GDDR6 nor HBM3 is available or in use yet. HBM2 equals the best GDDR5/5X there is, currently.
Yes and no. Spec-wise, on paper it should be tearing up even GDDR5X, but it isn't. The programming with software and other hardware sync just isn't there. Intel is trying to do something about it, but you know, there is no surefire difference as of yet. GDDR6 is still fine. If Intel can pull off the HBM connection, then maybe we will see Nvidia go to HBM after GDDR6. I won't rule out a switch before GDDR7, but I don't think it will come sooner than that.
DDR4 prices stopped me from even considering building a PC right now. I got 16 GB of 1866 DDR3 from Amazon for my laptop for £60 (admittedly an unusually great deal). Even cheap desktop DDR4 is almost 3 times that cost, there are no good deals anywhere, and I'd want at least 32 GB, which would be over £300 for just the basic stuff (I need capacity, not speed). Just mental, horrible prices.
RedSquirrel:

DDR4 prices stopped me from even considering building a PC right now. I got 16 GB of 1866 DDR3 from Amazon for my laptop for £60 (admittedly an unusually great deal). Even cheap desktop DDR4 is almost 3 times that cost, there are no good deals anywhere, and I'd want at least 32 GB, which would be over £300 for just the basic stuff (I need capacity, not speed). Just mental, horrible prices.
Yes, prices are too high, and I agree: I would choose slow-but-sufficient RAM over ultra-fast-but-insufficient 100% of the time, unless there is the option of fast and sufficient at a reasonable price!
RedSquirrel:

DDR4 prices stopped me from even considering building a PC right now. I got 16 GB of 1866 DDR3 from Amazon for my laptop for £60 (admittedly an unusually great deal). Even cheap desktop DDR4 is almost 3 times that cost, there are no good deals anywhere, and I'd want at least 32 GB, which would be over £300 for just the basic stuff (I need capacity, not speed). Just mental, horrible prices.
Exactly. If I hadn't already committed to building my Ryzen rig, I would have stuck with my DDR3 setup. It cost a fracking $170 for a 2x8GB (16 GB) 3000 MHz kit. Personally, I like HBM more because of how small it lets the GPU's PCB be; I loved the size of my Fury X when I had it. It's different and not the norm. AMD doesn't always win by doing things differently, but at least they try. I almost feel like Nvidia is the Apple of GPUs anymore; I can't stand Nvidia's lack of innovation, IMHO. They stifle PC gaming beyond e-peen FPS bragging rights, outside of people with super-high-refresh-rate monitors. Remember how Nvidia was riding the GDDR3 train like hell, putting a 512-bit interface on the GTX 280 with GDDR3, and the same with the GTX 260 and its 448-bit interface, while the HD 4870 was already on GDDR5 with a 256-bit interface and packed nearly as much bandwidth as the GTX 280.
Evildead666:

Because GDDR6/6X is going to be going up against HBM3? Neither GDDR6 nor HBM3 is available or in use yet. HBM2 equals the best GDDR5/5X there is, currently.
Except, wrong. For what is available, sure; but if they were to make a card with 12 GB of memory, just like the top-end Nvidia cards, regardless of cost, HBM2 would immediately exceed GDDR6 bandwidths, let alone at higher capacities. What's holding HBM2 back is cost and supply. That's it. If those two things, especially the second, were not a problem, no one would entertain this nonsense idea that GDDR5/X matches HBM2; nobody can make that claim, since it is true in some scenarios and not in others.

HBM2 at 12 GB (three 4 GB stacks) would be around 700 GB/s of bandwidth. This is a fact, and the only reason we don't get it is cost(ish) and, more importantly, supply. Instead we are stuck with 8 GB cards, which completely outdo any other 8 GB card on the market:

HBM2 Vega 8 GB = 483.8 GB/s
GeForce GTX 1080 8 GB (GDDR5X) = 320/352 GB/s
Radeon RX 580 8 GB (GDDR5) = 256 GB/s

The only real reason GDDR5/X/6 has a place in the higher-end graphics card market is that HBM2 is not widely available, plain and simple.

And that's just HBM2. Imagine when HBM3 comes out: if it's twice as fast as claimed (no reason not to expect it), then by the time it arrives, 12 GB will likely be more normal (just look how fast graphics card memory sizes have gone up; it wasn't long ago that a GeForce GTX 780 Ti had 3 GB of memory, the same as the lower-end GTX 1060 today, and remember there was no 800 series, so that was only three generations ago), and it would have around 1400 GB/s of bandwidth (8 GB in two stacks would have around 800 GB/s), or 1.4 TB/s. What would be required for GDDR6 to reach 1.4 TB/s of bandwidth? You'd need a 384-bit bus with a memory speed of 29 Gbps. 29 Gbps... Sorry, I don't see that happening, not until far beyond 2020, and probably not until something like GDDR8/9.

As I said earlier in the thread, GDDR memory just isn't interesting; it crawls up the ladder of speed rather than leaping and being innovative. GDDR5X is already technically capable of 16 Gbps, and yet GDDR6 is starting right there.
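The stack math above can be checked the same way. A sketch, assuming the standard 1024-bit interface per HBM2 stack and Vega's ~1.89 Gbps effective pin rate; the doubled HBM3 rate is the post's speculation, not a published spec:

HBM_STACK_WIDTH_BITS = 1024   # each HBM/HBM2 stack exposes a 1024-bit interface
VEGA_PIN_RATE_GBPS = 1.89     # Vega 64's effective per-pin rate

def hbm_bandwidth_gbs(stacks, pin_rate_gbps=VEGA_PIN_RATE_GBPS):
    # stacks x interface width x per-pin rate, divided by 8 bits per byte
    return stacks * HBM_STACK_WIDTH_BITS * pin_rate_gbps / 8

print(hbm_bandwidth_gbs(2))                          # ~483.8 GB/s, Vega 64 today
print(hbm_bandwidth_gbs(3))                          # ~725.8 GB/s, the "around 700" 12 GB case
print(hbm_bandwidth_gbs(3, 2 * VEGA_PIN_RATE_GBPS))  # ~1451.5 GB/s if HBM3 doubles the pin rate

# Per-pin rate GDDR6 would need on a 384-bit bus to match 1.4 TB/s:
print(1400 * 8 / 384)                                # ~29.2 Gbps, matching the post's figure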
Loophole35:

Actually, HBM2 does not use that much less than GDDR5, and GDDR5 does not, as you imply, use 90 W.
Where did you come up with this? HBM had lower power consumption than GDDR5 and GDDR5X, and HBM2 has lower power consumption than HBM. I'll grant you, I can't find specifics comparing a consumer Vega HBM2 card to a GDDR5X card, but here is some info:

"Vega: Frontier Edition's 16GB HBM2 pulls 20W max, using a DMM to determine this consumption. And no more than an additional 10W for the controller – that's less than 30W for the entire memory system on Vega: Frontier Edition. We also know that an RX 480 uses 40-50W for its 8GB, which is already a significant increase in power consumption per-GB over Vega: FE. The RX 480 also has a memory bandwidth of 256GB/s with 8GB GDDR5, versus Vega 64's 484GB/s. The result is increased bandwidth, the same capacity, and lower power consumption, but at higher cost to build. In order for an RX 480 to hypothetically reach similar bandwidth, power consumption would increase significantly. Buildzoid calculates that a hypothetical 384-bit GDDR5 bus on Polaris architecture would push 60-75W, and an imaginary 512-bit bus would do 80-100W."

I'm all ears if there's better information out there, but there is really no reason HBM2 should not be using significantly less power. Another website also shows that HBM1 used about half the wattage that GDDR5 did at the time, and that GDDR5X was supposed to shave around 10% off wattage compared to GDDR5, which still does not put it anywhere near HBM1, let alone HBM2.
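To put those quoted figures on a common footing, here is a minimal per-bandwidth comparison in Python. The wattages are the estimates from the quote above (roughly 30 W for Vega FE's whole memory system, 40-50 W for the RX 480's GDDR5), not datasheet values:

# (watts for the memory system, peak bandwidth in GB/s)
cards = {
    "Vega FE, 16 GB HBM2 + controller": (30, 484),
    "RX 480, 8 GB GDDR5, 256-bit":      (45, 256),  # midpoint of the 40-50 W estimate
}
for name, (watts, gbs) in cards.items():
    print(f"{name}: {watts / gbs * 1000:.0f} mW per GB/s")
# ~62 mW per GB/s for the HBM2 system vs ~176 mW per GB/s for GDDR5 here

On these numbers, the HBM2 system delivers nearly twice the bandwidth at well under half the power per GB/s, which is the crux of the argument above.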
I'm looking forward to going green next year.
Aura89:

Where did you come up with this? HBM had lower power consumption than GDDR5 and GDDR5X, and HBM2 has lower power consumption than HBM. I'll grant you, I can't find specifics comparing a consumer Vega HBM2 card to a GDDR5X card, but here is some info:

"Vega: Frontier Edition's 16GB HBM2 pulls 20W max, using a DMM to determine this consumption. And no more than an additional 10W for the controller – that's less than 30W for the entire memory system on Vega: Frontier Edition. We also know that an RX 480 uses 40-50W for its 8GB, which is already a significant increase in power consumption per-GB over Vega: FE. The RX 480 also has a memory bandwidth of 256GB/s with 8GB GDDR5, versus Vega 64's 484GB/s. The result is increased bandwidth, the same capacity, and lower power consumption, but at higher cost to build. In order for an RX 480 to hypothetically reach similar bandwidth, power consumption would increase significantly. Buildzoid calculates that a hypothetical 384-bit GDDR5 bus on Polaris architecture would push 60-75W, and an imaginary 512-bit bus would do 80-100W."

I'm all ears if there's better information out there, but there is really no reason HBM2 should not be using significantly less power. Another website also shows that HBM1 used about half the wattage that GDDR5 did at the time, and that GDDR5X was supposed to shave around 10% off wattage compared to GDDR5, which still does not put it anywhere near HBM1, let alone HBM2.
Denial posted some info on it a while back in the AMD thread showing that, at its intended speed, Vega's HBM2 would actually use the same amount of power as Fiji's, or slightly more. If those HBM wattage numbers are accurate, then the GP102 core uses just 165-180 W while Vega's uses 250-270 W, and while AMD's cards tend to use more power, that is a bit excessive. I do agree that the main reason GDDR is being used is cost and supply (not just materials; the cost to manufacture is higher now).