Samsung GDDR6 memory announcement spotted - going for 16 Gbps

Love it, but I've noticed that DDR4 memory has gone up drastically in price recently. I wonder what's up with that?
maize1951:

Love it, but I've noticed that DDR4 memory has gone up drastically in price recently. I wonder what's up with that?
I remember reading here on Guru3D some months back that a supply shortage would cause a huge price spike later on.
Yeah, that's no joke. I got my 16GB Corsair kit for $112; now it's at $130+.
Seems so slow compared to HBM2, but still a major improvement. Might be worth using for mid-range hardware, whereas HBM is best used in high-end and GDDR5 for everything else.
kaz050:

Yeah, that's no joke. I got my 16GB Corsair kit for $112; now it's at $130+.
My 16GB kit was around $70 when new, and last time I checked it was over $140.
kaz050:

Yeah, that's no joke. I got my 16GB Corsair kit for $112; now it's at $130+.
I got mine for $75; now it's $135.
Got my 32GB DDR4 kit for £179, same kit is now £320
schmidtbag:

Seems so slow compared to HBM2, but still a major improvement. Might be worth using for mid-range hardware, whereas HBM is best used in high-end and GDDR5 for everything else.
Yeah, no. The effective bandwidth of the 1080 Ti with its 11Gb/s chips is higher than the Vega 64's. 16Gb/s chips would boost that card's bandwidth by about 46%.
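A quick sanity check of that ~46% figure, a minimal sketch using only the per-pin data rates quoted in the thread (the bus width cancels out of the ratio):

```python
# Relative gain from swapping 11 Gbps chips for 16 Gbps chips on the same memory bus.
old_rate_gbps = 11.0  # GDDR5X speed quoted in the thread
new_rate_gbps = 16.0  # Samsung's announced GDDR6 speed

boost_pct = (new_rate_gbps / old_rate_gbps - 1) * 100
print(f"Bandwidth boost: {boost_pct:.1f}%")  # -> 45.5%, i.e. "about 46%"
```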
Memory technology seems to be advancing at an insane rate right now. We were stuck on GDDR5 for ages, same with DDR3; now they're announcing that DDR5 should be out this year or next, and GDDR6 is coming next year with the new cards... hard to keep up with all the new tech coming out 😀 That being said, memory costs are insane.
I am very impressed! Thank you Hilbert for posting this; I am confident you will keep us updated too.
I just wish we would get an actual, proper timeline for this stuff. Every technology gets a vague release date, when really that's just when it enters the final phase of production preparation, or it's produced in such small quantities that it takes a full year before we see actual products using it. If people knew for sure that by the end of Q2 next year they would be able to buy Volta with GDDR6, and DDR5 as well, then they would be able to make informed purchases. I know that goes against the manufacturers' interest in selling soon-to-be-obsolete technology, but it would make things a lot clearer. DDR4 is at its most expensive, but if DDR5 is 18 months away from actually being available to buy, and prices are only going to go up more, then that makes the difference between buying now regardless or waiting.
Is it faster than HBM now?
schmidtbag:

Seems so slow compared to HBM2, but still a major improvement. Might be worth using for mid-range hardware, whereas HBM is best used in high-end and GDDR5 for everything else.
HBM2 isn't that great, read below.
sunnyp_343:

Is it faster than HBM now?
That depends on the configuration of the card, i.e. how wide the memory interface is. A 1080 Ti with 11Gbps GDDR5X uses a 352-bit memory bus for 484GB/s of bandwidth. Just replacing that with 16Gbps GDDR6 chips would get us to 704GB/s. Now for HBM2: Vega 64 has a memory bandwidth of 484GB/s (interestingly, the exact same value as a 1080 Ti). So basically, 11Gbps GDDR5X can already match HBM2. Of course you could double the HBM2 bandwidth by using more stacks, but that costs a lot more money as well. IMO GDDR6 is superior from a simple cost/performance standpoint, and unless you actually use four HBM stacks, in performance alone as well.
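For anyone who wants to reproduce those numbers, peak bandwidth is just bus width times per-pin data rate. A minimal sketch using the configurations discussed above (the ~1.89 Gbps per-pin rate for Vega 64's HBM2 is the published spec, not something stated in this thread):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) * per-pin rate (Gbps) / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(352, 11))     # GTX 1080 Ti, 11 Gbps GDDR5X on a 352-bit bus -> 484.0 GB/s
print(peak_bandwidth_gbs(352, 16))     # same bus with 16 Gbps GDDR6 chips            -> 704.0 GB/s
print(peak_bandwidth_gbs(2048, 1.89))  # Vega 64, two 1024-bit HBM2 stacks            -> ~484 GB/s
```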
sunnyp_343:

Is it faster than HBM now?
Depends on the amount of memory. Where HBM2 shines is with a lot of memory, if they made cards with it. Take the Tesla V100, with 16GB of HBM2 at 900GB/s. That's with four 4GB stacks, so roughly 225GB/s per stack. The GTX 1080 Ti has 11 chips and could do 12 (Titan). If they made a card with that many stacks, the bandwidth could be around 2,700GB/s, but that is highly unlikely, or I'd even go as far as to say it simply won't happen. Or, if they just matched the GTX 1080 Ti/Titan memory of 11-12GB with three stacks, the bandwidth would be around 700GB/s. Personally, GDDR5/X/6 doesn't excite me. Every time they announce something new, all I see is breadcrumbs. I mean, GDDR5X is already able to do 14-16Gbps, as stated in this article, and GDDR6 is... going for 16Gbps... I simply do not see the innovation, aside from efficiency.
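The stack arithmetic above works out the same way: the V100's 900GB/s over four stacks is roughly 225GB/s per stack, and total bandwidth scales linearly with stack count. A rough sketch (the 3- and 12-stack configurations are the hypothetical cards from this post, not real products):

```python
# Tesla V100: 900 GB/s from four HBM2 stacks -> ~225 GB/s per stack
per_stack_gbs = 900 / 4

for stacks in (3, 4, 12):
    print(f"{stacks} stacks: {stacks * per_stack_gbs:.0f} GB/s")
# 3 stacks:  675 GB/s  (the "~700 GB/s" 11-12GB case)
# 4 stacks:  900 GB/s  (Tesla V100)
# 12 stacks: 2700 GB/s (the hypothetical 12-stack card)
```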
RavenMaster:

Got my 32GB DDR4 kit for £179, same kit is now £320
Even DDR3 prices have skyrocketed for a while now. Back in February 2013 I bought my last DDR3 kits for a ridiculously low price: three kits of 2x 4GB (24GB total) Patriot Viper 3 series (Black Mamba Edition), 1866MHz PC3-15000, model PV38G186C9K, 1.5V, CL9... Just one month later the price had gone up some 50% here, and now DDR3 with similar specs is insanely expensive; you'd have to sell one of your kidneys to buy 6x 4GB DDR3-1866, at least where I live.
The main difference between HBM and GDDR/X is where the costs are. For HBM, the board itself is much less complex, but the memory costs more to manufacture and implement (interposer). For GDDR/X, the board is costly with 384/512-bit interfaces, but the memory chips themselves are bought en masse and can be binned, and are therefore cheaper than HBM. HBM3 will be a doubling of the speed also, IIRC. If they can use the small interposer tech from Intel to connect the HBM to the GPU, that would be a lot cheaper for AMD in the long run. Edit: I'm not dissing GDDR/X, it's fine for the moment. Up to 128-bit it's the only way to go. HBM is the way forward though. A single stack is equal to 256-bit GDDR/X, a dual stack is equal to 512-bit GDDR/X. GDDR/X does overclock better in general, though.
HBM2 also has tighter latency than GDDR5/GDDR6, so it isn't always about overall available bandwidth. HBM3 is also being developed. Cards will soon have 32GB of VRAM.
Evildead666:

The main difference between HBM and GDDR/X is where the costs are. For HBM, the board itself is much less complex, but the memory costs more to manufacture and implement (interposer). For GDDR/X, the board is costly with 384/512-bit interfaces, but the memory chips themselves are bought en masse and can be binned, and are therefore cheaper than HBM. HBM3 will be a doubling of the speed also, IIRC. If they can use the small interposer tech from Intel to connect the HBM to the GPU, that would be a lot cheaper for AMD in the long run. Edit: I'm not dissing GDDR/X, it's fine for the moment. Up to 128-bit it's the only way to go. HBM is the way forward though. A single stack is equal to 256-bit GDDR/X, a dual stack is equal to 512-bit GDDR/X. GDDR/X does overclock better in general, though.
Um????? This 16Gb/s would equal 512GB/s of bandwidth on a 256-bit bus. That is already more than Vega's two stacks of HBM2 (484GB/s). I would venture to say that 16Gb/s will be the standard, and I would not be surprised if we see variants around 18Gb/s. On a 384-bit interface, 16Gb/s works out to 768GB/s of memory bandwidth. Like it or not, supply and cost will keep GDDR very relevant for years to come. Plus, the only perceivable advantage of HBM, a smaller board, was not even on display with Vega; that card was damn near as big as the 1080 Ti. In short, HBM should for now be reserved for professional cards, with GDDR continuing to be used for gamer cards, which Nvidia will likely do but AMD likely will not.
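A quick check of those figures with the same bus-width-times-data-rate arithmetic used earlier in the thread (a sketch; the 18Gbps entry is the speculative variant mentioned above, not an announced part):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    # Peak bandwidth in GB/s = bus width (bits) * per-pin rate (Gbps) / 8
    return bus_width_bits * data_rate_gbps / 8

print(peak_bandwidth_gbs(256, 16))  # 256-bit GDDR6 @ 16 Gbps -> 512 GB/s (vs Vega 64's 484 GB/s)
print(peak_bandwidth_gbs(384, 16))  # 384-bit GDDR6 @ 16 Gbps -> 768 GB/s
print(peak_bandwidth_gbs(384, 18))  # speculative 18 Gbps variant -> 864 GB/s
```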
Loophole35:

Yeah, no. The effective bandwidth of the 1080 Ti with its 11Gb/s chips is higher than the Vega 64's. 16Gb/s chips would boost that card's bandwidth by about 46%.
Where did I mention Vega? That's right, I didn't. Don't jump to conclusions and start an unnecessary flame war, eh? This isn't an Nvidia vs AMD discussion.
schmidtbag:

Where did I mention Vega? That's right, I didn't. Don't jump to conclusions and start an unnecessary flame war, eh? This isn't an Nvidia vs AMD discussion.
If you want to take that as an attack on Vega or AMD, you need to chill. Vega is the latest card with HBM and I was simply using it for comparison. I'm just pointing out that, in practice, your post has been proven false. The biggest hurdle with HBM is cost: it currently costs more to equal the same performance as GDDR. Read my post right above yours.
Loophole35:

If you want to take that as an attack on Vega or AMD, you need to chill. Vega is the latest card with HBM and I was simply using it for comparison. I'm just pointing out that, in practice, your post has been proven false. The biggest hurdle with HBM is cost: it currently costs more to equal the same performance as GDDR. Read my post right above yours.
I'm not; I don't care if Vega doesn't really take advantage of it, because that wasn't relevant to my point. HBM isn't an AMD technology (after all, Nvidia is considering using it too), but you're the one who decided to bring them up. I'm talking strictly about numbers, and I wasn't even referring to real-world performance.