AMD Working on GDDR6 DRAM Controller

This could be Big News!
RAM is important, but we need proper GPU power to cope with it. Resolutions up to 8K are already smooth with that kind of bandwidth; I can't say the same about the raw power of the processing unit. 🙁 Sometimes technology doesn't progress equally across all aspects.
warlord:

RAM is important, but we need proper GPU power to cope with it. Resolutions up to 8K are already smooth with that kind of bandwidth; I can't say the same about the raw power of the processing unit. 🙁 Sometimes technology doesn't progress equally across all aspects.
I wrote this in the other GDDR6 thread: faster RAM can lead to a faster GPU indirectly. Previously an AMD card may have required a 256-bit bus to hit a given bandwidth x, but with GDDR6 they may only need a 128-bit bus to hit that same x. This reduces power consumption, which can then be spent on faster clocks, and it shrinks the die area taken up by the memory controller, which potentially means you can fit more cores in the same size chip.
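For concreteness, here is the back-of-the-envelope math behind that trade-off, as a minimal sketch (the per-pin data rates are typical spec figures, not tied to any particular card):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * gbps_per_pin / 8

# GDDR5 tops out around 8 Gbps per pin; GDDR6 launched targeting 14-16 Gbps.
print(peak_bandwidth_gbs(256, 8))   # 256.0 GB/s on a 256-bit GDDR5 bus
print(peak_bandwidth_gbs(128, 16))  # 256.0 GB/s on a 128-bit GDDR6 bus
```

Same peak bandwidth, half the bus width, which is where the power and die-area savings come from.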
Denial:

I wrote this in the other GDDR6 thread: faster RAM can lead to a faster GPU indirectly. Previously an AMD card may have required a 256-bit bus to hit a given bandwidth x, but with GDDR6 they may only need a 128-bit bus to hit that same x. This reduces power consumption, which can then be spent on faster clocks, and it shrinks the die area taken up by the memory controller, which potentially means you can fit more cores in the same size chip.
I agree with you technically, but think of it like this, mate: a motorway or a big wide road translates into bus width (bits); the quality of that road (surface, asphalt, safety precautions, etc.) translates into bandwidth and RAM generation. But at the end of the day, the most game-changing factor is the car itself: its model, its year, and above all its horsepower and capabilities (the GPU core, in this case). All in all, I believe we should first have demand from a GPU core that genuinely needs that kind of support, rather than paying DRAM manufacturers to test their new products. The GPU's hardware as a whole should scale equally as the years pass.
Denial:

I wrote this in the other GDDR6 thread: faster RAM can lead to a faster GPU indirectly. Previously an AMD card may have required a 256-bit bus to hit a given bandwidth x, but with GDDR6 they may only need a 128-bit bus to hit that same x. This reduces power consumption, which can then be spent on faster clocks, and it shrinks the die area taken up by the memory controller, which potentially means you can fit more cores in the same size chip.
Plus: a smaller bus equals cheaper cards.
HBM is great, practical, and arguably necessary, but only for compute tasks. I recall people here saying that Vega doesn't take advantage of HBM, and that is simply false for certain GPGPU tasks. Pretty much every time a Vega 64 outperforms a 1080 Ti, it's because HBM kicked in. For gaming purposes, HBM is overkill and a needless expense. I seriously hope both AMD and Nvidia commit to HBM for FirePro/Quadro/Titan GPUs, but for everyone else's sake, GDDR6 is obviously the better choice. Until HBM can be mass-produced affordably, I don't want to see it on consumer/gamer GPUs.
schmidtbag:

HBM is great, practical, and arguably necessary, but only for compute tasks. I recall people here saying that Vega doesn't take advantage of HBM, and that is simply false for certain GPGPU tasks. Pretty much every time a Vega 64 outperforms a 1080 Ti, it's because HBM kicked in. For gaming purposes, HBM is overkill and a needless expense. I seriously hope both AMD and Nvidia commit to HBM for FirePro/Quadro/Titan GPUs, but for everyone else's sake, GDDR6 is obviously the better choice. Until HBM can be mass-produced affordably, I don't want to see it on consumer/gamer GPUs.
Power efficiency is the second big HBM gain; you simply cannot go against physics. HBM is stacked and sits very close to the GPU, so it consumes much less power than conventional memory. I've seen estimates online saying that if AMD had gone the GDDR5 route, it would have cost them about 100 W more than HBM2.
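As a rough sanity check on that figure: interface power scales with energy per bit times bandwidth. The pJ/bit values below are ballpark estimates that circulate online, not official specs, so treat this as a sketch under those assumptions:

```python
# Approximate DRAM interface power: energy-per-bit (pJ) x bits transferred per second.
def memory_power_watts(bandwidth_gbs: float, pj_per_bit: float) -> float:
    """Rough interface power in watts at a given sustained bandwidth."""
    bits_per_second = bandwidth_gbs * 1e9 * 8
    return bits_per_second * pj_per_bit * 1e-12

BANDWIDTH_GBS = 484  # Vega 64's HBM2 bandwidth

# Assumed ballpark figures: ~4 pJ/bit for HBM2, ~20 pJ/bit for GDDR5.
print(memory_power_watts(BANDWIDTH_GBS, 4))   # ~15 W for HBM2
print(memory_power_watts(BANDWIDTH_GBS, 20))  # ~77 W for GDDR5
```

That crude estimate puts the gap in the tens of watts at Vega-class bandwidth, the same order of magnitude as the ~100 W figure quoted above.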
AMD's problem, in my opinion, is that they went overboard on the GPU core; a 3096-shader GPU with higher clocks would probably have been more efficient, performed about the same, and kept costs down.
icedman:

AMD's problem, in my opinion, is that they went overboard on the GPU core; a 3096-shader GPU with higher clocks would probably have been more efficient, performed about the same, and kept costs down.
From all the sites I visit, they all come to the same conclusions about AMD:
A) They don't have the resources to develop two (or more) different architectures, one purely for graphics and a second for compute, so they develop one jack-of-all-trades that is not very efficient.
B) Process node. GloFo's 14nm, co-developed with Samsung (with Samsung in charge), is optimized for low power and low clocks (phones). That process is great on power if you stay within its limits, but go over and it's game over: power draw goes through the roof.
C) What you said: if AMD lowered the clocks, Vega would be great power-wise (there are plenty of examples and reviews online of people underclocking and undervolting Vega, with fantastic results).
What's also interesting on that page is DDR5 in 2018, though by the look of it, it doesn't start getting any faster until 2020. Really, DDR5 already? Will that mean new RAM sockets will be needed, or will current DDR4 DIMMs still work with DDR5?
This has been known for quite some time; references to GDDR6 have been present in the ADL libraries for almost two years now. I will say, the implementation of Infinity Fabric on AMD's GPUs should make future GPU development faster and cheaper: they can basically cut and paste various blocks (like memory controllers) now.
warlord:

I agree with you technically, but think of it like this, mate: a motorway or a big wide road translates into bus width (bits); the quality of that road (surface, asphalt, safety precautions, etc.) translates into bandwidth and RAM generation. But at the end of the day, the most game-changing factor is the car itself: its model, its year, and above all its horsepower and capabilities (the GPU core, in this case). All in all, I believe we should first have demand from a GPU core that genuinely needs that kind of support, rather than paying DRAM manufacturers to test their new products. The GPU's hardware as a whole should scale equally as the years pass.
It's better to have memory that is faster than what the GPU requires than to have a GPU that requires data faster than the memory can deliver it. In other words, the GPU itself should be the sole bottleneck of a graphics card, never the memory. If the memory is the bottleneck, the engineers screwed up. Expecting GPU makers and memory makers to develop products that perfectly complement each other is insane and would increase prices to the point that the dedicated graphics market would almost completely collapse.
Gee, maybe they'll actually sell video cards now instead of just saying "we don't make enough profit on HBM cards" and selling nothing.
warlord:

RAM is important, but we need proper GPU power to cope with it. Resolutions up to 8K are already smooth with that kind of bandwidth; I can't say the same about the raw power of the processing unit. 🙁 Sometimes technology doesn't progress equally across all aspects.
It's about production cost. They don't want to go back to GDDR5 for their planned flagships, and they don't want to pay for HBM. They gambled on being able to produce it at lower cost by now, and failed. In turn, we all got screwed for it, as if the GPU market weren't abysmal enough.
The weird thing is that now AMD will use GDDR and Nvidia will use HBM2 :V so WTF!
Silva:

Plus: a smaller bus equals cheaper cards.
I remember all the "experts" on this and other forums scoffing at 256-bit cards. With GDDR6 you could have 780 Ti bandwidth on a 128-bit memory bus.
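Running the numbers on that comparison with the same formula as earlier (the higher GDDR6 data rate below is an assumed future speed grade, not a shipping part):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

print(peak_bandwidth_gbs(384, 7))   # 336.0 GB/s - GTX 780 Ti (384-bit GDDR5 at 7 Gbps)
print(peak_bandwidth_gbs(128, 16))  # 256.0 GB/s - 128-bit bus at GDDR6's initial 16 Gbps
print(peak_bandwidth_gbs(128, 21))  # 336.0 GB/s - a 128-bit bus needs ~21 Gbps to match
```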
Amx85:

The weird thing is that now AMD will use GDDR and Nvidia will use HBM2 :V so WTF!
Both companies have used and will continue to use both... HBM2 doesn't make sense on budget cards, and GDDR doesn't make sense on compute cards. The only reason Vega has HBM2 is that AMD can't afford to simultaneously develop as many variants as Nvidia can.
schmidtbag:

HBM is great, practical, and arguably necessary, but only for compute tasks. I recall people here saying that Vega doesn't take advantage of HBM, and that is simply false for certain GPGPU tasks. Pretty much every time a Vega 64 outperforms a 1080 Ti, it's because HBM kicked in. For gaming purposes, HBM is overkill and a needless expense. I seriously hope both AMD and Nvidia commit to HBM for FirePro/Quadro/Titan GPUs, but for everyone else's sake, GDDR6 is obviously the better choice. Until HBM can be mass-produced affordably, I don't want to see it on consumer/gamer GPUs.
Denial:

Both companies have used and will continue to use both... HBM2 doesn't make sense on budget cards, and GDDR doesn't make sense on compute cards. The only reason Vega has HBM2 is that AMD can't afford to simultaneously develop as many variants as Nvidia can.
I wonder what the effect of HBM would be for multi-chip configurations like Navi. Maybe it makes a lot of sense there, and it will end up actually enabling cheaper GPUs.