Graphics card memory: GDDR5X to make an appearance

Good to know. Great for those running at 4K resolution or with multiple monitors...

This is really stretching old GDDR5 to its absolute limits. HBM, by contrast, is still in its infancy, so the only way for it is up. While I think this is pretty sweet, since it would give middle (and perhaps even lower) tier video cards a juicy boost, I still hope HBM# gets the major focus. That's the way to go in the future, so it needs funding and production volume to eventually become mainstream.

Wait for HBM v2. GDDR5X is the same as GDDR5 in power consumption. That's why HBM came to market: to compensate for the high power consumption of GDDR5.

> Wait for HBM v2. GDDR5X is the same as GDDR5 in power consumption. That's why HBM came to market: to compensate for the high power consumption of GDDR5.

Increasing the manufacturing cost of the chip by 2x isn't always worth the ~30 W saved by going to HBM. GDDR5X is clearly aimed at the mid-range/low end, where bandwidth and power aren't really the concern; cheaper cards are.
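A rough sanity check on that trade-off, as a back-of-envelope sketch: the efficiency figures below are the ones AMD showed when unveiling HBM (about 10.66 GB/s per watt for GDDR5, 35+ GB/s per watt for HBM) and should be treated as ballpark assumptions, not measurements.

```python
# Back-of-envelope: memory power needed to feed a given bandwidth.
# Efficiency figures are the oft-quoted ones from AMD's HBM slides;
# treat them as rough assumptions, not measured values.

GDDR5_GBPS_PER_WATT = 10.66
HBM_GBPS_PER_WATT = 35.0

def memory_power_watts(bandwidth_gbs: float, efficiency: float) -> float:
    """Estimated memory-subsystem power for a target bandwidth."""
    return bandwidth_gbs / efficiency

target = 336.0  # GB/s, roughly a GTX 980 Ti's memory bandwidth

gddr5 = memory_power_watts(target, GDDR5_GBPS_PER_WATT)
hbm = memory_power_watts(target, HBM_GBPS_PER_WATT)

print(f"GDDR5: ~{gddr5:.0f} W, HBM: ~{hbm:.0f} W, saved: ~{gddr5 - hbm:.0f} W")
# -> GDDR5: ~32 W, HBM: ~10 W, saved: ~22 W; the same order as the
#    ~30 W mentioned above.
```

Roughly 20-30 W either way: meaningful within a 250 W flagship's power budget, but small next to doubling the memory packaging cost on a cheap card.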

> Wait for HBM v2. GDDR5X is the same as GDDR5 in power consumption. That's why HBM came to market: to compensate for the high power consumption of GDDR5.

Not just power consumption, but also PCB space for much shorter cards, fewer traces on the board, fewer VRMs; basically every single aspect is less of everything in return for insanely high and efficient speeds. I honestly don't blame Nvidia for not going HBM straight away. It's new, expensive to manufacture, and quite frankly unproven so far; even the Fury X only proves it can be done, and the extra speed has so far been of limited benefit. By staying with GDDR5 and improving it, Nvidia will save money: its costs are now pretty low, they know what they are doing, they have fabs in production already, and it's probably cheaper for the time being while still dramatically improving speeds. Nvidia will definitely go HBM2 for the high-end market though, or at least for the Quadro/Tesla cards.

> Not just power consumption, but also PCB space for much shorter cards, fewer traces on the board, fewer VRMs; basically every single aspect is less of everything in return for insanely high and efficient speeds. [...]

Indeed, throughput matters more in high-end server environments. For gamers and the average consumer it really doesn't matter; just take a look at the high-end cards, where memory OC doesn't do jack sh!t.

I find this a good strategic move; what would HBM be used for if the card itself doesn't have the power to push high resolutions anyway? Imagine the GTX 950 4GB version, or even the 960 4GB. Of course it depends on how Nvidia handles it: it would be somewhat of a bad move to put this on a theoretical 1070/1080, for example, because those cards can (presumably) push higher resolutions and have the horsepower to match. I can definitely see this going on lower-end cards; high-end gaming and workstation cards would definitely require HBM2. And remember, if supply is too low, the price goes even higher, so GDDR5X may be a good counterweight to the supply problem.

I would rather have this on a high-end board so there are no freezing issues and to keep the cost down. If the power draw is the same, it should be no issue. HBM? Yeah, but it's too early for much gain from it. Shorter boards I couldn't care less about, so till it really shows a great boost over GDDR5 I can live without it... or maybe AMD's next core lays waste to NV's offering, then I'll try it; that is, if they don't abandon it before it matures.

Moderator
Looking at the power consumption this will have, I don't think it bothers me too much, because as an enthusiast I don't care about the power draw of the memory on my video card. Don't get me wrong, I like that the Fury X is smaller and that HBM helps with power consumption, but that isn't a priority in how I decide what to buy. Not that there is a huge market for it yet, but I do see this not working too well in mITX builds. That's just me though.

Good news is that GDDR5X should keep HBM prices in check. Bad news is that it will prevent en-masse adoption of HBM, and therefore prevent HBM from becoming even cheaper. What?... Now I'm confused.

This sounds like great news for cheaper low- to mid-end cards. I can't wait to see what 14/16 nm cards bring with this tech. :banana:

> I would rather have this on a high-end board so there are no freezing issues and to keep the cost down. [...] Or maybe AMD's next core lays waste to NV's offering, then I'll try it; that is, if they don't abandon it before it matures.

Talking 'bout LN2, are you? 😀 cowie the die-hard bencher

If you have a card with 16-32 GB of memory, can't it just load the whole game into memory and stream from there?

There might be a possibility here for nVidia and AMD to offer this memory at lower speeds/voltages than regular GDDR5 so as to lower power consumption. And at lower speeds they might be able to save on the complexity of the memory controller; IIRC AMD used a simpler controller on the HD 6870 and saved money by doing so. So there might be an opportunity here for power savings and production cost savings, while still offering more bandwidth than a 980 Ti from a 256-bit memory bus. That would be tremendously good news for the manufacture of even high-end cards in the short and medium term.

The big caveat is the cost of this memory. If it isn't much more expensive than regular GDDR5, I can see HBM being reserved for just the flagship cards. Again, the price of the updated HBM modules will be a huge factor, and so will availability. I read the release of GDDR5X as a response to strong demand.

However things break, the outlook has just improved: either HBM will be affordable, awesome, and available in the not-too-distant future, or we'll have a very good alternative to see us through until HBM is affordable and commonplace. This was good news. 🙂
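The 256-bit claim is easy to check on paper: peak bandwidth is just bus width times effective data rate. A minimal sketch, where the 980 Ti numbers are its published specs and the GDDR5X rates are the 10-14 Gbps targets Micron has mentioned, not shipping parts:

```python
# Peak memory bandwidth in GB/s = (bus width in bits / 8) * data rate in Gbps.
# 980 Ti figures are published specs; GDDR5X rates are early targets.

def bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical memory bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(384, 7.0))   # 980 Ti: 384-bit GDDR5 @ 7 Gbps -> 336 GB/s
print(bandwidth_gbs(256, 10.0))  # GDDR5X @ 10 Gbps on 256-bit    -> 320 GB/s
print(bandwidth_gbs(256, 12.0))  # GDDR5X @ 12 Gbps on 256-bit    -> 384 GB/s
```

Anything above 10.5 Gbps lets a plain 256-bit bus clear the 980 Ti's 336 GB/s, which is what makes the cheaper, simpler board designs plausible.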

> If you have a card with 16-32 GB of memory, can't it just load the whole game into memory and stream from there?
Not exactly. While a game might take up 20GB on disk, the data there is compressed and indexed for storage purposes. When actually running the game, all the textures and geometry must be fed to the GPU in a format that the GPU can understand. There is also a lot of repetition that happens behind the scenes, for example hardware-generated mipmaps of a texture will inflate the memory usage as well. When also considering the buffers necessary at every stage in the render pipeline, a few hundred megabytes of textures and geometry on disk could translate to over a gig of VRAM use when running. As storage gets faster, cheaper, and more unified with the rest of the system memory space (memristors?), it will make more sense to store assets in a way that the GPU can directly consume.
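The mipmap overhead alone is easy to quantify: each level is a quarter the size of the one above, so a full chain costs about 4/3 of the base level (1 + 1/4 + 1/16 + ... = 4/3). A quick illustration, assuming an uncompressed RGBA8 texture:

```python
# VRAM footprint of a square RGBA8 texture with a full mip chain.
# Each mip level halves the width/height (a quarter of the pixels)
# of the level above, down to 1x1.

BYTES_PER_PIXEL = 4  # RGBA8: 8 bits per channel, 4 channels

def texture_bytes(size: int, mipmapped: bool = True) -> int:
    total = 0
    while size >= 1:
        total += size * size * BYTES_PER_PIXEL
        if not mipmapped:
            break
        size //= 2
    return total

print(texture_bytes(4096, mipmapped=False) / 2**20)  # 64.0  MiB, base level only
print(texture_bytes(4096, mipmapped=True) / 2**20)   # ~85.3 MiB with mips (~1.33x)
```

And that is before render targets: a single 3840x2160 RGBA8 buffer is another ~32 MiB, and a pipeline typically keeps several such buffers alive at once.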

Do these help with the stutter you get if you play at 4K *and* enable anti-aliasing? Some indie games that don't use many resources start to stutter when doing that, even though GPU and CPU utilization isn't anywhere near 100%.