Upcoming GeForce GTX Volta cards use GDDR5X, not HBM2

It's probably cheaper, plus AMD has been working with HBM for a while, as we know. Nvidia is probably still getting its feet wet with it. Kind of reminds me of a certain war between two optical storage formats just a few years back...
Smart business plan: make the best product possible with the cheapest components available. Why do I care what memory is being used if it works and doesn't cost me my second-born and my favorite dog? :approval:
I mostly thought that Volta would use GDDR6 (on consumer gaming parts), not GDDR5X... Anyway, we should certainly see GV104 first, and if they want to release it this year, GDDR5X is the only solution (and at the same data rate as today, as I don't think they can push GDDR5X much higher).
HBM doesn't provide any real benefits to consumer products while costing a kidney and causing your product to be delayed by more than a year, so it's no surprise Nvidia isn't rushing to use it. BTW, this "bandwidth" advantage of HBM is a myth. True, it HAS an advantage in a 1-chip-vs-1-chip comparison, but we need to look at the whole package (and consider whether such bandwidth is actually needed at all...). Just look at the 1080 Ti with GDDR5X having more memory bandwidth than the unreleased Vega with HBM2.
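For the bandwidth comparison, a quick back-of-the-envelope check (a minimal sketch; the 1080 Ti figures are its published specs, while the Vega numbers are the pre-launch figures floating around at the time):

```python
# Peak memory bandwidth = bus width (bits) x per-pin data rate (Gbit/s) / 8 bits-per-byte.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

# GTX 1080 Ti: 352-bit GDDR5X at 11 Gbps per pin
print(bandwidth_gb_s(352, 11))     # 484.0 GB/s

# Vega 10 (rumoured): two HBM2 stacks, 2048-bit at ~1.89 Gbps per pin
print(bandwidth_gb_s(2048, 1.89))  # ~483.8 GB/s
```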
Since 4K will obviously be the target for the upcoming Volta cards, NVidia had better optimize memory I/O even further if they stick to GDDR. MSAA seems to be dead, so they will probably get away with this just fine.
Since 4K will obviously be the target for the upcoming Volta cards, NVidia had better optimize memory I/O even further if they stick to GDDR. MSAA seems to be dead, so they will probably get away with this just fine.
I wish FXAA would go find a quiet corner to die in.
Trust me, it has benefits that Volta just will not have. https://www.youtube.com/watch?v=85ProuqAof0
Increased framerates when you run out of VRAM isn't something I'd consider a must-have - just give the card enough VRAM in the first place. Regardless, I think everyone knows HBM2 has benefits; the question is whether those benefits are worth the increased cost, potential delays, etc. that seem to stem from using it. In gaming, I personally don't see the value.
GeForce cards are their gaming cards, so they don't need it; they only use HBM on their data centre cards (Teslas), where it is needed.
GeForce cards are their gaming cards, so they don't need it; they only use HBM on their data centre cards (Teslas), where it is needed.
Agreed, the HBM Tesla is sold on request and costs... way too much for a gaming GPU. 🙂 Also, availability of HBM is an issue (for both Nvidia and AMD, even though AMD is first in line), and so is the price. Sometimes technical evolution is hard 😛 ...
Increased framerates when you run out of VRAM isn't something I'd consider a must-have - just give the card enough VRAM in the first place. Regardless, I think everyone knows HBM2 has benefits; the question is whether those benefits are worth the increased cost, potential delays, etc. that seem to stem from using it. In gaming, I personally don't see the value.
I believe that this kind of design will also have an impact whenever there is VRAM<->RAM communication, and not only when you run out of VRAM. That could have a lot of interesting implications for microstutter across the board.
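To put rough numbers on the microstutter worry (a toy estimate; ~16 GB/s assumed for PCIe 3.0 x16 and ~484 GB/s for local GDDR5X):

```python
# Toy estimate of why a mid-frame VRAM<->system-RAM transfer shows up as stutter.

PCIE_GB_S = 16.0    # assumed effective PCIe 3.0 x16 throughput
VRAM_GB_S = 484.0   # assumed local GDDR5X bandwidth

def transfer_ms(size_mb: float, rate_gb_s: float) -> float:
    return size_mb / 1024 / rate_gb_s * 1000

texture_mb = 64  # a largish texture paged in mid-frame
print(f"over PCIe: {transfer_ms(texture_mb, PCIE_GB_S):.2f} ms")   # ~3.91 ms
print(f"from VRAM: {transfer_ms(texture_mb, VRAM_GB_S):.3f} ms")   # ~0.129 ms
# A 60 fps frame budget is ~16.7 ms, so one such page-in over the bus can eat
# roughly a quarter of a frame - enough to be felt as microstutter.
```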
It's probably cheaper, plus AMD has been working with HBM for a while, as we know. Nvidia is probably still getting its feet wet with it. Kind of reminds me of a certain war between two optical storage formats just a few years back...
AMD only has a small advantage there; Nvidia worked on HBM first, decided not to use it, and then worked with HBM2 (well... to reduce costs they need it to be sold to everyone 🙂 ). About the optical formats, we should remember that both became obsolete and never sold much (one is dead, the other nearly dead: a draw), while outsiders like the cloud and VOD rose and won.
TXAA is the future
Nope. TXAA is MSAA with temporal filters on top. Games that can't support MSAA also can't support TXAA. If MSAA is dead, so is TXAA. Maybe you meant to say TAA, which is post-process temporal AA. It's not bad at 4K. It is a bit blurry, but at this point, it's the only thing that gets rid of shimmering, so I'll take it.
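For anyone wondering where the TAA blur comes from: at its core, a post-process temporal AA pass is an exponential blend of the current frame into a reprojected history buffer. A toy 1-D sketch (ignoring the motion-vector reprojection and neighbourhood clamping real implementations need):

```python
import numpy as np

ALPHA = 0.1  # weight of the new frame; lower = smoother edges but blurrier/ghostier

def taa_accumulate(history: np.ndarray, current: np.ndarray, alpha: float = ALPHA) -> np.ndarray:
    # Exponential moving average: most of the history survives each frame.
    return (1.0 - alpha) * history + alpha * current

history = np.zeros(8)
for _ in range(32):
    current = np.random.rand(8)   # stand-in for an aliased/noisy frame
    history = taa_accumulate(history, current)
print(history.round(2))           # settles around the mean signal, i.e. smoothed (and softened)
```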
Nvidia is probably still getting its feet wet with it.
I'm a fan of Ryzen, but Vega is 6 months too late to be relevant. Nvidia has HBM on its top products for companies; their consumer GPUs don't use it because there's no point in thinning margins with more expensive tech when you can get the same performance for cheap. HBM has the potential to be great, but right now it's dragging AMD's GPU division down in real time!
Increased framerates when you run out of VRAM isn't something I'd consider a must-have - just give the card enough VRAM in the first place.
Wrong, because adding more RAM is expensive, and if you can lift the physical limit for developers with intelligent controllers and software, everyone can benefit from cheaper products and more features!
I believe that this kind of design will also have an impact whenever there is VRAM<->RAM communication, and not only when you run out of VRAM. That could have a lot of interesting implications for microstutter across the board.
Actually, this isn't so much about how much VRAM there is (whether you have 4, 8 or 16 GB), but about reducing the memory footprint in VRAM (meaning being able to store more data, and more efficiently than today)... (a bit like compression today, but more effective). If it works on 4 GB GPUs it will also work on 8-16 GB GPUs; only the performance benefit won't be the same.
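A crude sketch of that footprint idea, with made-up sizes: if only the pages a frame actually touches have to sit in VRAM, the same assets take far less room, whatever the card's total capacity:

```python
# Whole-resource vs page-granular residency (hypothetical asset sizes and touch rate).

PAGE_MB = 2
texture_sizes_mb = [256, 256, 128, 512, 1024]  # made-up resources
touched_fraction = 0.3                          # share of each texture a frame samples

def resident_pages_mb(size_mb: float) -> int:
    touched = size_mb * touched_fraction
    return -(-int(touched) // PAGE_MB) * PAGE_MB  # round up to whole pages

whole_resource = sum(texture_sizes_mb)
page_granular = sum(resident_pages_mb(t) for t in texture_sizes_mb)
print(whole_resource, "MB resident if whole resources must fit in VRAM")    # 2176 MB
print(page_granular, "MB resident if only touched pages must be resident")  # 652 MB
```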
AMD only has a small advantage there; Nvidia worked on HBM first, decided not to use it, and then worked with HBM2 (well... to reduce costs they need it to be sold to everyone 🙂 ).
Hmm, not sure where you got that... AMD started working on developing HBM far earlier than you seem to think. In fact, their patents and research papers predate it even being called HBM. Nvidia, on the other hand, was showing interest in HMC, but wasn't even part of the HMC consortium (Micron, Samsung, Xilinx, Altera, etc.) at the start. (And if their member list is to be believed, they either still aren't or no longer are.) HMC and HBM are two different things. HMC: https://en.wikipedia.org/wiki/Hybrid_Memory_Cube - HMC consortium: http://www.hybridmemorycube.org/about.html - HBM: https://en.wikipedia.org/wiki/High_Bandwidth_Memory
The development of High Bandwidth Memory began at AMD in 2008 to solve the problem of ever increasing power usage and form factor of computer memory. Amongst other things AMD developed procedures to solve the die stacking problems with a team led by Senior AMD Fellow Bryan Black. Partners from the memory industry (SK Hynix), interposer industry (UMC) and packaging industry (Amkor Technology and ASE) were obtained to help AMD realize their vision of HBM. High volume manufacturing began at a Hynix facility in Icheon, Korea in 2015. HBM has been adopted as industry standard JESD235 by JEDEC as of October 2013 following a proposal by AMD and SK Hynix in 2010. The first chip utilizing HBM is AMD Fiji which was released in June 2015 powering the AMD Radeon R9 Fury X. HBM2 was accepted by JEDEC as standard JESD235a in January 2016. The first GPU chip utilizing HBM2 is the Nvidia Tesla P100 which was officially announced in April 2016.
Increased framerates when you run out of VRAM isn't something I'd consider a must-have - just give the card enough VRAM in the first place. Regardless, I think everyone knows HBM2 has benefits; the question is whether those benefits are worth the increased cost, potential delays, etc. that seem to stem from using it. In gaming, I personally don't see the value.
Those benefits are worth it 100% for mobile devices and anything where size and power consumption matter.
Probably a smart move on Nvidia's part, but it's also smart for AMD to use HBM2. If you look at what Vega is and what Navi is planned to be, HBM2 plays a big role in their plans, so AMD has to push forward with it. Vega is an SoC with their Infinity Fabric implemented. I very much expect that with the 7nm Navi planned for next year we will see AMD take a Ryzen-like approach and start "gluing" GPUs together to function as one unit, with no need for CrossFire and the driver profiles that go with it. They will have very high bandwidth needs when they start sharing the HBM2 across multiple GPUs. I'm not certain, but pretty sure this is why they designed Vega the way they did: it's a stepping stone to a die shrink, using the Infinity Fabric to tie more than one GPU together, and also to make APUs easier.
So, an August launch for RX Vega reference models, which probably means September for AIB models. The same month in which Nvidia will blitz-release the GTX 2080/70 to piss yet again on AMD's parade. Gotcha.
Nvidia... after shaving us for decades, has enough money to build their own fabs and license mass production of their own HBM tech 😀 It was fully expected that Nvidia wouldn't hurry towards HBM. It was also mentioned that GDDR5X can run at 16 Gbps... That will be Nvidia's next step for gaming GPUs. They chose to reduce memory cost, but they can also fit plenty of VRAM chips for a relatively good price.
Since 4K will obviously be the target for the upcoming Volta cards, NVidia had better optimize memory I/O even further if they stick to GDDR. MSAA seems to be dead, so they will probably get away with this just fine.
MSAA is not dead. Engines simply still use it. 4K or SSAA doesn't give smooth edges as cheaply/fast as MSAA does.
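Rough shading-cost numbers behind that claim, at 4K with 4x samples (assumed figures; MSAA's extra work on edge pixels is ignored):

```python
# Why 4x MSAA is cheaper than 4x SSAA at 4K: SSAA shades every sample,
# MSAA shades roughly once per pixel and only multi-samples coverage/depth.

pixels_4k = 3840 * 2160       # ~8.3 M pixels

ssaa_shaded = pixels_4k * 4   # ~33.2 M fragment shader invocations per frame
msaa_shaded = pixels_4k * 1   # ~8.3 M (plus a small extra at geometry edges)

print(f"4x SSAA shades ~{ssaa_shaded / 1e6:.1f} M samples")
print(f"4x MSAA shades ~{msaa_shaded / 1e6:.1f} M samples")
```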
Those benefits are worth it 100% for mobile devices and anything where size and power consumption matter.
But for PCs, it's mostly pure performance that matters.
Maybe you meant to say TAA, which is post-process temporal AA. It's not bad at 4K. It is a bit blurry, but at this point, it's the only thing that gets rid of shimmering, so I'll take it.
TAA is what they use at the top end of Doom? That's probably the best AA I've ever seen in a game. It got rid of all the jaggies and shimmer, with almost no performance impact. I was very impressed with it.