Upcoming GeForce GTX Volta cards use GDDR5X, not HBM2

Nope. TXAA is MSAA with temporal filters on top. Games that can't support MSAA also can't support TXAA. If MSAA is dead, so is TXAA. Maybe you meant to say TAA, which is post-process temporal AA. It's not bad at 4K. It is a bit blurry, but at this point, it's the only thing that gets rid of shimmering, so I'll take it.
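For context, here is a minimal sketch of the exponential history blend at the heart of most post-process TAA resolves. It is illustrative only: real implementations reproject the history with motion vectors and clamp it against the current pixel's neighbourhood, and the `blend_factor` value is just an assumed typical choice.

```python
import numpy as np

def taa_resolve(current: np.ndarray, history: np.ndarray,
                blend_factor: float = 0.1) -> np.ndarray:
    """Blend the current frame into an accumulated history buffer.

    current / history: float arrays of shape (H, W, 3), values in [0, 1].
    blend_factor:      weight of the new frame; small values smooth out
                       shimmering but also cause the blur people complain about.
    """
    # Exponential moving average over time: most of the output comes from
    # history, a little from the new frame. Real TAA first reprojects the
    # history with per-pixel motion vectors and clamps it to the colour range
    # of the current pixel's neighbourhood to limit ghosting and smearing.
    return blend_factor * current + (1.0 - blend_factor) * history
```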
Really, when we start to push 4K/5K/8K in games in the future we won't need AA any more. It made a lot more sense in games at 1080p and below, when the edges were still rough and blocky. These days, even running games at 1440p, I find that AA isn't really needed, and with the future going towards higher resolutions such as 4K+, I think AA might just stop being a thing, or at least become something that isn't needed as much.
Probably a smart move on Nvidia's part, but also on AMD's to use HBM2. If you look at what Vega is and what Navi is planned to be, HBM2 plays a big role in their plans, so AMD has to push forward with it. Vega is an SoC with their Infinity Fabric implemented. I very much expect that with the 7nm Navi planned for next year we will see AMD take a Ryzen-like approach and start "gluing" GPUs together to function as one unit, with no need for CrossFire and the driver profiles that go with it. They will have very high bandwidth needs once they start sharing the HBM2 across multiple GPUs. I'm not certain, but I'm pretty sure this is why they designed Vega the way they did: it's a stepping stone to a die shrink, using the Infinity Fabric to tie more than one GPU together, and also to make APUs easier.
This is most likely what's going to happen, and I can also see a Ryzen-level upset happening on the desktop with Navi, but that will have to wait at least until 2019. If we talk about the here and now, the memory controller will probably cause much less microstutter in open-world games, even when not pressed for VRAM. We'll have to wait and see, though.
Really, when we start to push 4K/5K/8K in games in the future we won't need AA any more. It made a lot more sense in games at 1080p and below, when the edges were still rough and blocky. These days, even running games at 1440p, I find that AA isn't really needed, and with the future going towards higher resolutions such as 4K+, I think AA might just stop being a thing, or at least become something that isn't needed as much.
Screen size matters a lot. Even at 4K+, AA is good to have. There is 4K at 27", and there is 4K at 65". I can agree that 27" doesn't need any type of AA, but the same rule won't work for, let's say, 50", because there are benefits to using 2x/4x MSAA. 8K is an extreme sport right now 🙂
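To put rough numbers on that, here's a quick pixels-per-degree sketch. The viewing distance is an assumption for illustration (same desk distance for both sizes), and the often-quoted ~60 PPD threshold, which comes from the 1-arcminute acuity rule of thumb, is a guideline rather than a hard limit.

```python
import math

def pixels_per_degree(diagonal_in: float, res_w: int, res_h: int,
                      distance_in: float) -> float:
    """Horizontal pixels per degree of visual angle for a 16:9 panel."""
    width_in = diagonal_in * 16 / math.hypot(16, 9)   # physical panel width
    pixel_in = width_in / res_w                        # width of one pixel
    # Visual angle subtended by a single pixel, then invert to get PPD.
    deg_per_pixel = math.degrees(2 * math.atan(pixel_in / (2 * distance_in)))
    return 1 / deg_per_pixel

# Assumed 30" viewing distance for both panels, purely illustrative:
print(pixels_per_degree(27, 3840, 2160, 30))   # ~85 PPD: aliasing is hard to spot
print(pixels_per_degree(50, 3840, 2160, 30))   # ~46 PPD: edges start to crawl again
```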
Screen size matters a lot. Even at 4K+, AA is good to have. There is 4K at 27", and there is 4K at 65". I can agree that 27" doesn't need any type of AA, but the same rule won't work for, let's say, 50", because there are benefits to using 2x/4x MSAA. 8K is an extreme sport right now 🙂
Yes, I agree, bigger screens might still make use of it, but I don't think many PC gamers run above 32" for their gaming screens... or if they do, they're often at sofa distance, which makes it the same difference.
4k ssaa ftw 😀
I was under the impression that 4K doesn't really need much AA. I'm sticking with 1080p/1440p for at least another two years anyway, so it doesn't matter for me. If I were Nvidia, I'd wait for GDDR6 for GeForce Volta cards, especially the xx80/xx70. I don't care if the rest have GDDR5/X, but the Volta enthusiast cards should use GDDR6. Why? Because I believe GDDR6 will be a good selling point.
If I were Nvidia, I'd wait for GDDR6 for GeForce Volta cards, especially the xx80/xx70. I don't care if the rest have GDDR5/X, but the Volta enthusiast cards should use GDDR6. Why? Because I believe GDDR6 will be a good selling point.
All anyone cares about is frame rates. If the reviews show a big jump in frame rate while temps remain manageable, that's all that matters. Only a handful of people care about the talking points, but they'll buy the new products anyway. They just need a similar step up in power over Pascal, like they did with the last two cycles. If they can keep up expectations, they'll continue to own the market.
I was under the impression that 4K doesn't really need much AA. I'm sticking with 1080p/1440p for at least another two years anyway, so it doesn't matter for me. If I were Nvidia, I'd wait for GDDR6 for GeForce Volta cards, especially the xx80/xx70. I don't care if the rest have GDDR5/X, but the Volta enthusiast cards should use GDDR6. Why? Because I believe GDDR6 will be a good selling point.
That would be a good selling point for the Ti/Titan versions (and for an updated version, as today with the xx80/xx70)... and indeed, especially if they can have GV104 parts out before AMD's new GPUs arrive (Vega 20, etc.).
It's probably cheaper, plus AMD has been working with HBM for a while now, as we know. Nvidia is probably still getting their feet wet with it. Kind of reminds me of a certain war between two optical storage formats just a few years back...
NVIDIA have had HBM2 cards for a while now. Currently it's very clear that it doesn't make much sense to use it on consumer cards. It's not worth it just yet.
Since 4K will obviously be the target for the upcoming Volta cards, NVidia had better optimize memory I/O even further if they stick to GDDR. MSAA seems to be dead, so they will probably get away with this just fine.
MSAA by itself is useless with modern rendering. Unless it's used as part of a bigger technique, it should stay dead.
I wish FXAA would go find a quiet corner to die in.
At least FXAA doesn't introduce horrible artifacts into the image, or smear it into oblivion as soon as there is movement, like many TAA solutions do. FXAA has a slight softness; most TAA flat-out makes the image smudgy without a ton of SSAA on top. With oversampling (SSAA/downsampling), the negative effects of FXAA are completely invisible. SMAA 1x doesn't really do a better job than FXAA at native resolution aside from a sharpness advantage; it is equally bad with subpixel information. And again, with SSAA on top there is almost no difference between the two.
Meh, I thought we would see some date for next-gen cards by now...
No point in using anything but GDDR5X. May as well save costs for everybody, including Nvidia. If the memory is fast enough for the next series... may as well keep it. Just because the next card doesn't say GDDR6 on it doesn't mean it won't be worthwhile.
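For reference, peak bandwidth is just bus width times per-pin data rate, and current GDDR5X already lands in the same ballpark as the consumer HBM2 part. The figures below are roughly the published specs of the GTX 1080 Ti, RX Vega 64 and Tesla V100, quoted from memory, so treat the numbers as illustrative.

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: number of pins * per-pin rate / 8 bits per byte."""
    return bus_width_bits * data_rate_gbps / 8

# GTX 1080 Ti: 352-bit GDDR5X at 11 Gbps per pin
print(peak_bandwidth_gb_s(352, 11.0))    # ~484 GB/s

# RX Vega 64: 2048-bit HBM2 at ~1.89 Gbps per pin
print(peak_bandwidth_gb_s(2048, 1.89))   # ~484 GB/s

# Tesla V100: 4096-bit HBM2 at ~1.75 Gbps per pin
print(peak_bandwidth_gb_s(4096, 1.75))   # ~900 GB/s
```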
I'm happy with whatever memory technology nvidia runs with provided it doesn't bottleneck the rest of the subsystem.
No point in using anything but GDDR5X. May as well save costs for everybody, including Nvidia. If the memory is fast enough for the next series... may as well keep it. Just because the next card doesn't say GDDR6 on it doesn't mean it won't be worthwhile.
You could ask why they use it on the high-end professional SKUs, then... Obviously for Nvidia the choice came down to cost (their margin) and surely availability (securing supply).
Yeah, AMD put themselves in a corner with HBM. On one hand it's a great technology, but it still needs time to come down in cost. On the other hand, if they, as its creator, ditched it altogether, I really doubt anyone would ever care for it. That's also why they don't collect royalties on it, even though they own most of the patents for it. Also, I don't think AMD has enough money to split their chip design across two different types of memory with different memory controllers. So it will probably be a few years until AMD really sees a benefit from HBM, at least with gaming cards.
Yeah, AMD put themselves in a corner with HBM. On one hand it's a great technology, but it still needs time to come down in cost. On the other hand, if they, as its creator, ditched it altogether, I really doubt anyone would ever care for it. That's also why they don't collect royalties on it, even though they own most of the patents for it. Also, I don't think AMD has enough money to split their chip design across two different types of memory with different memory controllers. So it will probably be a few years until AMD really sees a benefit from HBM, at least with gaming cards.
Memory has to get closer to the cores to get the pJ/bit down. AMD is ahead of the curve with HBM, even though using it isn't strictly necessary for some GPUs. Nvidia knows that, which is why their P100, V100 and Quadro GP100 all use it. It's also why every pre-exascale supercomputer architecture I can think of uses some form of advanced memory, be it HMC or HBM. Processor-in-memory (PIM) is also taking off.
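To make the pJ/bit point concrete, here's a back-of-the-envelope sketch. The energy-per-bit figures are commonly quoted ballpark numbers (roughly 18-22 pJ/bit for GDDR5-class DRAM plus I/O versus 6-8 pJ/bit for HBM), not measurements, and the result assumes fully sustained bandwidth, so treat it as illustrative only.

```python
def memory_power_watts(bandwidth_gb_s: float, energy_pj_per_bit: float) -> float:
    """Power spent moving data: bytes/s * 8 bits/byte * joules/bit."""
    bits_per_second = bandwidth_gb_s * 1e9 * 8
    return bits_per_second * energy_pj_per_bit * 1e-12

# Assumed ballpark figures at ~484 GB/s sustained, for illustration only:
print(memory_power_watts(484, 20))  # GDDR5-class at ~20 pJ/bit -> ~77 W
print(memory_power_watts(484, 7))   # HBM/HBM2 at ~7 pJ/bit     -> ~27 W
```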
Memory has to get closer to the cores to get the pJ/bit down. AMD is ahead of the curve with HBM, even though using it isn't strictly necessary for some GPUs. Nvidia knows that, which is why their P100, V100 and Quadro GP100 all use it. It's also why every pre-exascale supercomputer architecture I can think of uses some form of advanced memory, be it HMC or HBM. Processor-in-memory (PIM) is also taking off.
I fully agree. My point is that AMD doesn't have the money to split their design between compute and gaming GPUs, and they don't want to ditch HBM because it would never take off otherwise.
Given that Nvidia have been smashing AMD in the performance department for about 15 years straight... I don't think this will be a problem. If the price-to-performance ratio is off, then I will wait longer to upgrade, but given that I am gaming at 4K with a 1080 Ti at 60 frames in almost all games, I don't think the memory bandwidth will be a problem... although performance clearly responds better to memory overclocking than to core clock increases.
Increased framerates when you run out of VRAM is not something I'd consider a must-have - just give the card enough VRAM in the first place. Regardless, I think everyone knows HBM2 has benefits; the question is whether those benefits are worth the increased cost, potential delays, etc. that seem to stem from using it. In gaming, I personally don't see the value.
Agreed. How often do we actually run out of VRAM? Even sitting on an ancient card that has 3.5GB-4GB of available VRAM, the VRAM amount is rarely ever the limiting factor; the brute force of the GPU core is usually the issue.
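Quick bit of arithmetic on why the core tends to run out before VRAM does: the render targets themselves are small even at 4K, and the rest of the budget is mostly textures and meshes. The buffer list below is a generic assumption for a deferred renderer, not any specific engine.

```python
def target_mib(width: int, height: int, bytes_per_pixel: int) -> float:
    """Size of one full-screen render target in MiB."""
    return width * height * bytes_per_pixel / 2**20

# Assumed generic deferred-renderer targets at 3840x2160 (bytes per pixel):
buffers = {
    "colour (RGBA16F)": 8,
    "depth/stencil":    4,
    "normals (RGBA8)":  4,
    "albedo (RGBA8)":   4,
    "motion vectors":   4,
}
total = sum(target_mib(3840, 2160, bpp) for bpp in buffers.values())
print(f"{total:.0f} MiB")   # ~190 MiB of render targets out of a multi-GB card
```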
Given that Nvidia have been smashing AMD in the performance department for about 15 years straight...
That's definitely not what I remember. One of us has memory issues. :wanker:
MSAA by itself is useless with modern rendering. Unless it's used as part of a bigger technique, it should stay dead. At least FXAA doesn't introduce horrible artifacts into the image, or smear it into oblivion as soon as there is movement, like many TAA solutions do. FXAA has a slight softness; most TAA flat-out makes the image smudgy without a ton of SSAA on top. With oversampling (SSAA/downsampling), the negative effects of FXAA are completely invisible. SMAA 1x doesn't really do a better job than FXAA at native resolution aside from a sharpness advantage; it is equally bad with subpixel information. And again, with SSAA on top there is almost no difference between the two.
My experience with FXAA (and everything other than MSAA or SSAA) has been that I'd rather run without it. If I was going to run SSAA I'd not bother with FXAA 🤓.
Given that Nvidia have been smashing AMD in the performance department for about 15 years straight... I don't think this will be a problem. If the price-to-performance ratio is off, then I will wait longer to upgrade, but given that I am gaming at 4K with a 1080 Ti at 60 frames in almost all games, I don't think the memory bandwidth will be a problem... although performance clearly responds better to memory overclocking than to core clock increases.
What now :3eyes:? They've held the crown for a while now but it's not been a 15 year run.