Could Nvidia be prepping a Volta release?

Which is a good OC from reference spec, but definitely not on the level you can achieve when overclocking a 980 Ti properly (a 980 Ti can still be overclocked to a level that basically beats a maxed-out 1070), and most of the gains for Pascal come from the memory OC anyway. I run mine at 2050MHz core and 11GHz memory, which is just fine. Still, I feel that Pascal has terrible IPC; a 980 Ti at 2.1GHz would blow its face off if that were possible on air.
Yeah, if I had a 1500MHz 980 Ti I think I'd stay on it until the next generation (as in Volta).
Lol! Well, if this is true then hey, for all the AMD guys this is a good thing, as Vega will by no means be competitive against Volta. That means Vega will be the new midrange king of the hill, and AMD will, for the 2nd year in a row, have nothing against Nvidia in the enthusiast class.
If this were a few years back, I wouldn't have been surprised to see Volta 3 months after the GTX 1080 Ti was released. However, how soon consumer Volta comes will entirely depend on how good Vega is. If Vega is competitive, then we will see price cuts on the current line-up first. If Vega is significantly better, then Volta will be fall 2017 at the latest.
Aside from whether Volta is being released or not: does anyone remember that Nvidia historically needed less memory bandwidth than AMD to feed an equally powerful GPU? What kind of powerful GPU needs 6 x 250GB/s (1500GB/s) of bandwidth on Nvidia's side? Just saying that the image likely shows the mentioned GV100, and that it is not going to be a gaming-class GPU.
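The bandwidth figure above is simple multiplication, but worth making explicit; note the ~250GB/s per stack is the thread's reading of the image, not a confirmed spec:

```python
# Aggregate bandwidth from multiple HBM2 stacks.
# Per-stack throughput is an assumption from the thread, not an official figure.
stacks = 6
per_stack_gbps = 250  # GB/s per stack (assumed)

total_gbps = stacks * per_stack_gbps
print(f"Aggregate bandwidth: {total_gbps} GB/s")  # 1500 GB/s
```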
Aside from whether Volta is being released or not: does anyone remember that Nvidia historically needed less memory bandwidth than AMD to feed an equally powerful GPU? What kind of powerful GPU needs 6 x 250GB/s (1500GB/s) of bandwidth on Nvidia's side? Just saying that the image likely shows the mentioned GV100, and that it is not going to be a gaming-class GPU.
GV100 should be around 900GB/s. The whole point of a new arch is increased perf/Watt. You won't get there if you squander half your power budget on a bazillion GB/s of HBM2.
Really, as much as Ryzen is a done deal for me, on the GPU side I think it's going to be a tough call this year. I feel like upgrading my 5820K + 980 Ti this year.
Why would you do that, especially the CPU? Even your GPU is still high-end.
Launch in 2017, what? I thought it was due in spring 2018?
Oh man, can't wait for 1180 ti to release!!1one
GV100 should be around 900GB/s. The whole point of a new arch is increased perf/Watt. You won't get there if you squander half your power budget on a bazillion GB/s of HBM2.
We actually have no clue how they plan to get increased performance/Watt over an architecture that is already pretty damn efficient in the first place. Maybe it's the bazillion GB/s of HBM2 that does it, maybe not. For all intents and purposes though, an HBM card with something like 32GB of memory in four stacks is going to use much less power than the GDDR5X equivalent.
We actually have no clue how they plan to get increased performance/Watt over an architecture that is already pretty damn efficient in the first place. Maybe it's the bazillion GB/s of HBM2 that does it, maybe not.
That most certainly isn't. https://abload.de/img/nvidia-hbm-memory-cri9ipva.jpg
Yeah, if I had a 1500MHz 980 Ti I think I'd stay on it until the next generation (as in Volta).
I upgraded from dual GTX 980s (which are far more powerful than a single 980 Ti) to a single Pascal Titan X, and the single Titan X blows away my 980 SLI setup. A single Titan X is around 3x more powerful than a single 980, which puts it at around 2x a single 980 Ti. The card is a BEAST, and the 1080 Ti won't be far behind performance-wise. So YES, the 1080 Ti will be a VERY decent upgrade from the 980 Ti. And in which world does a 1600MHz-to-2100MHz overclock register as a crappy OC? It allows me to play 95% of current AAA titles at 4K Ultra settings. I think you need to consider your statements more before posting. Speaking as someone who ACTUALLY owns the hardware, I think my statements are valid.
I upgraded from dual GTX 980s (which are far more powerful than a single 980 Ti) to a single Pascal Titan X, and the single Titan X blows away my 980 SLI setup. A single Titan X is around 3x more powerful than a single 980, which puts it at around 2x a single 980 Ti. The card is a BEAST, and the 1080 Ti won't be far behind performance-wise. So YES, the 1080 Ti will be a VERY decent upgrade from the 980 Ti. And in which world does a 1600MHz-to-2100MHz overclock register as a crappy OC? It allows me to play 95% of current AAA titles at 4K Ultra settings. I think you need to consider your statements more before posting. Speaking as someone who ACTUALLY owns the hardware, I think my statements are valid.
The sore point about the Titan XP is the obviously inflated price that won't even give you extra compute features, not the performance.
I upgraded from dual GTX 980s (which are far more powerful than a single 980 Ti) to a single Pascal Titan X, and the single Titan X blows away my 980 SLI setup. A single Titan X is around 3x more powerful than a single 980, which puts it at around 2x a single 980 Ti. The card is a BEAST, and the 1080 Ti won't be far behind performance-wise. So YES, the 1080 Ti will be a VERY decent upgrade from the 980 Ti. And in which world does a 1600MHz-to-2100MHz overclock register as a crappy OC? It allows me to play 95% of current AAA titles at 4K Ultra settings. I think you need to consider your statements more before posting. Speaking as someone who ACTUALLY owns the hardware, I think my statements are valid.
A ~21% increase in performance is not a good upgrade, especially for the price, lol. A 1500MHz Ti will do ~21500 on Fire Strike; you have to consider that his OC at 1500MHz is significant. Mine, benched at 1600MHz, scored ~23400 on 3DMark, while a Titan XP at stock scored ~27900. That's a ~19% increase; tell us how that is a decent upgrade, especially considering that ridiculous price. Besides, even a 2100MHz TXP is pretty weak for 4K. Nothing out there is fast enough.
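Taking the quoted Fire Strike scores at face value, the uplifts can be recomputed directly (the scores are user-reported, so treat the exact percentages loosely):

```python
def uplift_pct(base: float, new: float) -> float:
    """Relative performance gain of `new` over `base`, in percent."""
    return (new - base) / base * 100

# Scores quoted in the thread (user-reported, not official benchmarks).
ti_1500 = 21500    # 980 Ti @ ~1500 MHz, Fire Strike
ti_1600 = 23400    # 980 Ti @ ~1600 MHz, 3DMark
txp_stock = 27900  # Titan X (Pascal) at stock

print(f"1500MHz Ti -> stock TXP: {uplift_pct(ti_1500, txp_stock):.1f}%")  # ~29.8%
print(f"1600MHz Ti -> stock TXP: {uplift_pct(ti_1600, txp_stock):.1f}%")  # ~19.2%
```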
ridiculous price.
That's pretty much been the story, IMO, since the 10 series.
Nobody has 10nm yet, so it's either a paper launch or it's not really Volta.
Volta isn't going to be on 10nm. Nvidia changed plans a while back because 16nm still has headroom, and if history is anything to go by, 10nm will be late and, early on, very expensive with low yields. https://www.pcgamesn.com/nvidia/nvidia-volta-release-early
That most certainly isn't. [spoiler]https://abload.de/img/nvidia-hbm-memory-cri9ipva.jpg[/spoiler]
What a ridiculous graph. Are they spelling doom and gloom because they calculated that HBM2 would need 160W to reach 4000GB/s? Secondly, they even have the HBM1 values wrong: HBM1 has a peak power consumption of 18W at 512GB/s, while their graph shows 40W at ~512GB/s.
Real HBM1: 512GB/s @ 18W => 28.4GB/s per Watt
Their HBM1: 512GB/s @ 40W => 12.8GB/s per Watt
(GDDR5 today delivers about 12GB/s per Watt.)
As for their HBM2 approximation:
1000GB/s @ 62W => 16GB/s per Watt
2000GB/s @ 125W => 16GB/s per Watt
1.5x-efficient HBM2: 4000GB/s @ 160W => 25GB/s per Watt
Now, a counter-slide from AMD showing a much different transfer speed per Watt (spoiler alert, the image is huge): [spoiler]http://images.anandtech.com/doci/9266/HBM_10_Energy.png[/spoiler]
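The GB/s-per-Watt figures above are easy to re-derive from the bandwidth/power pairs quoted in the post:

```python
# Recompute bandwidth-per-Watt from the (GB/s, W) pairs quoted in the post.
cases = {
    "real HBM1":             (512, 18),
    "graph's HBM1":          (512, 40),
    "graph's HBM2 @ 1TB/s":  (1000, 62),
    "graph's HBM2 @ 2TB/s":  (2000, 125),
    "1.5x-efficient HBM2":   (4000, 160),
}

for name, (gbps, watts) in cases.items():
    print(f"{name}: {gbps / watts:.1f} GB/s per Watt")
```

This reproduces the 28.4 vs 12.8 GB/s-per-Watt gap between the claimed real HBM1 numbers and the graph's.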
What a ridiculous graph. Are they spelling doom and gloom because they calculated that HBM2 would need 160W to reach 4000GB/s? Secondly, they even have the HBM1 values wrong: HBM1 has a peak power consumption of 18W at 512GB/s, while their graph shows 40W at ~512GB/s.
Eh... any evidence they are talking about the same thing? And it's ridiculous as soon as it doesn't fit the "overclocker's dream" company's data? Also, you call their roadmaps terrible and inconsistent (in that other thread)? A bit harsh, IMHO... I would hate to see what would become of AMD if Nvidia had half-decent roadmaps and their experts had any clue about the technology they are using.
Eh... any evidence they are talking about the same thing? And it's ridiculous as soon as it doesn't fit the "overclocker's dream" company's data? Also, you call their roadmaps terrible and inconsistent (in that other thread)? A bit harsh, IMHO... I would hate to see what would become of AMD if Nvidia had half-decent roadmaps and their experts had any clue about the technology they are using.
Take it this way: if Nvidia is right and 512GB/s of HBM eats 40W, then AMD is a band of fools, as they saved merely 20W by not using GDDR5. But still, my HBM tells a different story. And AMD's materials tell a different story. And as far as my memory goes, JEDEC tells a story very similar to that of my card and AMD's.