Nvidia announces Turing architecture for Quadro RTX 8000, 6000, 5000 GPUs

Looking forward to next week!
Right, I said it will have Tensor cores. 😉
Pricing looks great.... 10K for the RTX8000!! gotta love competition
A lot of hype... but man, if this lives up to the hype, AMD have a mountain to climb to catch up...
It's so difficult for me to know how much of this is legitimately amazing vs Huang just spewing superlatives and patting himself on the back. I'm not questioning whether Turing will be a great architecture; I know it will be. But I don't think it's as revolutionary as he makes it out to be, particularly when it comes to existing hardware (let alone affordability).
HardwareCaps:

Pricing looks great.... 10K for the RTX8000!! gotta love competition
Isn't it the greatest? But hey, at least the AMD CPU department is actually worth a damn again. I still think "ATI" cards were better competition back when they were ATI, but then again, what do I know.
schmidtbag:

It's so difficult for me to know how much of this is legitimately amazing vs Huang just spewing superlatives and patting himself on the back. I'm not questioning whether Turing will be a great architecture; I know it will be. But I don't think it's as revolutionary as he makes it out to be, particularly when it comes to existing hardware (let alone affordability).
Idk the entire "revolutionary" bit is related to Raytracing - other than that it's just more of the same with some slight architecture tweaks. I don't really find the word revolutionary synonymous with affordable.

If Raytracing catches on in games (which I think it will) then obviously Nvidia is ahead of the pack, not only because they basically pushed Microsoft to do DXR (so they know it best), but because they've been working on denoising algorithms (the thing that makes real-time raytracing with only a few rays per pixel viable) for the last half decade, and they are obviously the first to put dedicated hardware in a GPU that can accelerate DXR. For game developers DXR is a no-brainer, as it considerably reduces the workload on artists who otherwise have to come up with rendering hacks, and it can potentially significantly increase image quality. I assume the same will go for whatever the Vulkan equivalent of DXR is.

It will take time for game engines to start including DXR functionality, and perhaps a generation or two before it's considered "mainstream", but I think it's definitely the future, and if Nvidia's architecture accelerates it then I think it's safe to say that what they are doing here is revolutionary, even though it may not lead to immediate benefits.
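To make the "rendering hacks" point concrete: the core of a ray-traced shadow is nothing more than a visibility query along a ray toward the light, and that traversal/intersection work is what the dedicated RT hardware is meant to accelerate. A minimal CPU-side sketch of that query (plain C++ against an invented scene of spheres, not real DXR shader code; every name here is made up for illustration):

```cpp
// Toy illustration of a hard-shadow test done "the ray tracing way".
// In DXR the same kind of query runs in a shader against a hardware-built
// acceleration structure; here it is plain C++ over a list of spheres.
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 center; double radius; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// True if the ray (origin, unit dir) hits the sphere closer than maxDist.
static bool hitsSphere(Vec3 origin, Vec3 dir, const Sphere& s, double maxDist) {
    Vec3 oc = sub(origin, s.center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * c;   // dir is unit length, so the quadratic's a == 1
    if (disc < 0.0) return false;
    double t = (-b - std::sqrt(disc)) / 2.0;
    return t > 1e-4 && t < maxDist;
}

// A point is shadowed if anything blocks the ray between it and the light.
// No shadow maps, no per-light resolution/bias tuning: one visibility query.
static bool inShadow(Vec3 point, Vec3 lightPos, const std::vector<Sphere>& scene) {
    Vec3 toLight = sub(lightPos, point);
    double dist = std::sqrt(dot(toLight, toLight));
    Vec3 dir = normalize(toLight);
    for (const Sphere& s : scene)
        if (hitsSphere(point, dir, s, dist)) return true;
    return false;
}

int main() {
    std::vector<Sphere> scene = {{{0.0, 1.0, 0.0}, 1.0}};  // a single occluder
    Vec3 light = {0.0, 5.0, 0.0};
    Vec3 shadedPoint = {0.0, -1.0, 0.0};                   // directly below it
    std::printf("shadowed: %s\n", inShadow(shadedPoint, light, scene) ? "yes" : "no");
    return 0;
}
```

The contrast is with shadow maps, where resolutions, bias values and cascades get hand-tuned per scene - that per-scene tuning is the artist workload the post above refers to.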
Denial:

Idk the entire "revolutionary" bit is related to Raytracing - other than that it's just more of the same with some slight architecture tweaks. I don't really find the word revolutionary synonymous with affordable.
I didn't suggest revolutionary was synonymous with affordability... I'm saying I think Huang is exaggerating how great Turing is, and something becomes a lot less impressive or interesting once you find out how expensive it is.
If Raytracing catches on in games (which I think it will) then obviously Nvidia is ahead of the pack, not only because they basically pushed Microsoft to do DXR (so they know it best), but because they've been working on denoising algorithms (the thing that makes real-time raytracing with only a few rays per pixel viable) for the last half decade, and they are obviously the first to put dedicated hardware in a GPU that can accelerate DXR. For game developers DXR is a no-brainer, as it considerably reduces the workload on artists who otherwise have to come up with rendering hacks, and it can potentially significantly increase image quality. I assume the same will go for whatever the Vulkan equivalent of DXR is.
If raytracing really takes off, it's going to take a long while. It all comes down to how it's implemented. If it ends up being too demanding on hardware without tensor cores, it's going to suffer the same fate as Physx. If current or next-gen consoles don't use DXR, very few PC games will either. So yes, if raytracing becomes a mainstream reality, Nvidia will have a major head-start. But that's a really big "if".
schmidtbag:

If raytracing really takes off, it's going to take a long while. It all comes down to how it's implemented. If it ends up being too demanding on hardware without tensor cores, it's going to suffer the same fate as Physx. If current or next-gen consoles don't use DXR, very few PC games will either. So yes, if raytracing becomes a mainstream reality, Nvidia will have a major head-start. But that's a really big "if".
AMD can already accelerate DXR with mixed math on Vega. Mixed math will most likely be on all cards going forward, mainstream and high-end, for both companies - the acceleration from mixed math won't be to the same degree as tensor cores, but it will still be faster than FP32. I imagine games will have toggles for the various effects that utilize DXR, so depending on the level of acceleration some cards will be able to flip everything on and others won't (a rough capability-check sketch follows at the end of this post). Eventually, once mainstream cards all have proper acceleration to some industry-defined level, all games will utilize DXR for their lighting systems. Raytracing just simplifies the process so much that it's a no-brainer. It's not really an if, it's a when - you need the hardware to write the software.
DW75:

The 2080 is going to be insanely expensive. I bet we will be looking at an MSRP of $499 for the RTX 2070, and $699 for the RTX 2080. Miners and third-party sellers will cause prices to skyrocket to $900 and $1200 right after release, though.
If the 2080 is based on GV102 then it's as physically big as Nvidia can build a card, in which case it's replacing the Ti model and there will be no Ti unless it's on some kind of die-shrink refresh. $700 for a Ti replacement doesn't seem that insane - the same as the 1080 Ti at launch and only $50 more than the 980 Ti.
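On the "toggles depending on the level of acceleration" idea a few posts up: DXR already reports a raytracing tier that an engine can query at startup and use to decide which effects to switch on. A rough sketch of that check, assuming Windows 10 / D3D12 with SDK 17763 or newer and a link against d3d12.lib; the per-effect flags at the end are hypothetical names for illustration, not any real engine API:

```cpp
// Sketch: gate ray-traced effects on the DXR tier the driver reports.
// Build note: Windows 10 SDK 17763+ and d3d12.lib are required.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    // Ask the driver which raytracing tier (if any) this GPU supports.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    bool dxrSupported = false;
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))) {
        dxrSupported = opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
    }

    // Hypothetical per-effect toggles; a real game would also weigh measured
    // performance, not just the support flag.
    bool rtShadows     = dxrSupported;
    bool rtReflections = dxrSupported;
    std::printf("DXR tier >= 1.0: %s -> RT shadows %s, RT reflections %s\n",
                dxrSupported ? "yes" : "no",
                rtShadows ? "on" : "off",
                rtReflections ? "on" : "off");
    return 0;
}
```

Presumably the same kind of capability check will exist for whatever the Vulkan equivalent of DXR ends up being.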
alanm:

I wonder if he's not happy at Intel and looking for a new job at Nvidia.
Too early for that. He's still checking out the quality of the office ladies 🙂 And being a sport when it comes to competition.
schmidtbag:

It's so difficult for me to know how much of this is legitimately amazing vs Huang just spewing superlatives and patting himself on the back. I'm not questioning whether Turing will be a great architecture; I know it will be. But I don't think it's as revolutionary as he makes it out to be, particularly when it comes to existing hardware (let alone affordability).
If Huang had as many fingers as there are games which benefit from those technologies right now, he would be fingerless. But it is nice that nVidia for once has a HW implementation to support a standard at the time the standard comes out. Historically, it was other companies who invested time and transistors in new technologies, while nVidia held industry development back by not investing.
tsunami231:

Isn't it the greatest? But hey, at least the AMD CPU department is actually worth a damn again. I still think "ATI" cards were better competition back when they were ATI, but then again, what do I know.
ATi consistently delivered better IQ and implemented higher DX levels in HW sooner. One of those striking moments was the GF4 Titanium era. Powerful cards from nVidia, with IQ comparable to ATi... as long as the game was DX8.0 only, because ATI already supported DX8.1 and there were some games using it. And that was not the worst thing: nVidia released tons of DX7-only cards in the GF4 line. That held game development back for at least 2 years, as people would not just replace their new DX7 cards, which performed reasonably well.
Fox2232:

If Huang had as many fingers as there are games which benefit from those technologies right now, he would be fingerless. But it is nice that nVidia for once has a HW implementation to support a standard at the time the standard comes out. Historically, it was other companies who invested time and transistors in new technologies, while nVidia held industry development back by not investing.
New hardware is usually required when raising the bar and introducing a new method of developing and creating graphical realism. Nvidia has never been a slouch regarding investment and has been dedicating close to a third of their revenue to development efforts. The lingering question is whether the competition (new and old) can keep up, since developments building on RT cores and tensor cores will continue to evolve. From industry comments it seems this was the "Holy Grail" that needed to be reached for true realism in gaming.
[youtube=KJRZTkttgLw]
Ray tracing sounds and feels like a gimmick to me. Over my 40 years I have learned to trust my gut. I guess this is easy for me to say because I have no dog in this fight.
pharma:

https://pbs.twimg.com/media/Dkq4XsTVsAAPJRN.jpg
The table above shows one thing clearly: one can use higher quality shadows and AO. But since those effects eat very little computational time in current games, the improvement for current games will be tiny. This means that for nVidia to cash in on those improvements, the games they sponsor will have them set to higher detail levels. (One can just hope there will be an option to set them back to the regular level, so that the previous generation does not lose 30% of its fps just from having AO enabled.)
pharma:

https://pbs.twimg.com/media/DkqK-93UYAEGq8B.jpg:large
But here the numbers are laughable. When the base render cost is under 3ms, you do not want to spend another 28~40ms on AA. Secondly, the improvement in regular SSAA seems to track the base render exactly, so there is no gain there beyond a bigger GPU or higher clocks. That's why ATAA is presented as the next big thing: it is AA built on the part Turing improved, and therefore something which may be used to knock out older HW. The funny thing is that this is also a comparison of 16x SSAA vs 8x ATAA. It's apples to oranges.
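To put those quoted figures into fps terms, using only the rough numbers from the post above (a ~3ms base render plus a 28~40ms AA pass; these are not measurements):

```cpp
// Back-of-the-envelope frame-time math for the AA costs quoted above.
#include <cstdio>

int main() {
    const double base_ms = 3.0;                // "sub 3ms" base render
    const double aa_ms[] = {0.0, 28.0, 40.0};  // no AA vs the quoted AA pass cost
    for (double aa : aa_ms) {
        double frame_ms = base_ms + aa;
        std::printf("base %.1f ms + AA %4.1f ms = %4.1f ms  (~%.0f fps)\n",
                    base_ms, aa, frame_ms, 1000.0 / frame_ms);
    }
    return 0;
}
```

That works out to roughly 330 fps without the AA pass versus about 23-32 fps with it, which is why the cost has to come down before this kind of AA is practical.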