Nvidia announces Turing architecture for GPUs: Quadro RTX 8000, 6000, 5000

Fox2232:

That's why ATAA is presented as the next thing: it is AA based on the thing Turing improved, and therefore something which may be used to knock out older HW.
Some Volta owners are probably under another impression, as the Quadro version isn't even six months old... some of them probably thought nVidia was selling them a top-of-the-line GPU at $9k, not a skipped architecture. 😕 Each architecture may have its own merits, dunno...
Fox2232:

But here, the numbers are laughable. When the basic investment is sub-3ms, you do not want to spend another 28~40ms on AA. Secondly, the improvement in regular SSAA seems to correlate exactly with the base render time, so no improvement there other than a more massive GPU or higher clocks. That's why ATAA is presented as the next thing: it is AA based on the thing Turing improved, and therefore something which may be used to knock out older HW.
The Siggraph presentation was for pre-production film rendering and not gaming - that's why the AA quality is set so high. Also, if you're going to ship a large chunk of a GPU with dedicated raytracing hardware, you might as well build some value-add around it. I think the bigger takeaway from that slide is that Turing is 750mm² and is 50% faster than a Titan V (815mm²), while the Titan V is on average 25-30% faster than a 1080 Ti.
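For context, a quick back-of-the-envelope check of the figures in this exchange (Python; the 60 fps budget is an assumed gaming scenario, not from the slide - the input numbers come from the posts above):

    # Frame budget: why 28-40ms of AA is film territory, not gaming
    frame_budget_ms = 1000 / 60          # ~16.7 ms per frame at 60 fps
    base_render_ms = 3                   # "basic investment is sub-3ms"
    for aa_cost_ms in (28, 40):
        total = base_render_ms + aa_cost_ms
        print(f"AA at {aa_cost_ms}ms -> {total}ms/frame, "
              f"{1000 / total:.0f} fps")  # ~32 fps and ~23 fps

    # Implied speedup: Turing ~1.5x Titan V, Titan V ~1.25-1.3x a 1080 Ti
    for titan_ratio in (1.25, 1.30):
        print(f"Implied Turing vs 1080 Ti: {1.5 * titan_ratio:.2f}x")
    # -> roughly 1.88x to 1.95x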
Texter:

Some Volta owners are probably under another impression, as the Quadro version isn't even six months old... some of them probably thought nVidia was selling them a top-of-the-line GPU at $9k, not a skipped architecture. 😕 Each architecture may have its own merits, dunno...
We don't know the FP64 performance of the Quadro RTX series, or whether the GDDR6 has ECC support. Nvidia could be targeting one Quadro at high-precision workloads and another at compute/INT4 workloads.
Usually when I question something I seek knowledge/advice from experts in the field.
Denial:

Idk, the entire "revolutionary" bit is related to raytracing - other than that, it's just more of the same with some slight architecture tweaks. I don't really find the word "revolutionary" synonymous with "affordable".
I think if you merge the individual components that are required for this to work, you can begin to realize it is a new architecture. The quote and link below pretty much describe the pieces that make up the architecture:
NVIDIA Breaks New Ground with Turing GPU Architecture

In a bid to reinvent computer graphics and visualization, NVIDIA has developed a new architecture that merges AI, ray tracing, rasterization, and computation. The new architecture, known as Turing, was unveiled this week by NVIDIA CEO Jensen Huang in his keynote address at SIGGRAPH 2018.

A key element in the Turing architecture is the RT Cores, a specialized bit of circuitry that enables real-time ray tracing for accurate shadowing, reflections, refractions, and global illumination. Ray tracing essentially simulates light, which sounds simple enough, but it turns out to be very computationally intense. As the above product chart shows, the new Quadros can simulate up to 10 billion rays per second, which would be impossible with a more generic GPU design.

The on-board memory is based on GDDR6, which is something of a departure from the Quadro GV100, which incorporated 32GB of HBM2 memory. Memory capacity on the new RTX processors can be effectively doubled by hooking two GPUs together via NVLink, making it possible to hold larger images in local memory.

As usual, the SM will supply compute and graphics rasterization, but with a few twists. With Turing, NVIDIA has separated the floating point and integer pipelines so that they can operate simultaneously, a feature that is also available in the Volta V100. This enables the GPU to do address calculations and numerical calculations at the same time, which can be a big time saver. As a result, the new Quadro chips can deliver up to 16 teraflops and 16 teraops of floating point and integer operations, respectively, in parallel. The SM also comes with a unified cache with double the bandwidth of the previous generation architecture.

Perhaps the most interesting aspect of the new Quadro processors is the Turing Tensor Cores. For graphics and visualization work, the Tensor Cores can be used for things like AI-based denoising, deep learning anti-aliasing (DLAA), frame interpolation, and resolution scaling. These techniques can be used to reduce render time, increase image resolution, or create special effects.

The Turing Tensor Cores are similar to those in the Volta-based V100 GPU, but in this updated version NVIDIA has significantly boosted tensor calculations for INT8 (8-bit integer), which are commonly used for inferencing neural networks. In the V100, INT8 performance topped out at 62.8 teraops, but in the Quadro RTX chips, this has been boosted to a whopping 250 teraops. The new Tensor Cores also provide an INT4 (4-bit integer) capability for certain types of inferencing work that can get by with even less precision. That doubles the tensor performance to 500 teraops – half a petaop. The new Tensor Cores also provide 125 teraflops for FP16 data – same as the V100 – if for some reason you decide to use the Quadro chips for neural net training.
https://www.top500.org/news/nvidia-breaks-new-ground-with-turing-gpu-architecture/
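To put the quoted 10 billion rays per second in perspective, here is a rough estimate under an assumed 4K/60 gaming scenario (the resolution and frame rate are my assumptions; only the ray rate comes from the article):

    # How far does 10 Grays/s stretch at 4K60?
    width, height, fps = 3840, 2160, 60
    pixels_per_second = width * height * fps   # ~498 million
    rays_per_second = 10e9                     # Quadro RTX figure quoted above

    rays_per_pixel = rays_per_second / pixels_per_second
    print(f"{rays_per_pixel:.0f} rays per pixel per frame")   # ~20

    # Those ~20 rays must cover primary visibility plus shadow, reflection
    # and GI bounces - hence the reliance on Tensor Core denoising to make
    # low sample counts look clean.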
pharma:

Usually when I question something I seek knowledge/advice from experts in the field. I think if you merge the individual components that are required for this to work, you can begin to realize it is a new architecture. The quote and link below pretty much describe the pieces that make up the architecture: https://www.top500.org/news/nvidia-breaks-new-ground-with-turing-gpu-architecture/
I like how they boosted tensor performance compared to the 1st-gen tensor cores. So this is basically the tensor performance of the 2080 GTX. I think INT4 and INT8 will really shine; no one would notice whether it's 100% accurate or 50-75% less accurate while in motion. If they can use it for shadows and AO, then maybe even for PhysX FleX: smoke, particles, fluids.
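The INT8/INT4 point is essentially how quantized inference works: values are mapped to low-precision integers with a scale factor, trading a small, usually imperceptible accuracy loss for large throughput gains. A minimal illustrative sketch (not Nvidia's actual pipeline; the array and the symmetric scaling scheme are made up for the example):

    import numpy as np

    def quantize_int8(x):
        """Map float values to int8 using a single symmetric scale factor."""
        scale = np.abs(x).max() / 127.0
        q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
        return q, scale

    weights = np.random.randn(1000).astype(np.float32)
    q, scale = quantize_int8(weights)
    max_err = np.abs(q.astype(np.float32) * scale - weights).max()
    print(f"max round-trip error: {max_err:.4f}")  # tiny next to typical weights

    # The quoted throughput scaling follows from halving the precision:
    fp16_tflops = 125
    print(fp16_tflops * 2, fp16_tflops * 4)  # 250 INT8 teraops, 500 INT4 teraops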
Fox2232:

ATi constantly delivered better IQ and implemented higher DX HW levels sooner. One of those striking moments was the era of the GF4 Titanium: powerful cards from nVidia, with IQ comparable to ATi... as long as the game was DX 8.0 only, because ATi already had DX 8.1 and there were some games using it. And that was not the worst thing: nVidia released tons of DX7-only cards in the GF4 line. That held game development back for at least 2 years, as people would not just replace their new DX7 cards, which performed reasonably well.
Ah yes, the infamous GeForce 4 MX. I remember one of my friends bragging that he had bought a GeForce 4. He'd bought the cheaper MX version, but it was still a GeForce 4, so he was happy! I and others had to break it to him that it was just a rebranded GeForce 2, and he promptly returned it. The thing is, he wasn't a complete PC noob, but he still got taken in by the misleading branding. Gotta wonder how many others were duped into buying the MX? (Probably a lot.) Needless to say, it ranks as one of the most deceptive products in history.