Nvidia Turing GeForce 2080 (Ti) architecture review

It all depends on the quality of the model training. Nvidia always exaggerates, so temper your expectations. It is still impressive and the only logical way forward, both the machine learning and the partial ray tracing. If AMD were more liquid financially, we would have had these a bit earlier. People also forget that we will have an Intel GPU in a couple of years. Things are bound to be more interesting then.
That's a lot of new tech packed into these new video cards. Great review, @Hilbert Hagedoorn. It looks good, but the price is what turns off many people. Nevertheless, it's great to see new technology moving forward.
Any new RTX tech/process adopted by competitors will need to be different so as not to infringe on Nvidia's hardware and software IP. In that aspect it may take longer to develop as opposed to licensing.
I personally think these first RTX cards will be a flop. They will sell, but people have already realized that with RT on these cards won't push out the performance they expected. Real-time ray tracing might be a cool new feature, but the hardware to run that type of tech is not here yet. I don't care too much about ray tracing if you can only hit 30-60 fps at 1080p. 🙁
Strictly aesthetically speaking, I really dislike the MSI Gaming this time around, the design is a mess. I wonder how much better the cooling is compared to the Duke, which I find to look a lot better. And of course I'm looking forward to all the Turing reviews like everybody else, despite the fact that I'll most likely skip this generation. Nice preview boss.
https://image.ibb.co/b2QU0U/caesar2.jpg
"The lines up top marked “1 Turing frame” are the key. For Nvidia’s other GPUs, it would consist of just one portion—the yellow FP32 line. But it’s more complicated in Turing. While that standard FP32 shader processing takes place, the dedicated RT cores and integer pipeline are executing their own specialized tasks at the same time. Once all that’s done, everything’s handed off to the tensor cores for the final 20 percent of a frame, which perform their machine learning-enhanced magic—such as denoising ray traced images or applying Deep Learning Super Sampling—in games that utilize Nvidia’s RTX technology stack."
Source: PC WORLD
https://image.ibb.co/euKuZp/caesar.jpg
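To put rough numbers on that description, here is a toy timing model of the quoted breakdown, in which the FP32, integer and RT-core work overlap and a tensor-core pass takes the final ~20 percent of the frame. It is only an illustration of the article's wording; the per-stage millisecond figures are invented placeholders, not measurements of any GPU.

```cpp
// Toy timing model of the quoted frame breakdown: FP32 shading, the integer
// pipeline and the RT cores run concurrently, then the tensor cores run a
// final pass (denoising / DLSS) pegged at roughly 20% of the frame.
// The millisecond figures are invented placeholders, not measurements.
#include <algorithm>
#include <cstdio>

int main() {
    // Hypothetical durations for the concurrent phase of one frame (ms).
    const double fp32_ms = 9.0;  // FP32 shading work
    const double int_ms  = 6.0;  // integer pipeline, running alongside FP32
    const double rt_ms   = 8.0;  // RT-core BVH traversal / intersection work

    // The concurrent phase is bounded by its slowest participant, not the sum.
    const double concurrent_ms = std::max({fp32_ms, int_ms, rt_ms});

    // Tensor-core pass at the end, ~20% of the whole frame per the quote:
    // total = concurrent + 0.2 * total  =>  total = concurrent / 0.8
    const double total_ms  = concurrent_ms / 0.8;
    const double tensor_ms = total_ms - concurrent_ms;

    std::printf("concurrent FP32/INT/RT phase: %.2f ms\n", concurrent_ms);
    std::printf("tensor-core pass:             %.2f ms\n", tensor_ms);
    std::printf("modeled frame time:           %.2f ms (%.1f fps)\n",
                total_ms, 1000.0 / total_ms);
    return 0;
}
```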
I can't afford this. I'm poor 🙁 *sad, crying face* Edit: also, good stuff Hilbert, thanks a lot :´)
Somehow I feel that all the effort will be thrown on the backs of game developers, who will need to recode their engines specifically for Turing to take advantage of the new stuff. And because the market for these will be so tiny in the first year or so... they won't bother, unless NV pays them big to do it. Without all the new fancy mumbo-jumbo in the game code, the mighty RTX cards will probably be, as all estimates show so far, only 15-25% faster than their Pascal counterparts, based on the raw number of FP32 cores, and that only at 4K. Lower resolutions will see even less gain. Less than a week left, we'll see.
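For the raw-FP32 part of that estimate, the back-of-the-envelope math is easy to reproduce. The sketch below uses the commonly published core counts and boost clocks for the 1080 Ti and 2080 Ti; real boost clocks vary per board and workload, so treat the output as a rough ceiling, not a performance prediction.

```cpp
// Back-of-the-envelope FP32 comparison behind the "15-25%" figure.
// Core counts and boost clocks are the commonly published reference/Founders
// specs; real boost clocks vary per board, so this is a rough ceiling only.
#include <cstdio>

struct Gpu {
    const char* name;
    int fp32_cores;    // CUDA core count
    double boost_ghz;  // advertised boost clock in GHz
};

// Peak FP32 TFLOPS = cores * 2 ops per clock (FMA) * clock
double peak_tflops(const Gpu& g) {
    return g.fp32_cores * 2.0 * g.boost_ghz / 1000.0;
}

int main() {
    const Gpu pascal = {"GTX 1080 Ti", 3584, 1.582};
    const Gpu turing[] = {
        {"RTX 2080 Ti (reference)", 4352, 1.545},
        {"RTX 2080 Ti (Founders)",  4352, 1.635},
    };

    const double base = peak_tflops(pascal);
    std::printf("%-26s %5.2f TFLOPS\n", pascal.name, base);
    for (const Gpu& g : turing) {
        const double t = peak_tflops(g);
        std::printf("%-26s %5.2f TFLOPS  (+%.0f%% raw FP32)\n",
                    g.name, t, 100.0 * (t / base - 1.0));
    }
    return 0;
}
```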
Administrator
Agent-A01:

Can you post the full photo of the DLSS comparison?
Added, just click the links under the image. Be advised though, the images are provided by NVIDIA, so I cannot vouch for them until I have tested it myself in games. The live demos I played at the NV event looked pretty sweet, though.
I've ordered the Duke 2080 Ti; after seeing it in the case on here... wow, I'm really happy I did 😀
pharma:

Any new RTX tech/process adopted by competitors will need to be different so as not to infringe on Nvidia's hardware and software IP. In that aspect it may take longer to develop as opposed to licensing.
That's just a load of crap. Ray tracing is something out of the 1960s. Intel had a demo for Quake 3. It is also a DirectX extension. Every vendor can comply with it in any way they see fit. Nvidia chose dumb hardware units.
PrMinisterGR:

That's just a load of crap. Ray tracing is something out of the 1960s. Intel had a demo for Quake 3. It is also a DirectX extension. Every vendor can comply with it in any way they see fit. Nvidia chose dumb hardware units.
They made it work, somehow, at some visual quality. (The BF5 statements here suggest that IQ will be lower than in the demos in exchange for additional performance... which the 2070 needs anyway to reach 60 fps.) And that method got tied to DX12 in a way that is exposed to nV's driver. The same calls should be exposed to AMD's and Intel's drivers. And as you stated, the implementation method at the HW level is each GPU manufacturer's own business. @pharma: AMD is not dumb; they can design CPUs and GPUs able to handle many different types of instructions. Adding support for a few more, or designing special units for some purpose as nV did, is not really a problem. The question is: "How much time have they had since DX12 ray tracing was finalized? And were they involved/informed by MS during the development phase?"
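For context, the vendor-neutral entry point being described is just a DirectX 12 feature query: a game asks whatever driver is installed whether it exposes a DXR tier, without caring whose silicon answers. A minimal probe (Windows 10 SDK 1809 or newer, error handling trimmed) looks roughly like this:

```cpp
// Minimal DXR capability probe: the query is the same whichever vendor's
// driver answers it; how the tier is implemented in hardware is up to them.
// Requires a Windows 10 SDK with DXR support (1809+); link against d3d12.lib.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>

int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("No D3D12 device available.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    const bool hasDxr =
        SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))) &&
        opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;

    std::printf("DXR (DirectX Raytracing) exposed by the driver: %s\n",
                hasDxr ? "yes" : "no");
    device->Release();
    return 0;
}
```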
PrMinisterGR:

Nvidia chose dumb hardware units.
Actually, I think they are using the Tensor cores from Volta, repurposed for ray-tracing calculations with some firmware work. The sudden appearance of these "magical" RT cores, and the fact that this launches so close to the Volta-based Titan V, is very likely not a coincidence. I imagine the discussion inside Nvidia's boardroom:
Management: "So, what do we do with our Tensor cores? They are kind of useless for anything but AI inferencing."
Engineer 1: "Errrr... I think we can do other calculations with them."
Management: "Like what?"
Engineer 1: "Something simple enough... perhaps some geometric intersection?"
Engineer 2: "Hey guys, how about... ray tracing?"
Management: "Can you do that?"
Engineer 2: "Yes, I think we can..."
Engineer 2: "... but it won't be very fast... certainly not real-time."
Engineer 1: "Maybe if we render some low-res ray tracing and upscale, it will be fast enough?"
Management: "Brilliant! We'll have to figure out how to market that properly."
Marketing: "Hehehehe, no worries! Stupid gamers will believe anything we tell them."
Management: "Good. We're done here, get to work!"
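Joking aside, the "geometric intersection" in that exchange is a concrete workload: the fixed-function RT units exist to answer enormous numbers of ray-triangle and ray-box queries during BVH traversal. For reference, below is the textbook Möller-Trumbore ray-triangle test that such hardware effectively offloads from shader code; it is a standard routine for illustration, not anything from Nvidia's implementation.

```cpp
// Moller-Trumbore ray/triangle intersection: the kind of query Turing's RT
// cores answer in fixed-function hardware instead of shader ALUs.
// Standard textbook routine for illustration; not Nvidia's implementation.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Returns true (and the hit distance t) if the ray orig + t*dir crosses
// the triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, double& t) {
    const double kEps = 1e-9;
    const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    const Vec3 p  = cross(dir, e2);
    const double det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;  // ray parallel to the triangle
    const double invDet = 1.0 / det;
    const Vec3 s = sub(orig, v0);
    const double u = dot(s, p) * invDet;
    if (u < 0.0 || u > 1.0) return false;     // outside barycentric range
    const Vec3 q = cross(s, e1);
    const double v = dot(dir, q) * invDet;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * invDet;
    return t > kEps;                          // hit must be in front of the ray
}

int main() {
    double t = 0.0;
    const bool hit = rayTriangle({0, 0, -5}, {0, 0, 1},
                                 {-1, -1, 0}, {1, -1, 0}, {0, 1, 0}, t);
    std::printf("%s, t = %.2f\n", hit ? "hit" : "miss", t);  // hit, t = 5.00
    return 0;
}
```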
Awesome! Thanks for the write-up and pics. I really love the look of the stock cards; they are the best they have ever been, by a long shot. Myself, I have my eye on the MSI RTX 2080 Ti Gaming TRIO, it looks really nice. I'm dying to read your mega review and see how the Ti does; I really hope it will be 50% over a 1080 Ti. We'll see soon what the 411 is. Happy testing/reviewing! 🙂 [Edit] What the heck, I just noticed the MSI RTX 2080 Ti Gaming TRIO has three power inputs!? Is that a first? It has 2x 4 and 1x 3, that is crazy!
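On the three power inputs: assuming the layout commonly listed for that card, two 8-pin plus one 6-pin connectors (worth confirming on the board itself), the PCIe specification ceilings add up quickly. A quick tally:

```cpp
// Theoretical power budget for a card with two 8-pin and one 6-pin PCIe
// connectors (the layout commonly listed for this board; check the card
// itself). Figures are PCIe specification ceilings, not measured draw.
#include <cstdio>

int main() {
    const int slot_w      = 75;   // PCIe x16 slot
    const int six_pin_w   = 75;   // one 6-pin PEG connector
    const int eight_pin_w = 150;  // each 8-pin PEG connector

    const int total_w = slot_w + six_pin_w + 2 * eight_pin_w;
    std::printf("Spec power budget: %d W\n", total_w);  // 450 W
    return 0;
}
```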
Their approach makes sense guys.
wavetrex:

Somehow I feel that all the effort will be thrown on the backs of game developers, who will need to recode their engines specifically for Turing to take advantage of the new stuff. And because the market for these will be so tiny in the first year or so... they won't bother, unless NV pays them big to do it. Without all the new fancy mumbo-jumbo in the game code, the mighty RTX cards will probably be, as all estimates show so far, only 15-25% faster than their Pascal counterparts, based on the raw number of FP32 cores, and that only at 4K. Lower resolutions will see even less gain. Less than a week left, we'll see.
Yuppers. If this wasn't a panicked rush job, Nvidia would have had several game developers under NDA since January. Nvidia knows what is involved in the making of a game; the timing of the roll-out was a bug, not a feature. Again, the node.
This doesn't feel like a rush job at all. Possibly doing this at "12nm" was, but this could have been the silicon after Maxwell.
PrMinisterGR:

This doesn't feel like a rush job at all. Possibly doing this at "12nm" was, but this could have been the silicon after Maxwell.
Au contraire, mon frère. This is exactly a rush job. The foundry developments are outside of their control, but not outside their foresight. Nvidia were complacent while others kept abreast of developments at the fabs, specifically TSMC. 7nm was the crowning achievement and retirement note of TSMC's founder, Morris Chang. At this very moment they are in full production at 7nm, and are booked for full production for the next six months, with contracts for Apple, AMD, and Qualcomm at guaranteed production targets. The participation of those companies is a direct result of their help capitalizing the spanking-new fabs. Nvidia had a chance to participate and passed. Now they're desperate, along with Intel, for the very similar reason of being rudely awoken.