Nvidia Turing GeForce 2080 (Ti) architecture review


Reader comments for the Nvidia Turing GeForce 2080 (Ti) architecture review, from our message forum.
I like the look of the Duke better than the Gaming Trio.
DLSS =
- take the angle at which 2 edges intersect
- take the color information at the edge
- load the resulting pixel values from a database

Variable rate shading =
- 1/2 precision
- 1/4 precision
- 1/8th precision
- and lovely 1/16th precision

This for sure boosts performance (rough cost sketch below). Gamers will have mandatory cameras tracking their eyes...

TSS =
- bake results into a texture on the fly
- use old information to skip actual work
- update the baked-in texture from time to time (or at whatever rate you feel comfortable with)
- probably would not be as bad as it seems if we had higher than 16xAF

To sum it up: the new features can be paraphrased as "a way to cheap out on IQ for a performance gain." Maybe good for 8K, somewhat OK for 4K. But the hit to per-pixel quality at 1440p will be unpleasant. At 1080p, unacceptable.
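For a sense of scale, here is a rough Python sketch of how much per-pixel shading work those coarse rates skip at common resolutions. It is purely illustrative (not Nvidia's API); the resolutions and the mapping of the 1/2 to 1/16 figures onto tile sizes are assumptions for the sake of the arithmetic:

```python
# Rough sketch: how coarse shading rates trade shader work for image quality.
# The rates below correspond to the 1/2, 1/4, 1/8 and 1/16 "precision" figures
# mentioned above; resolutions are chosen only for illustration.

RATES = {"1x1 (full)": 1, "1x2": 2, "2x2": 4, "4x2": 8, "4x4": 16}

def shader_invocations(width, height, coarse_pixels):
    """Approximate pixel-shader invocations if every tile used this rate."""
    return width * height // coarse_pixels

for res_name, (w, h) in {"1080p": (1920, 1080),
                         "1440p": (2560, 1440),
                         "4K": (3840, 2160)}.items():
    full = shader_invocations(w, h, 1)
    for rate, px in RATES.items():
        saved = 1 - shader_invocations(w, h, px) / full
        print(f"{res_name} @ {rate}: ~{saved:.0%} of shading work skipped")
```

The fraction of skipped work is the same at every resolution, which is why the comment above argues the trade-off only looks acceptable where there are pixels to spare (4K/8K).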
Same here, Duke all the way. Regarding the architecture, it's clear where the gaps are that will be filled by the 2080+ and the Titan X (looks like there won't be a 2070 Ti). Anticipation for the reviews is at like an 11.
Wow, this RTX 2080 is crippled like shit on tensor cores compared to the Ti variant, never mind the 2070...
OC/2200 MHz o_O nice
-Tj-:

Wow, this RTX 2080 is crippled like crap on tensor cores compared to the Ti variant, never mind the 2070...
Yeah, comparatively small gaps from the 2070 up to the 2080, and then the 2080 Ti comes along and massively increases the gap in specs. I like the look of the Duke compared with the other MSI card, and, like someone already mentioned, I also like the Nvidia Founders aesthetic. Looking forward to the reviews; I will read up on the associated architecture tech a bit more once the card reviews are out. I understand/know some of it, but not all.
The DLSS comparison shot is really tiny.
At least this way the 2070 won't be made from second-rate chips that weren't good enough for the 2080.
C'mon HH, show us the numbers =D
@Hilbert Hagedoorn Thanks for the article. Can you post the full photo of the DLSS comparison? Also, can you comment on this: "Another important change with NVLink SLI is that now each GPU can access the other's memory in a cache-coherent way, which lets them combine framebuffer sizes - something that was not possible with SLI before. The underlying reason is that the old SLI link was used only to transfer the final rendered frames to the master GPU, which would then combine them with its own frame data and then output the combined image on the monitor. In framebuffer-combined mode, each GPU will automatically route memory requests to the correct card no matter which GPU is asking for which chunk of memory." You said that RTX cannot share memory. Is this a software limitation where shared memory access is limited to 'prosumer' cards?
Agent-A01:

@Hilbert Hagedoorn Thanks for the article. Can you post the full photo of the DLSS comparison? Also, can you comment on this: "Another important change with NVLink SLI is that now each GPU can access the other's memory in a cache-coherent way, which lets them combine framebuffer sizes - something that was not possible with SLI before. The underlying reason is that the old SLI link was used only to transfer the final rendered frames to the master GPU, which would then combine them with its own frame data and then output the combined image on the monitor. In framebuffer-combined mode, each GPU will automatically route memory requests to the correct card no matter which GPU is asking for which chunk of memory." You said that RTX cannot share memory. Is this a software limitation where shared memory access is limited to 'prosumer' cards?
It does not matter for gaming. Sharing even 1 GB of VRAM over the ~100 GB/s available to the 2080 Ti will add 10 ms to rendering time (ignoring the actual latency of the random accesses themselves). You would be better off paying for an extra 1 GB of VRAM on each card and for an additional 32-bit memory-controller channel to feed that 1 GB.
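The 10 ms figure is simple bandwidth arithmetic; a minimal sketch, assuming the roughly 100 GB/s NVLink figure quoted for the 2080 Ti above and ignoring per-access latency:

```python
# Minimal sketch of the bandwidth math above: time to pull 1 GB of remote VRAM
# over the NVLink link, assuming ~100 GB/s and ignoring per-access latency.

link_bandwidth_gb_s = 100   # GB/s, figure quoted for the 2080 Ti above
remote_data_gb = 1          # GB of VRAM fetched from the other card

transfer_ms = remote_data_gb / link_bandwidth_gb_s * 1000
print(f"~{transfer_ms:.0f} ms just to move {remote_data_gb} GB")  # ~10 ms
```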
First of all, very nice article, I managed to understand most of it... Can't wait for the review! That Duke card is really a looker, although my Duke card looks even better. Too bad MSI managed to ruin the looks of the Gaming version... Nvidia's FE cards look a lot like the cards from Palit, at least to me. Also, $79 for an SLI bridge??? Nvidia needs to stop copying Apple's business practices...
Just wow at these prices. Given that there is no need for any of these products, the enthusiast market will quickly be tapped, and this is exactly why the 10xx series is slow-walking its E.O.L. Even a competitive gamer can buy a 1080 Ti (at its lowest price ever) for half as much and have better performance in some scenarios. My gawd, you can even buy a 1080 and a G-Sync monitor together for less money... I'm shocked I can even say that. All of this just to beat out CES and AMD's announcement of the first mid-level card with high-end performance.
Yeah, what gives with these prices? They're stupid, man... 🙁 For real, I can remember a time not too long ago when the top dawg went for 600 bucks.
Great job Hilbert! I know that everybody is justified in talking about the prices, but the tech itself looks crazy. I am genuinely impressed, especially considering that Nvidia didn't have to innovate at all: Pascal with 40% more shader engines would have been enough for everyone to be satisfied, and much cheaper to produce. Applause for the leather jacket. I expect the initial benchmarks to be underwhelming, as this is a new architecture, but once Nvidia has a good driver for it, it should be night and day compared with Pascal. The potential of the Tensor cores alone is insane.
MK80:

OC/2200 MHz o_O nice
I can do ~2100 with my 1080 if I want to, and it runs at 1930-ish out of the box in the games that I play. 2200 is not impressive at all (but it is expected, since this is basically the same node, just slightly refined), and it will probably consume HUGE amounts of power at that frequency, given that Turing is a bigger chip with all those fancy semi-useless raytracing corez. Really curious what the benchmarks will show compared to SIMILARLY PRICED Pascal: so 2070 results vs the 1080 Ti, not vs the 1070.
Can't wait to see testing results. I have this sneaking suspicion that the improvements are not that great unless games are making use of new features. Just going off the onslaught of pre-launch marketing that we have seen.
PrMinisterGR:

Great job Hilbert! I know that everybody is justified in talking about the prices, but the tech itself looks crazy. I am genuinely impressed, especially considering that Nvidia didn't have to innovate at all: Pascal with 40% more shader engines would have been enough for everyone to be satisfied, and much cheaper to produce. Applause for the leather jacket. I expect the initial benchmarks to be underwhelming, as this is a new architecture, but once Nvidia has a good driver for it, it should be night and day compared with Pascal. The potential of the Tensor cores alone is insane.
Haven't seen you in a bit - how are things going?
Agent-A01:

@Hilbert Hagedoorn Thanks for the article. Can you post the full photo of the DLSS comparison? Also, can you comment on this: "Another important change with NVLink SLI is that now each GPU can access the other's memory in a cache-coherent way, which lets them combine framebuffer sizes - something that was not possible with SLI before. The underlying reason is that the old SLI link was used only to transfer the final rendered frames to the master GPU, which would then combine them with its own frame data and then output the combined image on the monitor. In framebuffer-combined mode, each GPU will automatically route memory requests to the correct card no matter which GPU is asking for which chunk of memory." You said that RTX cannot share memory. Is this a software limitation where shared memory access is limited to 'prosumer' cards?
I think it's more limited to prosumer workloads. The access latency across NVLink is too high to complete rasterization tasks in the time required for realtime frame rendering. With certain workloads like data modeling, the card doesn't have to spit out a frame in X ms; it can take longer, in which case this can be useful. There may be upcoming graphics techniques where this can be applied, but at the moment mGPU/SLI will continue to just share the framebuffer at the end.
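To put that in frame-time terms, a small sketch comparing the ~10 ms remote-access estimate from earlier in the thread against common realtime frame budgets (the 10 ms penalty is taken from that estimate, not from Nvidia's specs):

```python
# Sketch: why a ~10 ms remote-VRAM penalty is fatal for realtime rendering
# but tolerable for offline/prosumer jobs. Frame budgets are simple math;
# the 10 ms penalty comes from the bandwidth estimate earlier in the thread.

remote_penalty_ms = 10.0

for fps in (60, 144):
    budget_ms = 1000 / fps
    print(f"{fps} fps budget: {budget_ms:.1f} ms -> "
          f"{remote_penalty_ms / budget_ms:.0%} eaten by remote access")

# An offline render that takes seconds per frame barely notices the same 10 ms.
```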
Denial:

Haven't seen you in a bit - how are things going? I think it's more limited to prosumer workloads. The access latency across NVLink is too high to complete rasterization tasks in the time required for realtime frame rendering. With certain workloads like data modeling, the card doesn't have to spit out a frame in X ms; it can take longer, in which case this can be useful. There may be upcoming graphics techniques where this can be applied, but at the moment mGPU/SLI will continue to just share the framebuffer at the end.
Hey man, just fine, but in a different job role. I have missed you guys 🙂 I actually believe that AMD will compete with all of this very soon, just not in the ultra high end. 750+ mm² dies are an insane risk.