Review: Nvidia Turing architecture - GeForce RTX 2080 and 2080 Ti
Darksword
I like the look of the Duke better than the Gaming Trio.
Fox2232
DLSS =
- Take the angle at which two edges intersect
- take the color information at the edge
- load the resulting pixel values from a database
Variable rate shading =
- 1/2 precision
- 1/4 precision
- 1/8 precision
- and lovely 1/16 precision
This for sure boosts performance. Gamers will have a mandatory camera tracking their eyes... (see the sketch below)
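For the curious, those fractions map to coarse shading tiles: one shaded sample reused across 2, 4, 8, or 16 pixels. Here is a minimal CPU-side sketch of how a per-tile rate could be chosen around a gaze point; the tile size, the distance thresholds, and the function name are illustrative assumptions, not Nvidia's API:

#include <cmath>
#include <cstdio>

// Coarse shading rates in a Turing-style VRS scheme: one shaded sample
// covers 1, 2, 4, 8, or 16 pixels.
enum ShadingRate { RATE_1X1, RATE_1X2, RATE_2X2, RATE_2X4, RATE_4X4 };

// Pick a rate for a 16x16-pixel tile from its distance to the gaze point.
// Distances are in tiles; the thresholds are made-up numbers.
ShadingRate rateForTile(int tx, int ty, float gazeX, float gazeY) {
    float dx = tx + 0.5f - gazeX, dy = ty + 0.5f - gazeY;
    float d = std::sqrt(dx * dx + dy * dy);
    if (d < 10.0f) return RATE_1X1;  // fovea: full per-pixel shading
    if (d < 20.0f) return RATE_1X2;  // 1/2 rate
    if (d < 30.0f) return RATE_2X2;  // 1/4 rate
    if (d < 40.0f) return RATE_2X4;  // 1/8 rate
    return RATE_4X4;                 // periphery: the lovely 1/16 rate
}

int main() {
    // Fill a shading-rate image for a 1080p target (120x68 tiles of 16x16 px),
    // with the gaze pinned to the screen centre for this example.
    const int tilesX = 120, tilesY = 68;
    int counts[5] = {0, 0, 0, 0, 0};
    for (int ty = 0; ty < tilesY; ++ty)
        for (int tx = 0; tx < tilesX; ++tx)
            ++counts[rateForTile(tx, ty, tilesX / 2.0f, tilesY / 2.0f)];
    std::printf("tiles per rate (1/1 .. 1/16): %d %d %d %d %d\n",
                counts[0], counts[1], counts[2], counts[3], counts[4]);
}

The eye-tracking quip is exactly this foveated use case: the farther a tile sits from the gaze point, the coarser its shading.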
TSS =
- Bake results into a texture on the fly
- use old information to skip actual work
- update the baked-in texture from time to time (or at whatever rate you feel comfortable with; see the sketch below)
- probably would not be as bad as it seems if we had higher than 16x AF
To sum it up: the new features can be paraphrased as "ways to cheap out on IQ for a performance gain."
Maybe good for 8K, somewhat OK for 4K. But the hit to per-pixel quality at 1440p will be unpleasant, and at 1080p unacceptable.
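To make the texture-space shading point above concrete, here is a rough sketch of the bake-and-reuse loop it describes. The struct, the stand-in shading function, and the 4-frame refresh interval are assumptions for illustration, not how any engine actually implements it:

#include <cstdint>
#include <cstdio>
#include <vector>

struct Texel { float r, g, b; };

// Stand-in for a real material/lighting evaluation (the expensive part).
Texel expensiveShade(int x, int y, uint64_t frame) {
    return { x * 0.001f, y * 0.001f, frame * 0.1f };
}

struct ShadingCache {
    std::vector<Texel> texture;  // baked shading results, kept across frames
    int width, height;
    int refreshInterval;         // re-bake every N frames (the "comfort" knob)

    ShadingCache(int w, int h, int n)
        : texture(size_t(w) * h), width(w), height(h), refreshInterval(n) {}

    // "Update the baked-in texture from time to time": skip most frames.
    void update(uint64_t frame) {
        if (frame % refreshInterval != 0) return;  // reuse old data instead
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                texture[size_t(y) * width + x] = expensiveShade(x, y, frame);
    }

    // Every frame, geometry just samples the cached texture (the cheap part).
    Texel sample(float u, float v) const {
        int x = int(u * (width - 1)), y = int(v * (height - 1));
        return texture[size_t(y) * width + x];
    }
};

int main() {
    ShadingCache cache(256, 256, 4);  // full re-bake only every 4th frame
    for (uint64_t frame = 0; frame < 8; ++frame) {
        cache.update(frame);
        Texel t = cache.sample(0.5f, 0.5f);
        std::printf("frame %llu sees baked value %.1f\n",
                    (unsigned long long)frame, t.b);
    }
}

Frames between re-bakes see stale shading, which is exactly the IQ-for-performance trade the summary above complains about.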
SniperX
Same here, Duke all the way. Regarding the architecture, it's clear where the gaps are that will be filled by the 2080+ and the Titan X (looks like there won't be a 2070 Ti). Anticipation for the reviews is at like an 11.
-Tj-
Wow, this RTX 2080 is crippled like shit on tensor cores compared to the Ti variant, never mind the 2070...
MK80
OC @ 2200 MHz o_O nice
Monchis
The DLSS comparison shot is really tiny.
Kaarme
At least this way the 2070 won't be made using second-rate chips that weren't good enough for the 2080.
scoter man1
C'mon HH, show us the numbers =D
Agent-A01
@Hilbert Hagedoorn
Thanks for the article.
Can you post the full photo of the DLSS comparison?
Also, can you comment on this:
"Another important change with NVLink SLI is that now each GPU can access the other's memory in a cache-coherent way, which lets them combine framebuffer sizes- something that was not possible with SLI before. The underlying reason is that the old SLI link was used to only transfer the final rendered frames to the master GPU, which would then combine them with its own frame data and then output the combined image on the monitor. In framebuffer-combined mode, each GPU will automatically route memory requests to the correct card no matter which GPU is asking for which chunk of memory."
You said that RTX cards cannot share memory.
Is this a software limitation, where shared memory access is limited to 'prosumer' cards?
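For reference, the closest public mechanism for what that quote describes is CUDA peer-to-peer access, where one GPU maps the other's memory and requests are routed over the link. Below is a minimal sketch using the real CUDA runtime API; whether the GeForce driver actually reports peer access over NVLink on RTX cards is exactly the open question here, and the device IDs and buffer size are placeholders:

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // Ask the driver whether each GPU may map the other's memory.
    int can01 = 0, can10 = 0;
    cudaDeviceCanAccessPeer(&can01, 0, 1);
    cudaDeviceCanAccessPeer(&can10, 1, 0);
    std::printf("peer access 0->1: %d, 1->0: %d\n", can01, can10);
    if (!can01 || !can10) return 1;  // driver says no: the limitation in question

    const size_t bytes = 64u << 20;  // 64 MiB placeholder buffers
    void *buf0 = nullptr, *buf1 = nullptr;

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // GPU 0 may now dereference GPU 1 pointers
    cudaMalloc(&buf0, bytes);

    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);
    cudaMalloc(&buf1, bytes);

    // With peer access on, a kernel on either device could read or write both
    // buffers directly; here we just issue an explicit peer-to-peer copy.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}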
H83
First of all, very nice article, I managed to understand most of it... Can't wait for the review!
That Duke card is really a looker, although my Duke card looks even better. Too bad MSI managed to ruin the looks of the Gaming version... Nvidia's FE cards look a lot like the ones from Palit, at least to me.
Also, $79 for an SLI bridge??? Nvidia needs to stop copying Apple's business practices...
tunejunky
Just wow at these prices.
Given that there is no need for any of these products, the enthusiast market will quickly be tapped, and this is exactly why the 10xx series is slow-walking its EOL.
Even a competitive gamer can buy a 1080 Ti (at its lowest price ever) for half as much and get better performance in some scenarios.
My gawd, you can even buy a 1080 and a G-Sync monitor together for less money... I'm shocked I can even say that.
All of this just to beat CES and AMD's announcement of the first mid-level card with high-end performance.
StewieTech
Yeah, what gives with these prices? They're stupid, man... 🙁 For real, I can remember a time not too long ago when the top dawg went for 600 bucks.
PrMinisterGR
Great job Hilbert!
I know that everybody is justified in talking about the prices, but the tech itself looks crazy. I am genuinely impressed, especially considering that Nvidia didn't have to innovate at all. Pascal with 40% more shader engines would have been enough to satisfy everyone, and much cheaper to produce.
Applause for the leather jacket.
I expect the initial benchmarks to be underwhelming, as this is a new architecture, but once Nvidia has a good driver for it, it should be night and day compared to Pascal. The potential of the Tensor cores alone is insane.
JamesSneed
Can't wait to see the testing results. I have a sneaking suspicion that the improvements are not that great unless games make use of the new features, just going off the onslaught of pre-launch marketing we've seen.
Noisiv
This is just an appetizer! Let the crow eating begin:
https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf