Nvidia Turing GeForce 2080 (Ti) architecture review


Just a note, in case I'm perceived as being one-sided: Nvidia is an excellent company that I buy entirely too much from. That doesn't by itself make them wise men, though; they've goofed up before and they'll do it again. I'm definitely not saying ray tracing is a bust or a waste of time, but it was rushed in to gain market share, to create a "fact on the ground" and distract from the truth that the entire Pascal line should have been refreshed on a smaller node, while ray tracing should have waited for 7nm. GP102 and GP104 would crush it at 7nm, unchanged except for the node. The reality is that Navi will come in at less than half the price of the RTX 2080, less than the (cut-down) RTX 2070, and perform ridiculously well despite not being as efficient as a typical Nvidia design. $300-$400 for FreeSync-capable, 4K-ready cards sounds damn good to me, and most buyers (other than us) will ask "why bother spending more?"
Nvidia got a custom node with TSMC; it is beyond naive to believe they had no idea of the status of the new node. It seems more and more that this was a long-term 16nm design that came to fruition due to market pressure.
PrMinisterGR:

Nvidia got a custom node with TSMC; it is beyond naive to believe they had no idea of the status of the new node. It seems more and more that this was a long-term 16nm design that came to fruition due to market pressure.
Who said they didn't know? I said they passed.
Why would they do that, if it forces them to rely on 750+ mm² dies?
tunejunky:

Just a note, in case I'm perceived as being one-sided: Nvidia is an excellent company that I buy entirely too much from. That doesn't by itself make them wise men, though; they've goofed up before and they'll do it again. I'm definitely not saying ray tracing is a bust or a waste of time, but it was rushed in to gain market share, to create a "fact on the ground" and distract from the truth that the entire Pascal line should have been refreshed on a smaller node, while ray tracing should have waited for 7nm. GP102 and GP104 would crush it at 7nm, unchanged except for the node. The reality is that Navi will come in at less than half the price of the RTX 2080, less than the (cut-down) RTX 2070, and perform ridiculously well despite not being as efficient as a typical Nvidia design. $300-$400 for FreeSync-capable, 4K-ready cards sounds damn good to me, and most buyers (other than us) will ask "why bother spending more?"
I do not want to drag Navi into this, especially not after the good things that came my way. But at first it is unlikely to be competitive with nVidia's solutions at the 2070 level and above, due to the target sizes of the chips. And when it does come... it is not going to be here as quickly as some think. nVidia has their window, and until AMD starts showing demos, people will buy those cards at whatever price. It is similar to Intel's new soldered chips: until you see Zen 2 under a microscope...
PrMinisterGR:

Why would they do that, if it forces them to rely on 750+ mm² dies?
Cost estimation. That's why AMD's 7nm starts in the high-margin market.
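That cost argument is easy to sketch with the standard dies-per-wafer and Poisson yield formulas. A rough back-of-the-envelope follows; note that the defect densities and wafer prices in it are made-up illustrative assumptions, not actual TSMC figures:

```cpp
// Rough sketch of why huge dies on an immature node get expensive.
// All numbers (defect density D0, wafer cost) are illustrative guesses.
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Gross dies from a 300 mm wafer, with a crude edge-loss correction.
double dies_per_wafer(double die_mm2) {
    const double d = 300.0;  // wafer diameter, mm
    return PI * d * d / (4.0 * die_mm2) - PI * d / std::sqrt(2.0 * die_mm2);
}

// Poisson yield model: fraction of dies that catch zero defects.
double yield(double die_mm2, double d0_per_cm2) {
    return std::exp(-d0_per_cm2 * die_mm2 / 100.0);  // 100 mm² per cm²
}

int main() {
    struct Case { const char* name; double die_mm2, d0, wafer_cost; };
    Case cases[] = {
        {"754 mm2 die, mature 12nm (D0=0.10)", 754.0, 0.10,  6000.0},
        {"754 mm2 die, early 7nm  (D0=0.35)",  754.0, 0.35, 10000.0},
    };
    for (const Case& c : cases) {
        double good = dies_per_wafer(c.die_mm2) * yield(c.die_mm2, c.d0);
        std::printf("%s: ~%.0f good dies/wafer, ~$%.0f per good die\n",
                    c.name, good, c.wafer_cost / good);
    }
    return 0;
}
```

Even with generous guesses, a ~750 mm² die that yields acceptably on a mature node can cost several times as much per good die on a young one, which is consistent with keeping Turing on 12nm and starting 7nm with small, high-margin parts.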
So they know what they're doing, because that's my idea about it too. This doesn't feel like a rushed product at all.
Again, sticking to facts: if Samsung didn't believe in the progress of AMD/TSMC, they never would have built FreeSync into their 2018-onward line-ups. Also, guys... for real... Moore's Law is dead. That is why Nvidia, having pushed the silicon envelope for chip size with nowhere to go on the wafer, is desperate to differentiate. That's just a fact of the market. It's neither good nor bad, just the way it is, and they aren't the only ones behind the 8-ball; so is Intel. SoCs are the way of the future, so much so that we already think of some SoCs as "chips". The majority of R&D money is going into interposers, pitch size, materials, buses, and every related factor that can be improved, because getting smaller is turning out to be much bigger science than anyone thought.
That 2080 Ti Founders looks absolutely stunning in HH's test bench! 😀 One week to go for numbers!
tunejunky:

Again, sticking to facts: if Samsung didn't believe in the progress of AMD/TSMC, they never would have built FreeSync into their 2018-onward line-ups.
FreeSync is just the name for an already existing open standard that was already used in laptops. It is free, so Samsung and Microsoft are using it. It is also part of HDMI 2.1. And it is irrelevant to a conversation about Nvidia and their yields. It is obvious that Nvidia didn't believe they could have cards in the required quantities in time at 7nm. It is also obvious that Nvidia has a really special relationship with TSMC, hence the custom "12nm" node. If 7nm had been ready or good enough for Nvidia, Nvidia GPUs would be 7nm; it's as simple as that.
tunejunky:

Also, guys... for real... Moore's Law is dead. That is why Nvidia, having pushed the silicon envelope for chip size with nowhere to go on the wafer, is desperate to differentiate. That's just a fact of the market. It's neither good nor bad, just the way it is, and they aren't the only ones behind the 8-ball; so is Intel.
They could have just made a large Pascal with 6k CUDA cores and wrecked everything. It is quite obvious that pure rasterization is a dead end at this point; I don't know why people are so triggered by this.
tunejunky:

SoCs are the way of the future, so much so that we already think of some SoCs as "chips". The majority of R&D money is going into interposers, pitch size, materials, buses, and every related factor that can be improved, because getting smaller is turning out to be much bigger science than anyone thought.
If you mean chiplets, then you're probably right, but the main issues there are still latency and interposer costs.
If these numbers are true, then that explains the pricing. Nvidia will keep Pascal up to the 1070, and sell these as the top end.
I will be flabbergasted if these cards come anywhere near doing what nVidia is hyping them to do...;) We shall see. nV has never let me down on the hype yet--which is the reason I don't/won't buy their products. But each to his own...vee haf our vays und vee shall see...should be fun!
A few days ago I managed to pre-order the MSI GeForce RTX 2080 GAMING X TRIO on sale at Amazon. It was out of stock, but I had auto-notify enabled and I jumped on it when it (very briefly) came back in stock. 50 bucks off! I have to pay sales tax, but with Amazon Prime I get 5% back on the purchase.

The Trio has the third generation of the Torx fans; the Duke has the still-excellent second-generation ones. The Trio has two eight-pin power connectors, while the Duke has one eight and one six. I think that's likely irrelevant, but it's better to have it and not need it than to need it and not have it. On the one hand, the last generation was largely indifferent to above-reference power levels, but on the other hand nVidia has suggested that these chips could be good overclockers. Though on the gripping hand, nVidia's FE 2080 cards stick to one six and one eight. I think someone from nVidia suggested a max of either 270 or 280 watts being used after a full overclock. I can't swear that came from nVidia; it might have been the article's author extrapolating from nVidia's posted numbers.

Another thing: I saw an MSI rep do a long video (from late August) where he had "engineering samples", and he showed how the Trio uses a PCB that is longer than the reference design. He didn't commit, but he noted that that obviously suggested some extra MSI goodness going into that board.

The card is as heavy as heck (MSI is one of the very few who have posted full length, width, height, and weight numbers), but I checked some reviews of the last-generation Trio cards, and one reviewer used a laser to confirm that the card didn't sag when installed. MSI uses some internal bracing that does double duty by also cooling the VRMs and the MOSFETs, IIRC. Even heavier is the Trio Ti; it might be the heaviest card ever. The last generation included an optional brace as an accessory for the Trio cards; no word on whether either the new Trio or Trio Ti will get that, even though the 2080 versions are a tad heavier. Maybe the bracing has been improved? I don't have one of the newer elite motherboards with a reinforced PCI Express slot (I have an ASUS H170 Pro), so I'm a bit concerned.

P.S. It has that smaller third fan because hidden away on top of it is the NVLink connector; the piece hiding it from view appears to be removable. I've been saving for this for a while, and my current GTX 1070 will likely go to a very deserving friend of mine. Lol, not totally selfish here! 🙂
Agent-A01:

@Hilbert Hagedoorn Thanks for the article. Can you post the full photo of the DLSS comparison? Also, can you comment on this: "Another important change with NVLink SLI is that now each GPU can access the other's memory in a cache-coherent way, which lets them combine framebuffer sizes, something that was not possible with SLI before. The underlying reason is that the old SLI link was used only to transfer the final rendered frames to the master GPU, which would then combine them with its own frame data and output the combined image on the monitor. In framebuffer-combined mode, each GPU will automatically route memory requests to the correct card, no matter which GPU is asking for which chunk of memory." You said that RTX cannot share memory. Is this a software limitation, where shared memory access is limited to 'prosumer' cards?
Yeah, I'm a bit confused too. I also found this, posted on the nVidia devblog about 12 hours ago:
Turing TU102 and TU104 GPUs incorporate NVIDIA's NVLink™ high-speed interconnect to provide dependable, high-bandwidth, low-latency connectivity between pairs of Turing GPUs. With up to 100 GB/sec of bidirectional bandwidth, NVLink makes it possible for customized workloads to efficiently split across two GPUs and share memory capacity. For gaming workloads, NVLink's increased bandwidth and dedicated inter-GPU channel enable new possibilities for SLI, such as new modes or higher-resolution display configurations. For large memory workloads, including professional ray-tracing applications, scene data can be split across the framebuffers of both GPUs.
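For anyone wondering what that memory sharing looks like from the software side: on the compute side it has long been exposed through CUDA's peer-to-peer API, with NVLink (or PCIe) as the transport underneath. A minimal sketch follows (generic CUDA, error handling omitted); whether the driver permits it on a given GeForce vs. Quadro pairing is exactly the software-policy question raised above:

```cpp
// Minimal sketch of CUDA peer-to-peer access between two GPUs.
// Over NVLink, this is how one GPU can read the other's memory directly.
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // can GPU 0 map GPU 1's memory?
    if (!canAccess) { std::printf("P2P not available between GPU 0 and 1\n"); return 1; }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);           // map GPU 1's memory into GPU 0

    float *buf0, *buf1;
    cudaMalloc(&buf0, 1 << 20);                 // 1 MiB on GPU 0
    cudaSetDevice(1);
    cudaMalloc(&buf1, 1 << 20);                 // 1 MiB on GPU 1

    // This copy runs over the GPU-to-GPU link, without staging in host RAM.
    cudaMemcpyPeer(buf0, 0, buf1, 1, 1 << 20);

    // With peer access enabled, a kernel on GPU 0 could also simply
    // dereference buf1; memory requests get routed to GPU 1.
    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```

If that path works on Turing hardware generally, a GeForce-side restriction on combining framebuffers would indeed be driver policy rather than silicon, though that remains to be confirmed.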
H83:

First of all, very nice article; I managed to understand most of it... Can't wait for the review! That Duke card is really a looker, although my Duke card looks even better. Too bad MSI managed to ruin the looks of the Gaming version... Nvidia's FE cards look a lot like the cards from Palit, at least to me. Also, $79 for an SLI bridge??? Nvidia needs to stop copying Apple's business practices...
You clearly haven't seen how much they cost for the GV100 Quadro.
Reddoguk:

I personally think these first RTX cards will be a flop. They will sell, but people have already realized that these cards won't push out the performance they expected with RT on. Real-time ray tracing might be a cool new feature, but the hardware to run that type of tech is not here yet. I don't care much about ray tracing if you can only hit 30-60 fps at 1080p. 🙁
*yawn* Is that why pretty much every retailer is sold out of 2080 Ti pre-orders? Try not to confuse wishful thinking with reality, or you'll be joining the other naysayers who are too busy eating crow to comment.
Andrew LB:

*yawn* Is that why pretty much every retailer is sold out of 2080 Ti pre-orders? Try not to confuse wishful thinking with reality, or you'll be joining the other naysayers who are too busy eating crow to comment.
This is Nvidia we are talking about; they could easily have created a false sense of demand by making only a small number of pre-order units available.
Great article Hilbert! I really appreciate your expert involvement in assessing these new graphics cards.
Andrew LB:

*yawn* Is that why pretty much every retailer is sold out of 2080 Ti pre-orders? Try not to confuse wishful thinking with reality, or you'll be joining the other naysayers who are too busy eating crow to comment.
Any product can easily sell out with very low initial stock. There is a difference between selling out of 1,000,000 units and selling out of only 20 units.
Fox2232:

DLSS:
- take the angle at which two edges intersect
- take the color information at the edge
- load the resulting pixel values from a database
Where did you get this information from?
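Taken literally, the quoted steps would amount to an edge-angle-and-color lookup, roughly like the hypothetical sketch below. To be clear: this only renders the (unsourced) claim in code, every name in it is made up, and NVIDIA's own material describes DLSS as a trained neural network rather than a database lookup.

```cpp
// A literal rendering of the three quoted steps, purely to make the
// claim concrete. NOT a confirmed description of DLSS; all names here
// are hypothetical.
#include <unordered_map>
#include <cstdint>

struct Pixel { uint8_t r, g, b; };

// Hypothetical "database": keyed by quantized edge angle and edge color.
using EdgeKey = uint32_t;
std::unordered_map<EdgeKey, Pixel> edge_database;

EdgeKey make_key(float angle_deg, Pixel edge_color) {
    // Step 1: quantize the angle at which the two edges intersect.
    uint32_t a = static_cast<uint32_t>(angle_deg / 5.0f) & 0x3F;
    // Step 2: quantize the color information at the edge (4 bits/channel).
    uint32_t c = (edge_color.r >> 4) << 8 | (edge_color.g >> 4) << 4
               | (edge_color.b >> 4);
    return a << 12 | c;
}

// Step 3: "load the resulting pixel values from a database".
Pixel reconstruct(float angle_deg, Pixel edge_color) {
    auto it = edge_database.find(make_key(angle_deg, edge_color));
    return it != edge_database.end() ? it->second : edge_color;  // fallback
}
```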