Rumor: AMD's Navi 21 GPU has a 255W TGP and boosts up to 2.4 GHz
AMD will reveal its first RDNA 2 based Radeon products on October 28th. However, new specifications of the AMD Navi 21 XT "Big Navi" GPU for the Radeon RX 6900 XT graphics card have now been reported by a credible source. Based on the reported specs, the AMD Radeon RX 6900 XT looks to be a beast of a graphics card with high GPU clocks.
In his tweet, Patrick Schur claims that the AMD Navi 21 XT GPU, supposedly the "Big Navi" that enthusiast gamers have been waiting for, will be used in the Radeon RX 6900 XT graphics card; there are also reports that a faster XTX variant exists. The Navi 21 XT GPU will reportedly be clocked at 2.4 GHz, the highest ever for a desktop graphics card, which also suggests we can expect even higher turbo frequencies. With recent AMD cards, the rated boost clock and the actual gaming frequency are two different things, so we're not quite sure what to make of that.
The GPU is also said to have a TGP (total graphics power) of 255W. That figure covers the GPU itself, not the card as a whole. For comparison, the GeForce RTX 3080 has a TDP of 320W and the GeForce RTX 3070 a TDP of 220W. The graphics card is said to come with 16GB of GDDR6 memory, which also matches several previous leaks. As for benchmarks, performance reportedly lands very close to that of the NVIDIA GeForce RTX 3080. AMD will officially introduce its family of Radeon RX 6000 series graphics cards on October 28.
Rumor: AMD Radeon Big Navi has 16GB VRAM (NAVI21) and 12GB for NAVI22 - 09/22/2020 06:01 PM
AMD Big Navi is around the corner, and as we get closer to its announcement the company would like some momentum. More rumors are spreading across the spiders' cyber web right now. AMD would be announcing two Big...
Rumor: Ampere-based GeForce RTX 3000 series might arrive earlier than expected - 07/23/2020 09:43 AM
Anything regarding this product series is kept under a tight leash by NVIDIA; however, it might be coming sooner than expected. A French overclocking site reports that the announcement and release date...
Rumor: AMD allegedly to release seven NAVI GPUs but faces efficiency issues - 05/06/2019 09:20 AM
YouTube channel AdoredTV discussed alleged specifications of AMD's upcoming Navi GPUs in a video. It is mentioned that AMD will release the GPUs as a Radeon RX 30xx series, with the Radeon RX 3090 X...
Rumor: AMD Seeds Board Partners with Ryzen 3000 Samples - Run at 4.5 GHz and Show 15% Extra IPC - 04/29/2019 04:52 PM
Well, loads of AMD-related news today. Earlier today, in this news item, we discussed the X570 chipset; that same source mentions some more spicy info on their website. Apparently, AMD Ryzen samp...
Rumor: AMD Epyc2 processors could get 64 cores over 8+1 dies - 10/31/2018 02:48 PM
That AMD has been going insanely strong with many-core processors is no surprise; you've read all our Threadripper reviews and have learned that the top-tier processors (e.g. 32-core versions) hav...
Senior Member
Posts: 13799
Joined: 2004-05-16
Nobody was forced to buy Turing, as you can see in the GPU stats in this very thread (dominated by nV users).
The same way nobody was forced to buy the Radeon VII, with its 4 stacks of HBM2 totaling 16GB of VRAM on 7nm, meant mainly for productivity.
1 TFLOP of FP32 per 1B transistors is not exactly bad considering the release date. And on that metric, nVidia has only now outmatched it with Ampere.
Gaming-wise it was dead on arrival, so that's likely the main reason people do not bring it into gaming comparisons. I do not expect you to bring up CDNA with the same argument later.
Or Threadripper CPUs, meant for a similar segment, into gaming comparisons. Sure, one can be interested in seeing how they do, but saying that productivity silicon is a flop for gaming... a bit too much.
And as for profit margin, that's another misread piece of information. They got back more money per sold card for the same investment than before. They simply asked too much, and sales did not make sufficient returns to offset R&D + operations + ... in comparison to previous results. (Still a damn high profit.)
They could have made 10% more by selling a much higher volume at a much nicer price. But then they would sacrifice future sales in the process. It was all calculated, and nVidia knew exactly what they were doing.
Gaming-wise, it was a stopgap to introduce a bunch of paradigm-shifting technologies and lay the groundwork for the future. Obviously it was all calculated and Nvidia knew what it was doing. That's my entire point in the later posts in this thread. Ampere just builds off that, and it seems like AMD is going to land right in the same ballpark performance/cost-wise, despite not having dedicated silicon.
I just don't see how Ampere is the Turing formula. Turing was a pause in performance for a massive feature gain. Ampere is just building off that feature gain, with a 40% performance increase at Pascal prices. You're not going to get 80%+ performance gains anymore. It just isn't going to happen. Even if Nvidia was on TSMC's 7nm, they wouldn't get that kind of gain.
Senior Member
Posts: 5802
Joined: 2003-09-15
As far as I'm concerned, years ago JHH promised they were working towards real-time raytracing. They're delivering on that promise. It's early days, but all the major game engines are ready to deliver as well.
Tell me, what's AMD/Ati's vision that they've delivered on? If it's Fusion, then you might as well be a next-gen console gamer.
Senior Member
Posts: 11809
Joined: 2012-07-20
Gaming-wise, it was a stopgap to introduce a bunch of paradigm-shifting technologies and lay the groundwork for the future. Obviously it was all calculated and Nvidia knew what it was doing. That's my entire point in the later posts in this thread. Ampere just builds off that, and it seems like AMD is going to land right in the same ballpark performance/cost-wise, despite not having dedicated silicon.
I just don't see how Ampere is the Turing formula. Turing was a pause in performance for a massive feature gain. Ampere is just building off that feature gain, with a 40% performance increase at Pascal prices. You're not going to get 80%+ performance gains anymore. It just isn't going to happen. Even if Nvidia was on TSMC's 7nm, they wouldn't get that kind of gain.
I remember you saying that you care little about power draw as long as performance per watt is good. Ampere, for example, did not really improve much on that front over Turing.
While I see the Ampere architecture as having good and desirable changes, it certainly suffers from something. Maybe it is Samsung's "8nm", maybe something else.
But I am sure about a few things. And they start with the certainty that people around here are not sure about anything regarding RDNA2. Every single rumor which cites a "credible leaker" is either completely fake for views or an echo of a fake. I've seen the same people flip back and forth between "RDNA2 = failure" and "nVidia is dead".
Those people have no sources; they read something from videocardz or another site and spend time making a video around it. Or they fake it, as this is the best time to get views and subscribers.
And people repeat it. Anyone worth their job would rather bite their lip than go and correct wrong statements ahead of schedule. (That's why there is such radio silence.)
I go with confirmed information, and that's the data flowing from the PS5 and XSX. From there it is about doing estimations. Apparently very inaccurate ones, since getting the console GPU's power draw wrong by even 10% can change the resulting performance-per-watt estimate for the desktop GPU by as much as 20%.
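A quick back-of-the-envelope sketch of one plausible way that amplification happens, with entirely made-up numbers (not actual console figures): the console GPU's power is typically inferred by subtracting an estimated rest-of-system share from the measured total, so a small error relative to total power becomes a much larger relative error in the derived perf/W:

# Illustrative only: hypothetical numbers showing how a small error in the
# console power split inflates in the derived performance-per-watt estimate.

console_total_w = 200.0    # measured total console power (hypothetical)
rest_of_system_w = 90.0    # estimated CPU/RAM/SSD/fans share (hypothetical)
perf_units = 100.0         # arbitrary performance score for the console GPU

gpu_w = console_total_w - rest_of_system_w   # 110 W inferred for the GPU
perf_per_watt = perf_units / gpu_w           # baseline estimate

# Suppose the rest-of-system guess was off by 20 W (just 10% of total power):
gpu_w_actual = console_total_w - (rest_of_system_w - 20.0)   # 130 W
perf_per_watt_actual = perf_units / gpu_w_actual

error = (perf_per_watt - perf_per_watt_actual) / perf_per_watt_actual
print(f"perf/W estimate off by {error:.0%}")   # ~18%, in the ballpark of 20%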
And it may happen that AMD has cards competing with the 3080 here and there. But the question is at what power draw and at what price. The 3080 still uses the same big GPU as the 3090.
AMD does not need 28B transistors (and likely has a smaller die) to reach the same performance, thanks to clocks. And if any of those rumors are true, they do not need the extra memory cost either.
Sure, given nVidia's current supply situation, they will confidently price a similarly performing card at a similar price. But there will still be a power draw difference at that performance level.
Senior Member
Posts: 13629
Joined: 2018-03-21
Steam surveys are not about market share, and never have been.
Since the survey is not enforced and is not done periodically on the same day every year, the only thing it measures is that people have recently been buying X products and running Y edition of an operating system in NEW installs.
The share of older generations of CPUs and Windows versions is artificially depressed, and the apparent share of new CPUs in use is inflated.
Yes.... because AMD haven't been pulling the same shite for over a decade.....
"You don't need more than 4GB with HBM"
"These games have tessellation set too high" (no, those games use a tessellation factor the specification allows, and your chips can't handle it because of poor warp design choices)
Senior Member
Posts: 11809
Joined: 2012-07-20
Cheaper manufacturing process, but they were also massive cards. Their profit margins didn't significantly increase over the last two years - in fact, they actually dipped. $700 for a 40% performance increase on today's nodes isn't a bad improvement, regardless of previous-generation pricing.
Keep in mind the VII was also $700 for a card that went defunct in a year, but for some reason everyone writes that off as a one-time event or something.
Nobody was forced to buy Turing, as you can see in the GPU stats in this very thread (dominated by nV users).
The same way nobody was forced to buy the Radeon VII, with its 4 stacks of HBM2 totaling 16GB of VRAM on 7nm, meant mainly for productivity.
1 TFLOP of FP32 per 1B transistors is not exactly bad considering the release date. And on that metric, nVidia has only now outmatched it with Ampere.
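For reference, a rough sanity check of that ratio using public spec-sheet numbers (peak boost-clock FP32 throughput and transistor counts, so treat the figures as approximate):

# Approximate spec-sheet values: peak FP32 TFLOPS, transistor count (billions).
cards = {
    "Radeon VII (Vega 20)": (13.8, 13.2),
    "RTX 2080 Ti (TU102)":  (13.4, 18.6),
    "RTX 3090 (GA102)":     (35.6, 28.3),
}

for name, (tflops, billions) in cards.items():
    print(f"{name}: {tflops / billions:.2f} TFLOPS per 1B transistors")

# Radeon VII lands at ~1.05, Turing's TU102 sits below that, and only
# Ampere's GA102 clearly pulls ahead at ~1.26.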
Gaming-wise it was dead on arrival, so that's likely the main reason people do not bring it into gaming comparisons. I do not expect you to bring up CDNA with the same argument later.
Or Threadripper CPUs, meant for a similar segment, into gaming comparisons. Sure, one can be interested in seeing how they do, but saying that productivity silicon is a flop for gaming... a bit too much.
And as for profit margin, that's another misread piece of information. They got back more money per sold card for the same investment than before. They simply asked too much, and sales did not make sufficient returns to offset R&D + operations + ... in comparison to previous results. (Still a damn high profit.)
They could have made 10% more by selling a much higher volume at a much nicer price. But then they would sacrifice future sales in the process. It was all calculated, and nVidia knew exactly what they were doing.