Newegg is listing Radeon RX 6700 XT, 6800 XT and 6900 XT specs in its blog

If we do a little math: the 6700 XT has the same SP count as the 5700 XT but a narrower memory bus, and there is no logic in releasing a next-gen card with less performance than the current-gen card, so the architecture changes should/must compensate for that difference. So if we assume the 6700 XT has the same performance as the 5700 XT (which might be logical, given that the 6700 XT will have RT while the 5700 XT doesn't), then comparing the 6700 XT to the 6900 XT we are seeing double the SP count and a 33% increase in memory bandwidth. That should roughly translate to 70% more performance over the 6700 XT/5700 XT, which would put the 6900 XT directly against the 3080 at WQHD resolution. The above logic only makes sense if these specs are correct, which I highly doubt; I believe, as the other guys said, that these are just placeholders.
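A quick sketch of that back-of-envelope estimate in Python, using only the ratios claimed in the comment (leaked, unconfirmed numbers) and a deliberately naive scaling model:

    # Naive scaling model for the leaked specs: assume performance scales
    # with the geometric mean of compute and memory bandwidth, i.e. the
    # card is partly limited by each. All ratios are from the leak.
    sp_ratio = 2.0    # 6900 XT vs 6700 XT/5700 XT: double the SP count
    bw_ratio = 1.33   # ~33% more memory bandwidth

    est = (sp_ratio * bw_ratio) ** 0.5
    print(f"estimated uplift: {est:.2f}x")  # ~1.63x, near the ~1.7x claimed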
itpro:

Nvidia fanboys praying all over the net for the RTX 3080 to beat the 6900 XT are gonna be harshly disappointed. Show some open-mindedness towards competition for once; your egos are outrageous. If it were up to you, you wouldn't allow the 6900 to beat even the 2080 Ti. Blindfolded kids are everywhere on this planet these days.
Don't be silly. I think most Nvidia users are really hoping that AMD delivers a kick-ass card that beats the 3080. An even playing field restores sensible pricing, which obviously benefits everyone. I would love for AMD to do to Nvidia what they did to Intel.
MonstroMart:

Why? Outside of the driver issues, the 5700 XT did not disappoint at all. I mean, it's damn close to a 2070 Super, and in Canada it's actually less expensive than the 2060 Super. If you are looking for a $300-500 GPU upgrade, then going by the 5700 XT there's no reason to expect AMD to disappoint, unless the driver issues come back; for now my 5700 XT has been stable since last spring.
Hehe, but I didn't say exactly who would be disappointed 🙂 it's just that you can't please everyone ;-)
Anyway... bring it on. Still better to have these cards than the malfunctioning 10 GB 3080, which you can't even buy. I need AMD to release this stuff already. I will not buy an Nvidia card no matter what.
SpajdrEX:

Hehe, but I didn't say exactly who would be disappointed 🙂 it's just that you can't please everyone ;-)
Yeah, if it's a broken piece of hardware like the 3080, many will be disappointed.
half_empty_soul:

Sadly it doesn't look promising: RTX 3070 performance at most for the 6900 XT.
bernek:

I'm quite sad about this "news". AMD won't be able to beat NVIDIA and will be a generation behind, like most of the time. So expect a real competitor for the 2080 Ti and maybe a 10-15% increase here and there... If the prices are right (and I think they will be), my next video card will be AMD again.
How? The 6900 XT is TWICE the size of the 5700 XT, with a 30% higher clock!
OH I AM "VERY EXCITE!!!!"
Abd0:

If we do a little math: the 6700 XT has the same SP count as the 5700 XT but a narrower memory bus, and there is no logic in releasing a next-gen card with less performance than the current-gen card, so the architecture changes should/must compensate for that difference. So if we assume the 6700 XT has the same performance as the 5700 XT (which might be logical, given that the 6700 XT will have RT while the 5700 XT doesn't), then comparing the 6700 XT to the 6900 XT we are seeing double the SP count and a 33% increase in memory bandwidth. That should roughly translate to 70% more performance over the 6700 XT/5700 XT, which would put the 6900 XT directly against the 3080 at WQHD resolution. The above logic only makes sense if these specs are correct, which I highly doubt; I believe, as the other guys said, that these are just placeholders.
You forgot the +30% higher clock speed for the 6900 XT and +45% for the 6700/6800 over the 5700 XT.
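Folding that claimed clock uplift into the same naive geometric-mean model sketched earlier in the thread (again, leaked ratios only, nothing confirmed):

    # Extend the geometric-mean sketch with the claimed clock uplift.
    def uplift(sp_ratio, clock_ratio, bw_ratio):
        compute = sp_ratio * clock_ratio      # shader throughput scaling
        return (compute * bw_ratio) ** 0.5    # balanced against bandwidth

    # 6900 XT vs 5700 XT: 2x SPs, +30% clock, +33% bandwidth (per the leak)
    print(f"6900 XT vs 5700 XT: ~{uplift(2.0, 1.3, 1.33):.2f}x")  # ~1.86x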
deusex:

So... according to those specs AMD won't even be able to beat the 3080? Ouch.
How can we judge the specs? They are just raw numbers, and Ampere proved that they don't mean much in the end; this could go one way or the other. Look at the 3080: it has double the CUDA core count of the 2080 Ti and slightly better clocks, yet it manages to beat the 2080 Ti by only ~30%, which suggests something is bottlenecking all those transistors (maybe games aren't prepared for this yet, either?). The jump is huge from a GTX 1080 perspective, but the 3080 has more than three times the CUDA cores and still manages "only" 100-110% better performance, which sounds odd to me. I don't expect it to scale 100% with the CUDA core count, but it is scaling less than expected, and we're not talking about SLI here, it's all one card. It should scale better than this, no? It could be node/power related of course, but it's still not scaling as expected. Sometimes big numbers mean nothing.
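A rough illustration of that scaling gap, using the public spec-sheet core counts and boost clocks (the ~30% and ~100-110% in-game uplifts are the figures from the comment above, not fresh measurements):

    # Paper FP32 throughput vs. the in-game uplifts quoted above.
    cards = {
        "GTX 1080":    (2560, 1.733),  # CUDA cores, boost clock (GHz)
        "RTX 2080 Ti": (4352, 1.545),
        "RTX 3080":    (8704, 1.710),
    }

    def tflops(cores, ghz):
        return cores * 2 * ghz / 1000  # 2 FP32 ops per core per clock (FMA)

    base = tflops(*cards["RTX 2080 Ti"])
    for name, (cores, ghz) in cards.items():
        print(f"{name}: {tflops(cores, ghz):5.1f} TFLOPs, "
              f"{tflops(cores, ghz) / base:.2f}x vs 2080 Ti")

    # The 3080 comes out ~2.2x the 2080 Ti on paper but only ~1.3x in
    # games: Ampere dual-issues FP32, so the "doubled" cores share
    # scheduling, caches and bandwidth rather than scaling linearly.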
Lol at people saying "it's not even winning over the 3080"; there is nothing else out there to win against. The 3090 is a placeholder for rare use cases, not a product you should compare with. Whatever these cards need to win against, they just need good pricing for the category they fall into.
deusex:

So... according to those specs AMD won't even be able to beat the 3080? Ouch.
Honestly, knowing how high AMD is going to clock these (going off the PS5), the top card should be around 3080 speed.
Navi 21: 80 CUs, 2.2 GHz boost clock and 22.5 TFLOPs of compute
Navi 22: 40 CUs, 2.5 GHz boost clock and 12.8 TFLOPs of compute
do the math boys 🙂
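Doing the math: the quoted TFLOPs figures check out, assuming RDNA's 64 stream processors per CU and two FP32 ops (one FMA) per SP per clock:

    # CUs * 64 SPs * 2 FP32 ops (FMA) * boost clock = paper TFLOPs
    def tflops(cus, boost_ghz):
        return cus * 64 * 2 * boost_ghz / 1000

    print(f"Navi 21: {tflops(80, 2.2):.1f} TFLOPs")  # 22.5
    print(f"Navi 22: {tflops(40, 2.5):.1f} TFLOPs")  # 12.8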
Not only are those specs wrong, I'm unsure why so many people still think a base clock means anything. AMD's base of 1550/1600 MHz last gen was reaching around 1900 MHz, sometimes more. As for the memory, unless AMD has a trick that is more than just cache-based, I highly doubt they will release their mid-tier card with 6 GB. I would be interested to know whether AMD will use a cache to bypass throughput needs, but the only reason they would go that route is for what's next, which is Infinity Fabric based GPUs; you would need a cache with them. It would be funny if the 80 CU part were just 2 x 40 CU bridged with Infinity Fabric, with a unifying cache for latency purposes.
I want to believe
ACEB:

As for the memory, unless AMD has a trick that is more than just cache-based, I highly doubt they will release their mid-tier card with 6 GB.
I'm surprised none of the AMD fans have jumped on this. If this was an Nvidia rumour...
Yeah, I want to believe too. What pushed me to Nvidia lately was simple: I had a Radeon VII; it had its flaws, but it was mostly non-problematic. Driver works = all works. Driver fails (i.e. some black screens, OC fail), roll back to the previous one and wait for a new one. The latest I was using was some WSL version, very good. But for some freaking reason they introduced a power-saving feature that I could not change or disable... So, going through an asteroid belt in Elite Dangerous, my FPS drops from the 200 area to 35, while GPU usage sits at 30% and the clocks in the 800-1100 MHz area, where the normal values were 90-100% and 1700-1900 MHz everywhere else except the damn asteroid fields... no matter whether it was empty, crowded, whatever.
GREGIX:

Yeah, I want to believe too. What pushed me to Nvidia lately was simple: I had a Radeon VII; it had its flaws, but it was mostly non-problematic. Driver works = all works. Driver fails (i.e. some black screens, OC fail), roll back to the previous one and wait for a new one. The latest I was using was some WSL version, very good. But for some freaking reason they introduced a power-saving feature that I could not change or disable... So, going through an asteroid belt in Elite Dangerous, my FPS drops from the 200 area to 35, while GPU usage sits at 30% and the clocks in the 800-1100 MHz area, where the normal values were 90-100% and 1700-1900 MHz everywhere else except the damn asteroid fields... no matter whether it was empty, crowded, whatever.
The RVII and Vega needed the P7 state set as the minimum speed in a custom game profile for ESO, Elite and a few others. Otherwise they operate at minimum speed.
GREGIX:

Yeah, I want to believe too. What pushed me to Nvidia lately was simple: I had a Radeon VII; it had its flaws, but it was mostly non-problematic. Driver works = all works. Driver fails (i.e. some black screens, OC fail), roll back to the previous one and wait for a new one. The latest I was using was some WSL version, very good. But for some freaking reason they introduced a power-saving feature that I could not change or disable... So, going through an asteroid belt in Elite Dangerous, my FPS drops from the 200 area to 35, while GPU usage sits at 30% and the clocks in the 800-1100 MHz area, where the normal values were 90-100% and 1700-1900 MHz everywhere else except the damn asteroid fields... no matter whether it was empty, crowded, whatever.
Do you use a HOTAS with Elite? Sounds like an issue I have... Chill doesn't detect a HOTAS as an input, so it downclocks (Chill only recognises mouse, keyboard and XInput). The way around it is to set the min FPS to the same as the max FPS in Chill.
Fediuld:

The RVII and Vega needed the P7 state set as the minimum speed in a custom game profile for ESO, Elite and a few others. Otherwise they operate at minimum speed.
And that too
At this point it all comes down to how well ray tracing works on them.
There has been a lot of (predictable) fanboy response. I own both AMD and Nvidia cards and stock. The Nvidia Ampere series is amazing for what it does with its uArch on an older production node, but because of the massive die size Nvidia has (and will continue to have) lower yields, which keeps production and retail prices high. Which, for the "price no object" guys, is just fine. AMD, on the other hand, has a lower cost of production at a smaller node, which increases yield and lowers price. To be anywhere in the discussion with Nvidia's top tier is a massive win for AMD and the consumer.

My opinion is the 6900 XT will end up being equivalent to the 3080, but at a drastically lower price (although expensive for AMD) and with better supply. The 3090 will continue as a halo product in short supply and with low sales until the economy improves. The problem will continue with crypto miners, which ironically may be a lifeline for some AIB partners (not being stuck with inventory). But this is a difficult time in real life for most of the world; demand is dramatically lower at a really bad time for AIB partners. Early adopters and professional gamers will look sleek and deluxe, but way more people will desire these cards (including AMD's) and hold off for better times.