New AMD Radeon graphics card PCB photos with GDDR6 memory - NAVI Spotted?


Dude, learn some manners 😀 One would say... that escalated quickly .)
Here's the thing: the Fury X has double the memory bandwidth of the RX 580, but is only around 20% faster. The Radeon VII has double the memory bandwidth of the Vega 64 and is again only around 20% faster. Now, 256-bit GDDR6... depending on clock, that is up to 512GB/s theoretical maximum bandwidth, which would be double the RX 580 and theoretically enable double the performance... if the GPU is strong enough to actually use that bandwidth. More realistically, the memory clock will be lower and bandwidth will be around 420~460GB/s, as AMD is probably not aiming for the most expensive memory chips.

Is this Navi card aiming at 1.7~2x the performance of the RX 580? Maybe, but there is quite some chance it will only be 1.5x as fast, and AMD simply designed both GPU and PCB from experience with Polaris, where final cards have a good 40% higher clock but are held back by memory bandwidth limitations. And in the best-case scenario this card is going to have a better IMC, caches in the CUs, and improved memory command queuing... enabling 10~20% higher achievable GPU performance at the same bandwidth as Polaris.

My guess from this is:
- Worst-case scenario: Vega 56 performance (unlikely, as it would require a regression in technology or a small GPU with only 6~7B transistors... too cheap a GPU for such expensive memory)
- Expected performance: RTX 2070 +-5% (very likely)
- Optimistic situation: RTX 2080 +-5% (unlikely)
- Great design choices but higher price: RTX 2080 +10~15% (highly unlikely)

This PCB basically has RTX 2070/2080 memory bandwidth potential. But I do not exactly see AMD planning a GPU with more than 12 billion transistors, as they likely want those cards to have price impact. It may well be a smaller GPU, like 8.5~9B transistors, utilizing a higher clock (~1.7GHz), which would match the performance improvement from the RX 580 to the RTX 2070.
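As a quick sanity check of the bandwidth math above: peak GDDR6 bandwidth is just bus width times per-pin data rate. A minimal sketch, with the data rates assumed from common GDDR6 speed grades rather than taken from the post:

```python
# Peak theoretical GDDR6 bandwidth: (bus width in bits / 8) * per-pin data rate in Gbps.
def gddr6_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical peak bandwidth in GB/s for a GDDR6 bus."""
    return bus_width_bits / 8 * data_rate_gbps

# Common GDDR6 speed grades (Gbps per pin) -- assumed, not from the post.
for rate in (12, 14, 16):
    print(f"256-bit @ {rate} Gbps: {gddr6_bandwidth_gbs(256, rate):.0f} GB/s")
# 256-bit @ 12 Gbps: 384 GB/s
# 256-bit @ 14 Gbps: 448 GB/s
# 256-bit @ 16 Gbps: 512 GB/s
```

The 14 Gbps case lands in the 420~460GB/s window the post expects, and 16 Gbps gives the 512GB/s ceiling.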
Clawedge:

http://i.imgur.com/JQw5TgG.png I know it's just the PCB, but the pressure has been building up for so long, it just came.....out.
Lmao! Best post I have ever seen on the Guru3d forums!
I wanted to edit, but I accidentally deleted my previous post. 😕 Regarding the RTX line, it depends; the RTX 2060, for example, has similar performance to the Vega 64 and also a similar price. It's not Nvidia's fault that, even with heavily cut-down chips, it can perform equal to or better than AMD cards. At most, that says a lot about AMD's inability in recent years. Nvidia, being the dominant brand on the market, has no interest at this point in being disruptive in pricing.
DW75:

Also, let's add more to this. The RTX cards are using step-down or cut-down chips across most of the lineup. The GTX 1650 is using a TU117 core. This is an ultra-low-end piece of trash that should be used in an XX30 card, yet it is used in a card costing $150-180 US. The GTX 1660 and GTX 1660 Ti are using a TU116, which is a low-end chip that should be used in XX50-level cards. Nvidia is charging midrange prices for these two cards. The RTX 2060 is actually using a TU106, yet it is a heavily cut-down core. The RTX 2070 is using a fully enabled TU106, yet Nvidia decides to charge double the cost for it compared to the previous-gen GP106 used in the GTX 1060. The RTX 2080 has a TU104, yet this is also cut down, and they are charging more once again. Of course, let's not forget the RTX 2080 Ti, which costs at least $400 more than it should for the performance it offers. Yes, the entire lineup is a complete rip-off, and so is the Radeon VII.
It is just a name, and therefore not exactly objective. Yes, nVidia seemingly assigned a lower-tier chip to a higher-tier card and asked a higher price, but here is another way to look at it... the number of transistors a card in a given tier has per $ (MSRP), as Old.Gen => New.Gen : Old.TransPer$[Millions] => New.TransPer$[Millions]:

1060 => 2060 : 14.72 => 30.95
1070 => 2070 : 19 => 21.64
1080 => 2080 : 12 => 19.46
1080Ti => 2080Ti : 16.88 => 18.62

There are apparently cut-down chips in there, which skews the perspective a bit, but in each performance tier nVidia delivered a bit more transistors per $ they ask for the final card. Yes, they are mostly invested into new features, some of which are still vaporware, some proven to be worse than expected, and some borderline marketing tools. But here is a normalization to full chips, as both lineups have quite a few cut-down variants filling the ranks:

1060 => 2070 : 14.72 => 21.64
1080 => 2080 : 12 => 19.46
1080Ti => 2080Ti : 16.88 => 18.62

And here are full chips of similar transistor count, normalized to the same transistor count; the multiplier shows how many more transistors the new gen delivers per $:

1060 => 1650Ti : 1.56x (if the 1650Ti gets a $219 MSRP)
1080 => 1660Ti : 1.81x
1080Ti => 2070 : 1.17x (yes, the 1080Ti's transistor count is closer to the 2070's than to the 2080's)

Yes, people here do not want a 1660Ti instead of a 1080, as the latter is 20% faster... where did all those TMUs/ROPs go on a same-sized GPU? Into FP16 capability, like in Vega. Normalizing the numbers above to performance as well:

1080 => 1660Ti : 1.64x
1080Ti => 2070 : 1.19x

I would say that from nVidia's technological and economical point of view, they delivered more, and cheaper. But on our side, we really do not care about nVidia wasting transistors, as AMD did. Clients want an upgrade, or at least a sidegrade that is quite a bit cheaper. Clients want higher performance per $ without having to downgrade. And that did not happen. In this metric, a stock 2070 has a mere 2% higher performance per transistor than a 1080Ti. The 2070 has a lower TDP and delivers more when OCed, but there are 2 bigger full GPUs above it with 215W and 250W TDPs. (And invading their power draw range out of the box would make the 2070 unattractive from this perspective.) And the 1660Ti would be a downright downgrade in terms of performance per transistor, as it is 16% slower than the 1080 but has only 8% fewer transistors. (That's around 10% higher performance per transistor on the 1080's side. But the 1080's gain ate like 50% more power... not a Turing improvement, but manufacturing technology.)

TL;DR: Turing is an improvement on the technological level over Pascal. Those cards do deliver better performance per watt and better performance per $. They are not as attractive due to not-very-good performance per transistor, which more or less stayed the same while nVidia added 2 bigger GPUs above the 1080Ti. This resulted in a bigger spread of target TDPs across the lineup compared to older cards of the same transistor count. Which is on one hand good, but goes to show that we really do not care that much about power draw outside of "AMD vs. nVidia" arguments.
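For anyone who wants to reproduce the transistors-per-dollar figures above, here is a minimal sketch. The transistor counts (billions) and launch MSRPs (USD) are the published figures the post's numbers appear to be based on; treat them as assumptions:

```python
# Transistor count (billions) and launch MSRP (USD) per card -- published figures,
# assumed to be what the post used, since the results match its numbers.
cards = {
    "GTX 1060":    (4.4,  299),
    "GTX 1070":    (7.2,  379),
    "GTX 1080":    (7.2,  599),
    "GTX 1080 Ti": (11.8, 699),
    "RTX 2060":    (10.8, 349),
    "RTX 2070":    (10.8, 499),
    "RTX 2080":    (13.6, 699),
    "RTX 2080 Ti": (18.6, 999),
}

for name, (transistors_bn, msrp_usd) in cards.items():
    per_dollar = transistors_bn * 1000 / msrp_usd  # millions of transistors per $
    print(f"{name}: {per_dollar:.2f}M transistors per $")
```

Running it gives 14.72 for the 1060, 30.95 for the 2060, 21.64 for the 2070, and so on, matching the table.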
If you need any more nails in the coffin of AdoredTV's fake "leaks", here they are. If the supposed "Radeon RX 3080" has a TDP of 150W, why would it have two 8-pin connectors and VRMs on par with the reference Vega? It wouldn't.
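For context on that objection, the PCIe power limits make the math simple: the slot supplies up to 75W, a 6-pin connector 75W, and an 8-pin connector 150W. A small sketch of the budget each layout allows:

```python
# Spec-limited board power budget: PCIe slot (75 W) plus auxiliary connectors
# (6-pin = 75 W, 8-pin = 150 W per the PCIe specification).
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def max_board_power(six_pins: int = 0, eight_pins: int = 0) -> int:
    """Maximum spec-compliant power draw in watts for a connector layout."""
    return SLOT_W + six_pins * SIX_PIN_W + eight_pins * EIGHT_PIN_W

print(max_board_power(eight_pins=2))              # 375 W -- what this PCB provides for
print(max_board_power(six_pins=1, eight_pins=1))  # 300 W
print(max_board_power(eight_pins=1))              # 225 W -- ample for a 150 W TDP card
```

A 150W-TDP card sitting on a 375W power budget would be a very odd design, which is the point being made.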
Looks like an engineering/debugging board to me; it would have a lot of extra stuff that isn't needed for production.
Yakk:

Looks like an engineering/debugging board to me; it would have a lot of extra stuff that isn't needed for production.
Or it could really just be the reference board for the top 3 models. The top model may be around 250W, then a weaker model at 205W, and the weakest model on this PCB may be 160W. Those may be based on the same chip, just cut down for the lower variants. And in a similar way, memory may be 6/7GB on the lower models, in the same fashion as nVidia leaves unused memory pads on its PCBs. Some VRMs may remain unused on the lower models too.

There are many possibilities. And with all the unknowns, we do not really know what AMD is cooking. There seems to have been a somewhat unsuccessful tapeout at the end of last year, but by now they have had enough time to get things in order and do 2 more. We do not even know if it is still on the older 7nm process or on the newer one. I personally like to speculate, but only for the fun of thinking over all the variables which are unknown and where they may end up. But for myself, I know that even information provided by AMD themselves may be outdated, and plans may have changed now that we are much closer to the PS-next release with something like a 10~14 TFLOPs GPU. Now, with the date set to the 7th of July, we may get to see many more GPUs than AMD could have introduced in January.
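A quick illustration of the cut-down memory configurations speculated above, assuming 8Gb (1GB) GDDR6 chips with one 32-bit channel each, which is the usual way such a PCB is populated:

```python
# Capacity and bus width from the number of populated GDDR6 pads,
# assuming 8 Gb (1 GB) chips on a 32-bit channel each.
CHIP_GB, CHANNEL_BITS = 1, 32

for chips in (8, 7, 6):  # fully populated board vs. cut-down variants
    print(f"{chips} chips -> {chips * CHIP_GB} GB on a {chips * CHANNEL_BITS}-bit bus")
# 8 chips -> 8 GB on a 256-bit bus
# 7 chips -> 7 GB on a 224-bit bus
# 6 chips -> 6 GB on a 192-bit bus
```

That is how a 6GB or 7GB lower model could share this board, with some memory pads simply left empty.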
I'm tired of leaks, rumors and speculation.....give me the card already...
sykozis:

I'm tired of leaks, rumors and speculation.....give me the card already...
you just bought a new card!... how many do you need?... :p
Fox2232:

There are many possibilities. And with all the unknowns, we do not really know what AMD is cooking. There seems to have been a somewhat unsuccessful tapeout at the end of last year, but by now they have had enough time to get things in order and do 2 more. We do not even know if it is still on the older 7nm process or on the newer one.
What are you talking about? There is no evidence that there was supposed to be a tapeout at the end of last year.
mockingbird1:

What are you talking about? There is no evidence that there was supposed to be a tapeout at the end of last year.
Read carefully.
Cool. Waiting for benchmarks anyway.
DW75:

Indeed, the Radeon VII is too expensive as well. If you think I am an AMD fanboy, you are mistaken. I am a fan of fair prices for the performance that is offered. The Radeon VII and the entire RTX lineup are a rip-off.
The Radeon VII is expensive, as you say, but I wouldn't really call it a rip-off, as it's expensive to manufacture. I doubt AMD is making much money from it.
Looking at the board, I'd expect RTX 2070/2080 performance. Price it at $350 and it will disrupt the market.
Moderator
So if it is indeed GDDR6, that means that with Navi they were able to lower power consumption enough to throw on GDDR6 instead of HBM. That's pretty awesome to look forward to.
HybOj:

Dude, learn some manners 😀 One would say... that ejaculated quickly .)
:D
mockingbird2:

If you need any more nails in the coffin of AdoredTV's fake "leaks", here they are. If the supposed "Radeon RX 3080" has a TDP of 150W, why would it have two 8-pin connectors and VRMs on par with the reference Vega? It wouldn't.
To start with, I'd very much like for Navi to be successful for AMD, so the zealots can put away their knives, at least for the time being. And I also have no axe to grind regarding Jim at AdoredTV. *If* this is indeed the PCB for the upcoming, allegedly "mid-range" Navi card, I'm more than just a little apprehensive about this product. Keeping in mind that this is a new-from-jump 7nm GPU, I have to agree with mockingbird2 that it sure looks like there is, at the least, provision for one hell of a lot of VRM-related paraphernalia, as well as the two 8-pin power connectors. Not even simply one 8-pin and one 6-pin, which could also seem a bit much, even considering the choice of VRAM. If the under-load wattage requirement of whatever chip is destined for this card really is in the 250-275W range, I'm hoping that this truly is the PCB for a workstation card, and most definitely not something intended for consumers/gamers.