Rumor: AMD Radeon Big Navi has 16GB VRAM (NAVI21), and 12GB for NAVI22

oh man.... the wait is killing me.... but at least if we get let down again by AMD it will give Nvidia a chance to get the 3080 back in stock lol
I'm on my knees praying to the FPS gods to knock some sense into the Radeon division.. they seem to have gotten it together on the Ryzen front.. Why is it so hard to make decent decisions for the Radeon group also? I mean there is budget now... I'm so afraid we get another 5700XT flop, them saying: "well we are catering to 95% of the market now....." I mean... it better be near or better than 3080 performance or it will really show they don't care about Radeon anymore and just want to sell that Ryzen silicon instead (which financially would make sense with the limited TSMC capacity). If not now then when? They have the node advantage, they have the marketing and cash flow from Zen, the development support from both Microsoft and Sony... They could do it if they wanted to... building a couple of killer cards outclassing the 3080 by a healthy margin and pricing them competitively... the question is.. do they want to.... The halo products sell the cheaper products.. it's just how it is.. If they care about Radeon as a brand they'll go for it, for the long term... We will see.. But I'm not overhyping this in any way... Yes I'm waiting for the release and hoping they get back on track, but I have a feeling I might end up with a 20GB 3080 barn heater because of a lack of alternatives...
CPC_RedDawn:

oh man.... the wait is killing me.... but at least if we get let down again by AMD it will give Nvidia a chance to get the 3080 back in stock lol
Same here. The damn waiting... I'm getting one as soon as I can order it, unless AMD really screws up (hopefully not) 🙂
Ohhh man, I can just feel how the 12GB version mind controls me, I swear it.
freakin sweet if true
They are doing a good job keeping the important info secret. I think even Nvidia is not sure. I really think they will be near the 3080 level.. maybe sneak in a limited-number halo product?
Why rehash the old 5700XT CU count? Kinda pointless, or are they trying to make it look more appealing now with 12GB VRAM... xD
-Tj-:

Why rehash the old 5700XT CU count? Kinda pointless, or are they trying to make it look more appealing now with 12GB VRAM... xD
My thoughts exactly. The 3080 has basically twice the performance of the 5700 XT with 3.4 times the shader cores. The same number of shader cores doesn't make any sense to me. I doubt this can even compete with the 3060/3070. How are they going to improve performance with the same number of cores? By clocking it at 3000 MHz?
Cidious:

I'm on my knees praying to the FPS gods to knock some sense into the Radeon division.. they seem to have gotten it together on the Ryzen front.. Why is it so hard to make decent decisions for the Radeon group also? I mean there is budget now... I'm so afraid we get another 5700XT flop, them saying: "well we are catering to 95% of the market now....." I mean... it better be near or better than 3080 performance or it will really show they don't care about Radeon anymore and just want to sell that Ryzen silicon instead (which financially would make sense with the limited TSMC capacity). If not now then when? They have the node advantage, they have the marketing and cash flow from Zen, the development support from both Microsoft and Sony... They could do it if they wanted to... building a couple of killer cards outclassing the 3080 by a healthy margin and pricing them competitively... the question is.. do they want to.... The halo products sell the cheaper products.. it's just how it is.. If they care about Radeon as a brand they'll go for it, for the long term... We will see.. But I'm not overhyping this in any way... Yes I'm waiting for the release and hoping they get back on track, but I have a feeling I might end up with a 20GB 3080 barn heater because of a lack of alternatives...
The thing is, the 5700XT wasn't a flop. It did exactly what AMD wanted it to do: compete in the mid range with Nvidia, and it did compete very well with the RTX 2070, forcing Nvidia to release the Super lineup (which was probably coming anyway, seeing as Turing had a lackluster launch). The 5700XT also set out to prove that AMD could make a competitive gaming architecture again, and it proved that as well. The only thing that held back the 5000 series was its pretty bad drivers; they could have done with another 6-12 months in the oven, but AMD needed those cards out.
Embra:

They are doing a good job keeping the important info secret. I think even Nvidia is not sure. I really think they will be near the 3080 level.. maybe sneak in a limited-number halo product?
Yeah, it's amazing how AMD is keeping all this secret. It's all rumors at this point; it has never been like this. No "Poor Volta" crap this time. I hope we are in for a surprise.
Cidious:

I'm so afraid we get another 5700XT flop
The card itself was far from a flop... it was very close to the 2070S while costing some 30% less. That's great value. The drivers are (were?) the main issue, although I don't know the real number of users that actually had problems with it. We need to remember that the vast majority of customers live in the £150-£300 segment. This is a forum of enthusiasts where high-end cards are hugely overrepresented. In the latest Steam survey, the 1060 alone has 11.18% market share, while the 2080 and 2080 Ti barely have 1% each. So the fact that AMD can't catch up with the high end, IMHO, isn't that crucial from a financial standpoint. It sucks for those who purchase high end (less competition), but that doesn't mean the 5700XT was a flop. Quite a few reviewers (including Gamers Nexus) considered it the best overall card of 2019.
The math doesn't work unless Big Navi has either a 512-bit or 256-bit memory bus.
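For context, the capacity arithmetic behind that comment: each GDDR6 chip sits on its own 32-bit channel, so total VRAM is (bus width / 32) × per-chip capacity. A minimal sketch, assuming the common 1 GB and 2 GB GDDR6 densities (the specific bus/chip combinations are illustrative, not confirmed Big Navi specs):

```python
def vram_gb(bus_width_bits: int, chip_gb: int) -> int:
    # Each GDDR6 chip occupies a 32-bit channel,
    # so chip count = bus width / 32.
    chips = bus_width_bits // 32
    return chips * chip_gb

print(vram_gb(256, 2))  # 8 chips x 2 GB -> 16
print(vram_gb(512, 1))  # 16 chips x 1 GB -> 16
print(vram_gb(192, 2))  # 6 chips x 2 GB -> 12
print(vram_gb(384, 1))  # 12 chips x 1 GB -> 12
```

By the same arithmetic, the rumored 12 GB for NAVI22 would also fit a 192-bit or 384-bit bus, which is why the reported capacities constrain the plausible bus widths.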
Cidious:

If not now then when? They have the node advantage, they have the marketing and cash flow from Zen. The development support from both Microsoft and Sony... They could do it if they wanted to... building a couple of killer cards outclassing the 3080 by a healthy margin and pricing it competitively... the question is.. do they want to.... The halo products sell the cheaper products.. it's just how it is.. If they care about Radeon as a brand they go for it. For the long term... We will see.. But I'm not overhyping this in any way...
Yes, they have the cash flow, now. They didn't when they were first developing this. Major architectural changes like RDNA don't happen overnight. AMD still had debt to pay off, and through sheer luck, Intel kept screwing up their 10nm process, making AMD's CPUs more successful than they otherwise would have been. They spent all their resources on Ryzen because if that fell, so would the company as a whole. Meanwhile, companies like MS and Sony were diverting RTG's already finite attention away from polishing these GPUs. Despite all of that, the 5700XT proved to be very promising for performance. Nowadays, it can outperform a 2080 in some benchmarks. The problem is, the Windows drivers were atrocious at launch. So, considering the sub-par conditions for developing this architecture, I'd say the results seem rather promising so far. I'm not expecting a "3080 killer" (I'm not confident "big Navi" is going to win against it most of the time), but I do expect something very competitive, particularly in terms of wattage and price.
Davud:

My thoughts exactly. 3080 basically has twice the performance of 5700 XT with 3.4 times the shader cores. Same number of shader cores doesn't make any sense to me. I doubt that this can even compete with 3060/3070. How are they gonna improve performance on the same number of cores? By clocking it at 3000 MHz?
Navy Flounder doesn't appear to be the flagship, so what's the problem?
Davud:

My thoughts exactly. The 3080 has basically twice the performance of the 5700 XT with 3.4 times the shader cores. The same number of shader cores doesn't make any sense to me. I doubt this can even compete with the 3060/3070. How are they going to improve performance with the same number of cores? By clocking it at 3000 MHz?
Whoever wrote the original article completely forgot that RDNA2 doesn't use stream processors. Also, the article says "IF", forgetting we know how RDNA2 is designed. RDNA 1: https://www.techpowerup.com/img/ALuaZuaKh9IqAKDd.jpg RDNA 2: https://cdn.videocardz.com/1/2020/08/Xbox-Series-X-Slide13.jpg As seen above, completely different architecture. The same applies to RDNA3, which is a completely new thing as well. Hence why counting TFLOPs is stupid to begin with when things are radically different. Given the information we already have: we know the 52CU @ 1.83GHz (Xbox) is faster than the 2080Ti at 4K native rendering with RT on (no DLSS). This is found in the MS DX12U video (below). We also know the 36CU @ 2.33GHz (PS5) is not that far behind the Xbox Series X, 13% tops if that. The 40CU consumer card is smaller than the 52CU found in the Xbox but bigger than the 36CU found in the PS5 by 11%. So if that one runs at 2.4GHz (7NP can do even higher according to Sony), which is not that much faster than the PS5, it should be equal to if not faster than the Xbox Series X GPU, which we know is faster at native 4K shading+RT (no DLSS) than the 2080Ti, at least in the Microsoft DX12U video sample. (I hope I picked the correct one, there are 2.) [youtube=0sJ_g-aWriQ]
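The back-of-the-envelope comparison in that post can be made explicit. For RDNA-style GPUs, peak FP32 throughput is roughly CUs × 64 ALUs per CU × 2 ops per clock (FMA) × clock. A quick sketch using the CU counts and clocks the post cites (note this is peak-rate arithmetic only; as the post itself argues, per-CU changes in RDNA2 mean TFLOPs don't translate directly into game performance, and the 40CU-at-2.4GHz card is hypothetical):

```python
def peak_tflops(cus: int, clock_ghz: float, alus_per_cu: int = 64) -> float:
    # FP32 FMA counts as 2 ops per ALU per clock; result in TFLOPs.
    return cus * alus_per_cu * 2 * clock_ghz / 1000.0

xbox = peak_tflops(52, 1.83)  # Xbox Series X figure quoted above, ~12.2
ps5  = peak_tflops(36, 2.33)  # PS5 figure as quoted in the post, ~10.7
navi = peak_tflops(40, 2.40)  # hypothetical 40CU card at 2.4 GHz, ~12.3
print(round(xbox, 2), round(ps5, 2), round(navi, 2))
```

On paper, a 40CU part at 2.4 GHz lands at roughly the same peak throughput as the 52CU Xbox GPU at its lower clock, which is the equivalence the post is gesturing at.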
Fediuld:

Whoever wrote the original article completely forgot that RDNA2 doesn't use stream processors. Also, the article says "IF", forgetting we know how RDNA2 is designed. As seen above, completely different architecture. The same applies to RDNA3, which is a completely new thing as well. Hence why counting TFLOPs is stupid to begin with when things are radically different. Given the information we already have: we know the 52CU @ 1.83GHz (Xbox) is faster than the 2080Ti at 4K native rendering with RT on (no DLSS). This is found in the MS DX12U video (below). We also know the 36CU @ 2.33GHz (PS5) is not that far behind the Xbox Series X, 13% tops if that. The 40CU consumer card is smaller than the 52CU found in the Xbox but bigger than the 36CU found in the PS5 by 11%. So if that one runs at 2.4GHz (7NP can do even higher according to Sony), which is not that much faster than the PS5, it should be equal to if not faster than the Xbox Series X GPU, which we know is faster at native 4K shading+RT (no DLSS) than the 2080Ti, at least in the Microsoft DX12U video sample. (I hope I picked the correct one, there are 2.) [youtube=0sJ_g-aWriQ]
"Stream Processor" is just a marketing term for ALUs. That being said, RDNA2's ALUs are clearly different; dunno whether they'll keep using SPs or change the name. They've changed ALUs before (TeraScale -> GCN) and kept the name, so who knows. Where in that video does it show the Xbox Series X beating a 2080Ti? I think it will probably come close, but I don't see where that video shows it. It shows an example of mesh shader culling performance between the two, but that's not overall performance, and it's two different scenes; the Nvidia one looks way more complex, so I'd expect it to be slower. I just skimmed through the video though, could be missing something.
Denial:

"Stream Processor" is just a marketing term for ALUs. That being said, RDNA2's ALUs are clearly different; dunno whether they'll keep using SPs or change the name. They've changed ALUs before (TeraScale -> GCN) and kept the name, so who knows. Where in that video does it show the Xbox Series X beating a 2080Ti? I think it will probably come close, but I don't see where that video shows it. It shows an example of mesh shader culling performance between the two, but that's not overall performance, and it's two different scenes; the Nvidia one looks way more complex, so I'd expect it to be slower. I just skimmed through the video though, could be missing something.
Basically, in that Xbox presentation, they end up focusing on 2 main ways to improve performance over Nvidia's PC GPUs. It's not a one-size-fits-all solution, but good enough to optimise XSX performance for the money by attempting to remove some bottlenecks.
Cidious:

I'm on my knees praying to the FPS gods to knock some sense into the Radeon division.. they seem to have gotten it together on the Ryzen front.. Why is it so hard to make decent decisions for the Radeon group also? I mean there is budget now... I'm so afraid we get another 5700XT flop, them saying: "well we are catering to 95% of the market now....." I mean... it better be near or better than 3080 performance or it will really show they don't care about Radeon anymore and just want to sell that Ryzen silicon instead (which financially would make sense with the limited TSMC capacity).
Wanting to, and cash flow, isn't really enough on its own to ensure they can match the best GPU producer on the planet, which has had a good head start for years.. I hope they will be able to catch up this time, but I would not be disappointed in their efforts if they don't make it all the way.. if they just catch up I'll be happy, and if they can match, great! And if they can surpass.. well, just magic.. The 3080 alone is a reason to be a happy computer gamer at the end of 2020.. if AMD can also follow suit, the gamer will be the big winner of 2020 🙂 (and this year needs some good news)
Denial:

"Stream Processor" is just a marketing term for ALUs. That being said, RDNA2's ALUs are clearly different; dunno whether they'll keep using SPs or change the name. They've changed ALUs before (TeraScale -> GCN) and kept the name, so who knows. Where in that video does it show the Xbox Series X beating a 2080Ti? I think it will probably come close, but I don't see where that video shows it. It shows an example of mesh shader culling performance between the two, but that's not overall performance, and it's two different scenes; the Nvidia one looks way more complex, so I'd expect it to be slower. I just skimmed through the video though, could be missing something.
Where it shows the 2080Ti's metrics, pointing out that the card was running at 1440p while the Xbox metrics were at native 4K. The actual point is around the 27th minute, but you need to watch the whole thing to follow what he is talking about. There is also another video about a similar thing, but for RT.
ITGuru:

F! Nvidia, F! AMD. Looks like I will be SLI'ing my existing 2080 Ti for a 40-60% gain in fps from the dheads that decided to prostitute their 2080 Tis to get the 3080s. Already seen them selling as low as $700 AUD (roughly 500 USD).
You'll get nothing but micro-stutter out of SLI in the tiny handful of games it actually works with. I've had dual GPUs going back 12+ years, and Pascal is the last time I will ever do it. The few randos who think SLI still works are just not observant enough to know what microstutter is.