Tensor Core Equivalent Likely to Get Embedded in AMD RDNA3

So AMD GPUs will be like Nvidia GPUs then. Cool.
AuerX:

So AMD GPUs will be like Nvidia GPUs then. Cool.
AMD was always good at gaming. I would dare to say the best budget card in history would be the HD 5770... that card sold like hotcakes with extra syrup!
pegasus1:

and it's all Greek to me
Have you ever wondered what the equivalent phrase is for us Greeks? No? I'll tell you anyway: "It's all Chinese to me", hehe!
NiColaoS:

Have you ever wondered what the equivalent phrase is for us Greeks? No? I'll tell you anyway: "It's all Chinese to me", hehe!
Ha ha ha, I'll remember that; I'll be in Cyprus tomorrow.
TimmyP:

Shoulda been here 2 gens ago. Zero excuses.
So Vega 56 and 64? You are aware those came out a bit before the RTX 2xxx series, right?
Airbud:

I would dare to say the best budget card in history would be the HD 5770... that card sold like hotcakes with extra syrup!
I had two of them... until I learned that CrossFire doesn't combine memory pools 😀
So they've killed off FSR 2.0 already.
Airbud:

I would dare to say the best budget card in history would be the HD 5770... that card sold like hotcakes with extra syrup!
Hmm, the 5770 was in an interesting space: a feature level 11 card at a point in time when Nvidia had none, so it had basically no competition feature-wise. When competition did arrive, it sat between the GTS 450 and GTX 460, the former being 10 dollars cheaper but offering only 84% of the performance of the 5770 (where not using tessellation), the latter a fair bit faster but coming at additional cost since it had twice as many memory modules.
Astyanax:

it sat between the GTS 450 and GTX 460.
I skipped the 4xx series; I went from an 8800 GTX to a 280 GTX to the 580 GTX.
pegasus1:

I skipped the 4xx series; I went from an 8800 GTX to a 280 GTX to the 580 GTX.
Technically you didn't skip the 4xx series, since the 500 series was just a tweak of it.
pharma:

Into the GPU Chiplet Era: An Interview With AMD's Sam Naffziger (June 24, 2022). So it looks like DP4a might be enough for AMD's consumer graphics, albeit slower than using tensor or matrix cores.
I am really not a fan of this. Ultimately, this approach means AMD cards would offer much less value to the customer compared to Nvidia. With Nvidia, you basically get the same architecture that supercomputers use, in your home and at a similar price point, and you can use it for all kinds of purposes: training ML models, DLSS, really handy AI features in content creation software, accelerating Blender, noise cancellation, AI effects in video conferencing, and more. You're not getting any of that with an AMD GPU, because it lacks ML acceleration; it isn't a supercomputer architecture but one dumbed down for gaming. So with AMD you basically get a GPU that is good at gaming but nothing else, while with Nvidia you get a GPU that you can do anything with, and at high performance. Until AMD steps up its game by adding proper ML acceleration and also becomes competitive in software, I will always choose Nvidia, because I get much more value for my money there.
Astyanax:

Technically you didn't skip the 4xx series, since the 500 series was just a tweak of it.
You know what I mean; the 580 was a serious leap from the 480. OK, maybe not the same leap that the 8800 GTX was over the 7800 GTX, but a leap nonetheless.
pegasus1:

You know what I mean; the 580 was a serious leap from the 480. OK, maybe not the same leap that the 8800 GTX was over the 7800 GTX, but a leap nonetheless.
It was on the same arch and it had the same 1.5 GB of VRAM. Nvidia didn't actually have a leap in performance until Kepler, and AMD had the legendary Tahiti. 🙂
Undying:

It was on the same arch and it had the same 1.5 GB of VRAM. Nvidia didn't actually have a leap in performance until Kepler, and AMD had the legendary Tahiti. 🙂
We can argue the differences between GF110 and GF100 if we want; I think it was the texture samplers ported from GF104 and full-rate FP16?
Undying:

It was on the same arch and it had the same 1.5 GB of VRAM. Nvidia didn't actually have a leap in performance until Kepler, and AMD had the legendary Tahiti. 🙂
Many of you guys are very technically knowledgeable; I'm not. I just judge things by heat, performance, OCability, etc. I don't get into the weeds on the technical side, to be honest. I don't even know what names any particular CPU or GPU core has, but I've built dozens of custom water-cooled rigs, fabricated panels, modded cases, etc. I was an aircraft engineer before I decided joining the Army was way sexier.
pharma:

So it looks like DP4a might be enough for AMD's consumer graphics, albeit slower than using tensor or matrix cores.
The question is whether this would work out with the computational complexity of, e.g., DLSS. AFAIR it puts quite some load on Turing's/Ampere's tensor cores.
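For context on what DP4a actually does, here is a minimal C sketch (the dp4a() helper below is only illustrative, not any vendor's real intrinsic): the instruction takes four signed 8-bit values packed into each of two 32-bit registers, multiplies them pairwise, and adds the sum to a 32-bit accumulator. Upscalers written for hardware without matrix cores build their int8 inference kernels out of this primitive, one four-element dot product per operation, whereas a tensor/matrix core retires a whole small matrix multiply-accumulate per instruction; that throughput gap is what "albeit slower than using tensor or matrix cores" refers to.

#include <stdint.h>
#include <stdio.h>

/* Reference model of a DP4a-style operation: dot product of the four signed
   8-bit values packed into a and b, accumulated into the 32-bit value c.
   GPUs that support it expose this as a single instruction per lane; this
   scalar version only illustrates the semantics. */
static int32_t dp4a(uint32_t a, uint32_t b, int32_t c)
{
    for (int i = 0; i < 4; i++) {
        int8_t ai = (int8_t)(a >> (8 * i));
        int8_t bi = (int8_t)(b >> (8 * i));
        c += (int32_t)ai * (int32_t)bi;
    }
    return c;
}

int main(void)
{
    /* dot({1,2,3,4}, {5,6,7,8}) + 10 = 5 + 12 + 21 + 32 + 10 = 80 */
    uint32_t a = 0x04030201u;  /* bytes 1,2,3,4 (least significant byte first) */
    uint32_t b = 0x08070605u;  /* bytes 5,6,7,8 */
    printf("%d\n", dp4a(a, b, 10));
    return 0;
}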
dampflokfreund:

With Nvidia, you basically get the same architecture that supercomputers use, in your home and at a similar price point, and you can use it for all kinds of purposes: training ML models, DLSS, really handy AI features in content creation software, accelerating Blender, noise cancellation, AI effects in video conferencing, and more.
AMD did use that approach, up until RDNA (arguably RDNA 2). GCN had excellent compute performance and did better than Nvidia's (gaming) offerings in many specialized tasks, because the cards were essentially the same as their pro lines. Unfortunately, that also meant they weren't as ideally suited for actual gaming as they could be, considering heat/power and die size. So we got RDNA and CDNA. I, too, like the idea of 'fully enabled' gaming graphics cards, in theory. In practice, I'll gladly sacrifice the features I don't have much use for (I primarily watch YouTube and argue with strangers on the web with my card) to get more performance in my primary usage scenario. And here we are. Once we've established which new features are actually usable for gaming in the long term, those will trickle down regardless. A good bet is to keep a lookout for what gets introduced on the consoles.
Once again Nvidia paves the way and AMD follows. It's all good for us consumers, as in the long run we will have parity.
Stormyandcold:

Once again Nvidia paves the way and AMD follows. It's all good for us consumers, as in the long run we will have parity.
Let's wait until the new cards are out before we decide whether NV paved or led the way.
pegasus1:

Let's wait until the new cards are out before we decide whether NV paved or led the way.
They definitely paved the way; otherwise, why would AMD be adding tensor math cores? This part is already decided. Actual performance will decide who led.