AMD announces EPYC CPUs based on 3D V-Cache and teases Zen 4 as well as 128-core Bergamo


https://forums.guru3d.com/data/avatars/m/248/248994.jpg
It's probably due to the manufacturing capacity shortage that desktop Zen 3 models with 3D cache haven't been released yet. The best time would have been October, or even September, to undermine Alder Lake's release.
data/avatar/default/avatar11.webp
If AMD dropped the V-Cache bomb right now, they would have killed Intel outright and shot their stock over 200 USD per share!
https://forums.guru3d.com/data/avatars/m/258/258589.jpg
Congrats Hilbert, your article got in the TechLinked video. Right at the beginning. [youtube=aKxelKMR3SI] 🙂
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Kaarme:

It's probably due to the manufacturing capacity shortage that desktop Zen 3 models with 3D cache haven't been released yet. The best time would have been October, or even September, to undermine Alder Lake's release.
Assuming the 3D cache is enough to give them a consistent edge over Intel, I think they should have released the desktop CPUs first. Sure, it'd have been less profitable up-front since the desktop market has relatively small margins, but brand impressions would have been a lot stronger, which helps in the long run.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
I don't disagree with anyone (yet, LOL) on topic, but IMHO AMD is doing exactly the right thing. Sure, I would've snapped up a V-Cached 5900/5950, but the simple fact is the PC market cannot afford what enterprise can. The reason these are farmed out to enterprise first is the profit margin, which is far beyond what the retail PC world can or would support. And let's face it: while a lot of CPUs end up in (light) enterprise, the scale of (cloud and heavy) enterprise enables greater performance downstream as well as underwriting development costs. We have all seen (esp. on streaming platforms) vast performance increases from using EPYC. I can now go and visit my mother in outer wine country and stream 4K without the irritation of endless buffering... and what buffering remains is from the boondocks ISP. And as many ISPs are using EPYC as well, that is also being improved on a regular basis. And whatever you think of game streaming, it wouldn't exist at all without EPYC.
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
tunejunky:

I don't disagree with anyone (yet, LOL) on topic, but IMHO AMD is doing exactly the right thing. Sure, I would've snapped up a V-Cached 5900/5950, but the simple fact is the PC market cannot afford what enterprise can. The reason these are farmed out to enterprise first is the profit margin, which is far beyond what the retail PC world can or would support. And let's face it: while a lot of CPUs end up in (light) enterprise, the scale of (cloud and heavy) enterprise enables greater performance downstream as well as underwriting development costs. We have all seen (esp. on streaming platforms) vast performance increases from using EPYC. I can now go and visit my mother in outer wine country and stream 4K without the irritation of endless buffering... and what buffering remains is from the boondocks ISP. And as many ISPs are using EPYC as well, that is also being improved on a regular basis. And whatever you think of game streaming, it wouldn't exist at all without EPYC.
Yeah. But if there had been a lot of fab capacity to buy, AMD could have done both, and I'm quite sure they would have if they hadn't needed to make a choice. Private consumers aren't going to buy EPYCs (even if ultimately they are served by those running EPYCs for digital services), so the only choice is to sell people mobile, desktop, and HEDT CPUs.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
Yeah, but... AMD is not Intel and doesn't have the resources (including fab prioritization) that Intel has, so AMD has to go to the deepest pockets first.
https://forums.guru3d.com/data/avatars/m/277/277212.jpg
These new EPYC processors sound great for what I'm doing. The 24-core EPYC processor, the 74F3, is the fastest we've seen on my application, beating out 32-core Threadrippers by a significant margin. Another step forward in performance will be welcome. Unfortunately, these things have become unobtainium. We were supposed to get three new systems in September, but those were delayed and now they can't even estimate a delivery date. They have A100 GPUs in them, and those are actually available.
data/avatar/default/avatar18.webp
Valken:

If AMD dropped the V-Cache bomb right now, they would have killed Intel outright and shot their stock over 200 USD per share!
3D cache has a narrow use case: it helps in some games, but in most it doesn't, and it won't help with single core. ADL will rule until Zen 4 comes out, and by that time Intel 13th gen will be out.
https://forums.guru3d.com/data/avatars/m/266/266726.jpg
MegaFalloutFan:

3D cache has a narrow use case: it helps in some games, but in most it doesn't, and it won't help with single core. ADL will rule until Zen 4 comes out, and by that time Intel 13th gen will be out.
Zen 3 does get a sizable boost to IPC going from 16 MB to 32 MB of L3 (APU vs. chiplet), so it will improve single-threaded performance to some degree even in the worst case. Cinebench isn't exactly latency or memory sensitive, and that increase still results in a 4% gain in IPC. I think +5% overall is a reasonable expectation for adding another 64 MB to the onboard 32 MB, since performance probably won't increase linearly. https://www.guru3d.com/index.php?ct=articles&action=file&id=73832
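Just to show the back-of-envelope math behind that +5% guess (assuming, and this is purely an assumption, that the per-doubling IPC gain stays roughly constant):

```python
import math

ipc_gain_per_doubling = 0.04     # observed: 16 MB -> 32 MB L3 gave ~4% IPC
doublings = math.log2(96 / 32)   # 32 MB + 64 MB stacked = 96 MB, ~1.58 doublings
naive = (1 + ipc_gain_per_doubling) ** doublings - 1
print(f"naive extrapolation: +{naive * 100:.1f}%")   # ~ +6.4%
```

With diminishing returns on top of that, landing around +5% overall seems plausible.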
https://forums.guru3d.com/data/avatars/m/277/277212.jpg
MegaFalloutFan:

3D cache has a narrow use case: it helps in some games, but in most it doesn't, and it won't help with single core. ADL will rule until Zen 4 comes out, and by that time Intel 13th gen will be out.
I disagree. It depends entirely on the program used and the nature and scope of its memory accesses. If it is a game, it will likely be GPU bound, so cache will not help very much, as long as a higher-than-HD resolution is used. For one like MS Flight Simulator, I could imagine cache making a significant difference, since there is so much data managed in that game. For other non-game apps, like rendering, cache can make a huge difference, especially with larger scene models. Cache is why my program sees so much higher performance with EPYCs: they have enormous caches. FWIW, my program would be classified as industrial HPC.
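If anyone wants to see that working-set effect for themselves, here's a rough pointer-chasing sketch in Python. The size tiers are my guesses at roughly L2-, L3-, and DRAM-sized sets, and CPython overhead means the absolute numbers are meaningless; only the relative jump between tiers matters:

```python
import random
import time

def chase_ns(size_kb, iters=100_000):
    """Average time per dependent load over a working set of ~size_kb KiB."""
    n = max(2, size_kb * 1024 // 8)   # treat each list slot as one 8-byte "line"
    order = list(range(n))
    random.shuffle(order)             # random traversal order defeats prefetchers
    nxt = [0] * n
    for i in range(n - 1):            # link the slots into one big random cycle
        nxt[order[i]] = order[i + 1]
    nxt[order[-1]] = order[0]
    idx = 0
    start = time.perf_counter()
    for _ in range(iters):
        idx = nxt[idx]                # each load depends on the previous one
    return (time.perf_counter() - start) / iters * 1e9

for kb in (128, 4096, 32768):
    print(f"{kb:>6} KiB working set: {chase_ns(kb):6.1f} ns/step")
```

Once the set spills out of L3, the ns/step climbs, which is exactly the cliff a bigger cache pushes further out.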
data/avatar/default/avatar10.webp
MegaFalloutFan:

3D cache has a narrow use case: it helps in some games, but in most it doesn't, and it won't help with single core. ADL will rule until Zen 4 comes out, and by that time Intel 13th gen will be out.
Anything CPU-limited is going to lick up and eat that V-Cache like a juicy peach! Emulators, strategy games, FPU-intensive stuff, Civ, ARMA (my no. 1 use case), retro games like source ports, etc... I suspect RTRT games will also use it, as there are code paths that offload RT setup to the CPU. You can see CPU load increase when RT is enabled vs. off.
https://forums.guru3d.com/data/avatars/m/287/287887.jpg
MegaFalloutFan:

3D cache has a narrow use case: it helps in some games, but in most it doesn't, and it won't help with single core. ADL will rule until Zen 4 comes out, and by that time Intel 13th gen will be out.
It's hard to understand the mindset of a company bootlicker. And when did you get a crystal ball to see into the future?
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
Valken:

Anything CPU-limited is going to lick up and eat that V-Cache like a juicy peach! Emulators, strategy games, FPU-intensive stuff, Civ, ARMA (my no. 1 use case), retro games like source ports, etc... I suspect RTRT games will also use it, as there are code paths that offload RT setup to the CPU. You can see CPU load increase when RT is enabled vs. off.
ab-so-frickin-lutely
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Valken:

If AMD dropped the V-Cache bomb right now, they would have killed Intel outright and shot their stock over 200 USD per share!
They did, you just didn't notice. They already have V-Cache CPUs in Microsoft's Azure data centers, as in you can get one in the Azure cloud today. Then Facebook (or Meta or whatever) said they are going to use AMD. This was the power move vs. selling these chips to consumers first. These data center chips have much higher margins, and the volumes will be very high.
https://forums.guru3d.com/data/avatars/m/270/270008.jpg
Valken:

Anything CPU-limited is going to lick up and eat that V-Cache like a juicy peach! Emulators, strategy games, FPU-intensive stuff, Civ, ARMA (my no. 1 use case), retro games like source ports, etc... I suspect RTRT games will also use it, as there are code paths that offload RT setup to the CPU. You can see CPU load increase when RT is enabled vs. off.
Some fringe cases will see stupid gains, like 50% improvements, if the main working set fits in the L3 cache.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
JamesSneed:

They did, you just didn't notice. They already have V-Cache CPUs in Microsoft's Azure data centers, as in you can get one in the Azure cloud today. Then Facebook (or Meta or whatever) said they are going to use AMD. This was the power move vs. selling these chips to consumers first. These data center chips have much higher margins, and the volumes will be very high.
Not only that, but they (the clients) have had input on the design parameters. M$, Google, Meta/FB, Netflix, and Disney have all had engineers working with AMD since before tape-out. This is truly custom silicon for the widest (cloud/enterprise) market. Yeah, Intel "consults" clients, but doesn't offer the result as anything but custom "bespoke" silicon for a higher price.
https://forums.guru3d.com/data/avatars/m/271/271560.jpg
FWIW, these new EPYC procs are powering all new IBM Cloud bare-metal servers.
data/avatar/default/avatar35.webp
Valken:

Anything CPU limited is going to lick up and eat that vcache like a juicy peach! Emulators, strategy games, FPU intensive, Civ, ARMA (my no 1 use case), retro games like source ports, and etc... I suspect RTRT games will also use it as there are codepaths that offset RT setup to the CPU. You can see CPU load increase when RT is enabled vs OFF.
It's not the same thing, but it's close enough: go check Intel Broadwell 5775C benchmarks vs. other CPUs. It had 128 MB of cache used for the iGPU, BUT when you disabled the iGPU, all that cache was used as L4 CPU cache. Yes, it was winning in some fringe cases by a lot, but in general it wasn't mind-blowing.
https://forums.guru3d.com/data/avatars/m/266/266726.jpg
MegaFalloutFan:

It's not the same thing, but it's close enough: go check Intel Broadwell 5775C benchmarks vs. other CPUs. It had 128 MB of cache used for the iGPU, BUT when you disabled the iGPU, all that cache was used as L4 CPU cache. Yes, it was winning in some fringe cases by a lot, but in general it wasn't mind-blowing.
It's important to keep in mind that how the cache affects performance depends on the architecture itself. Zen 3 is designed with the L3 as a victim cache, and presumably with 3D cache in mind. The L4 cache on the Broadwell chips is also a victim cache, but it isn't quite the same: it's not that fast, only about 50 GB/s when used by the CPU, and latency isn't that great either. A lot faster than DDR3, but still much, much slower than the L3 cache. [SPOILER]https://www.guru3d.com/index.php?ct=articles&action=file&id=17195 [/SPOILER]