Up to 96 cores and 12 DDR5 memory channels with AMD Zen4-based server processors.

I like the idea, 1 PS CPU.
AMD is reaching a point with these sockets where budget CPUs won't really make sense. I'm sure a 6000+ pin socket is expensive to manufacture, especially when you consider all the gold plating. Imagine having something like an 8-core CPU in one of those boards - seems like it'd be a real waste. The thing is, unless AMD releases AM5 Epycs or continues with LGA4094, there isn't really going to be an option for people who want a low-end server. I would argue servers with 8 cores would still be practical, if what you're looking for is the oodles of PCIe lanes. You don't need 6000 pins for that.
schmidtbag:

AMD is reaching a point with these sockets where budget CPUs won't really make sense. [...]
You still need pins for 128 lanes of PCIe 5.0, and now we have 50% more DRAM channels with DDR5 (more pins on top of more slots). All of that is wired to a big I/O die, which stays the same no matter your core configuration.
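A rough pin-budget sketch shows why a socket this size is plausible; the per-lane and per-channel pin counts below are loose assumptions for illustration, not AMD's actual ballout:

```python
# Back-of-envelope pin budget for a Genoa-class socket.
# All per-interface counts are rough assumptions.

PCIE_LANES = 128
PINS_PER_LANE = 4            # two differential pairs per lane (TX and RX)
DDR5_CHANNELS = 12
PINS_PER_CHANNEL = 120       # data, command/address, clocks, control (approx.)

pcie_pins = PCIE_LANES * PINS_PER_LANE          # 512
dram_pins = DDR5_CHANNELS * PINS_PER_CHANNEL    # 1440
signal_pins = pcie_pins + dram_pins

# Power/ground and miscellaneous I/O typically dwarf the signal pins on a
# big server socket; the 3x multiplier is a crude stand-in for that.
total_estimate = signal_pins * 3                # 5856, in the ~6000-pin ballpark

print(pcie_pins, dram_pins, total_estimate)
```

Even with generous error bars, the I/O alone pushes the pin count into the thousands regardless of how many cores sit on the package.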
schmidtbag:

AMD is reaching a point with these sockets where budget CPUs won't really make sense. [...]
There are a ton of options for low-end servers, and it all depends on what you are using them for. This chip looks better suited for supercomputing in a compact environment. My guess is there will be older Epyc chips that get sold off or modified to fill the gap between this and, let's say, a Threadripper. Virtualization goes a long way toward consolidating dissimilar processors into one environment if you want a cheap and practical server setup.
Ssateneth:

You still need pins for 128 lanes of PCIE 4.0, and now we have 50% more DRAM channels on DDR5 (more pins on top of more slots). That is all matched to a big I/O die that does all that, which is the same no matter your core configurations at that point.
Ah right, I forgot about the extra memory channels; that would make a difference. Though even then... nobody needs 12 channels with 8 cores, or even 16 cores. It's likely we won't see anything below 16 cores for this socket. But that leads back to my original point: AMD won't really have a middle ground by the time this is released. You're either getting a massive socket meant for huge core counts, a Threadripper (which isn't ideal for many server workloads), or a small socket with just barely enough features for a gaming PC.
warezme:

There are a ton of options for low end servers and it all depends on what you are using it for. [...]
Currently there are low-end server options, but my point is that these motherboards will be built to handle giant CPUs (it's not just the socket, but all the power delivery too), which is going to drive up the cost of entry-level systems considerably. So unless some motherboards are manufactured with fewer pins and weaker power delivery (which adds complication for the consumer), there's a rather large gap in sensible products. The way I see it, assuming AMD doesn't release a Threadripper based on this socket, they should have TRs and low-end Epycs share the same smaller socket, where you pick the CPU that best fits your needs. Once you go below 32 cores, there's no point in dual-socket options, 128 PCIe lanes, or 12 memory channels, so for people who don't need that much, motherboards could be made much cheaper.
schmidtbag:

AMD is reaching a point with these sockets where budget CPUs won't really make sense. [...]
the more (cores) you buy, the more you save. if 8 cores is impractical, get 24. same as the 1900X/1920X: they used to cost a lot and are worth literally peanuts now. either get a proper HEDT/server setup or just stick to mainstream.
cucaulay malkin:

the more (cores) you buy, the more you save. if 8 cores is impractical, get 24. [...]
thank you for that. i'm a former HEDT/Threadripper guy now loving the 5950X. yes, i cut down on PCIe lanes by going to the 5950X, but i didn't need them anymore with the improvement of GPUs; i don't need (or want) multiple GPUs, and i'm using all of my lanes with an M.2 card plus the on-board M.2 slots. Threadripper is an obvious solution, superior to Intel, for cheap servers/workstations, and really that's over 80% of their market today. the use case for 8 cores with more than 24 lanes is marginal at best and easily addressed by letting Intel and the cheap Xeons with old technology have it; then you can pay more for those 8 cores to have the extra lanes.
tunejunky:

thank you for that. i'm a former hedt/threadripper guy loving 5950x [...]
it's true. look at the 1920X and 1950X launch prices: $800 and $1000, a 20% price difference. a new 1920X is 200 USD here, about the same price I sold my 10500 for this year, while a used 1950X still costs over 400. that's double the value. the 1800X launched at 500 dollars, and a used one will cost as much as a new 1920X now. what I mean is buying HEDT only pays off when you get the real deal. and AM4 is incredibly powerful for a mainstream socket; it's so powerful it made a mid-range X399 HEDT SKU like the 1920X look like sh** compared to the 3950X/5950X. 16 cores and PCIe 4.0 on mainstream in 2019 felt almost unreal.
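A quick check of the resale arithmetic above (prices in USD, exactly as quoted in the post):

```python
# Resale value retained by each Threadripper SKU, using the quoted prices.
launch = {"1920X": 800, "1950X": 1000}
used_now = {"1920X": 200, "1950X": 400}

for cpu in launch:
    retained = used_now[cpu] / launch[cpu]
    print(f"{cpu}: ${launch[cpu]} at launch, ~${used_now[cpu]} used "
          f"({retained:.0%} of launch price retained)")

# 1920X retains 25%, 1950X retains 40% of launch price; in absolute terms
# the flagship resells for double, matching the "double the value" claim.
```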
cucaulay malkin:

the more (cores) you buy, the more you save. if 8 cores is impractical, get 24. [...]
I agree, but my point is the only reason you're saving money is that the platform itself is so expensive, and with this new socket, that cost is about to climb even higher. Whether you're on a budget, are primarily focused on GPU calculations, or are conscious about power consumption, it doesn't look like AMD will be offering anything that would be seen as a good value for such applications, relative to their more expensive options anyway. Think of it like getting a Raspberry Pi: "Oh, $35 for a PC sounds great!" But then you have to buy a microSD card. And then a power brick. And then you might want a chassis for it. And then you pay for shipping and tax (where applicable). Suddenly it's not such a good value anymore, when you could have spent twice as much on a system with far more capability.
schmidtbag:

Whether you're on a budget, are primarily focused on GPU calculations, or are conscious about power consumption, it doesn't look like AMD will be offering anything that would be seen as a good value for such applications
of course they will: their high-end mainstream. maybe people still can't understand how much they shook things up with AM4 and the 3950X; it's a workstation beast on a mainstream platform. full AM5 Zen 4 will make the 3960X look silly. what I like about their approach is how they made trickle-down work for regular consumers: it's basically all Epyc design that later gets into Threadripper and Ryzen 9, same as nvidia did from Volta onwards. smart. intel is years behind in their approach.
cucaulay malkin:

of course they will: their high-end mainstream.
If it's just you or a small studio, sure. But that's not an option for enterprise or small businesses, hence my point. Threadripper is basically high-end mainstream; there's a reason why CPUs like the 3990X don't encroach on Epyc sales.
what I like about their approach is how they made trickle down work for regular consumers. it's basically all epyc design that later gets into threadripper and ryzen 9. same as nvidia did with volta onwards. smart. intel is years behind in their approach.
I agree, though if TR gets this 6000 pin socket, the Epyc price increase is about to trickle down too.
schmidtbag:

I agree, though if TR gets this 6000 pin socket, the Epyc price increase is about to trickle down too.
it did, with TRX40.
schmidtbag:

If it's just you or a small studio, sure. But that's not an option for enterprise or small businesses, hence my point. Threadripper is basically high-end mainstream; there's a reason why CPUs like the 3990X don't encroach on Epyc sales. I agree, though if TR gets this 6000 pin socket, the Epyc price increase is about to trickle down too.
nope. threadripper is the de facto option for small enterprise, especially in the creative or technical fields. i do not know of any production company in film, video, or TV that does not use Threadripper. even film studios use Threadripper for dailies, saving their Epyc servers for distribution, cloud, and streaming services. i don't know of any ad agency that doesn't use TR. and don't even get me started on customers (mostly consultants) who compile code: you can have so many VMs running it's not funny.
tunejunky:

nope. threadripper is the de facto option for small enterprise, especially in the creative or technical fields. [...]
I'm going to ignore for a moment that your argument is personal anecdotes, because obviously there are exceptions. Are you talking about render farms or workstations? Because yeah, if you're actually editing videos or compiling code, TR is the way to go; that's what it's built for. If you're talking about servers:
A. If there are a lot of servers (like multiple racks), either they're handling a very specific workload that doesn't benefit from any of Epyc's advantages (totally possible), or they're being irresponsible by ignoring why Epyc is different.
B. If it's only one rack or just a couple of towers, that only further emphasizes my point: even now, AMD doesn't really have any sensible entry-level server options.
If you're on a tight budget, I could see people resorting to TR because it's the best bang for the buck. That's not the norm though.
and don't even start on customers (most consultants) who compile code, you can have so many VM's running it's not funny.
That sounds like a workstation workload to me. TRs with over 32 cores are a poor choice for VMs due to the significant RAM limitation; at 32 and under, it's a great choice. EDIT: Also... none of these places use Intel? I know Intel isn't a great value for high-end stuff, but still... none of them?
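The RAM-limitation point can be made concrete with a GB-per-core comparison; the capacity ceilings below are the commonly cited platform maxima for these parts, so treat them as assumptions:

```python
# GB of RAM available per core when each 64-core platform is maxed out.
platforms = {
    "64c Threadripper (TRX40, 256 GB max)": (64, 256),
    "64c Epyc (SP3, 4 TB max)":             (64, 4096),
}
for name, (cores, max_gb) in platforms.items():
    print(f"{name}: {max_gb / cores:.0f} GB per core")

# ~4 GB/core on TR versus ~64 GB/core on Epyc: a VM host with dozens of
# guests exhausts the TR box's memory long before it runs out of threads.
```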
schmidtbag:

nobody needs 12 channels with 8 cores
DDR5 splits each DIMM into two independent sub-channels, so the memory controller effectively needs twice as many channels per DIMM.
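For scale, here is a peak-bandwidth sketch of the channel-count jump; the transfer rates are illustrative rather than specific SKUs, and "channel" means a full 64-bit DIMM channel (two 32-bit DDR5 sub-channels):

```python
def peak_gbs(mt_per_s, bus_bits, channels):
    """Peak bandwidth in GB/s: transfer rate x bus width in bytes x channel count."""
    return mt_per_s * (bus_bits // 8) * channels / 1000

# 8-channel DDR4-3200 (previous Epyc gen) vs 12-channel DDR5-4800 (Zen 4 Epyc).
ddr4 = peak_gbs(3200, 64, channels=8)
ddr5 = peak_gbs(4800, 64, channels=12)
print(f"DDR4 x8:  {ddr4:.1f} GB/s")   # 204.8 GB/s
print(f"DDR5 x12: {ddr5:.1f} GB/s")   # 460.8 GB/s
```

Roughly 2.25x the peak bandwidth, which is why the extra channel pins exist even on low-core-count SKUs.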
schmidtbag:

I'm going to for a moment ignore that your argument is personal anecdotes, because obviously there are exceptions. [...]
not in creative. time equals money, and the money spent upgrading to AMD is far less than the money saved. if you're doing a film/TV show/animation, the time encoding on Intel (at a higher cost of ownership and operation) is days versus hours on AMD, which costs less and is more efficient. and you're talking about enough workstations to make electricity a cost concern and heat an infrastructure issue that costs more too. i'm well familiar with one production company (Skydance) in Santa Monica, and the editing suite is a thing of (TR) wonder... then they have the animators, artists, set designers, etc., all running TR. only the studios have server farms (Epyc) for distribution, including streaming to theaters AND services. Google is much the same, but with a greater number of legacy servers.
tunejunky:

not in creative. time equals money, and the money spent upgrading to AMD is far less than the money saved. [...]
Yes, I understand all that, but you're acting like these systems are upgraded to the latest and greatest whenever possible. Though I can see TR being fast and cost-effective enough to encourage most studios to upgrade around the same time, you have to consider ROI. Some studios might have an older system that hasn't yet paid itself off enough to warrant replacing. So it seems pretty fishy that ALL studios have upgraded to TR.
i'm well familiar with one production company (skydance) in Santa Monica and the editing suite is a thing of (TR) wonder... [...]
What was the point of you "disagreeing" with me then? You basically just told me the workstations use TR (which is what it's supposed to be used for) and Epyc for servers. None of that contradicts anything I've been saying. It's like you're arguing for the sake of arguing.
schmidtbag:

AMD is reaching a point with these sockets where budget CPUs won't really make sense. [...]
The real differentiator is the level of density you require. You don't need the server platform for something to be a server. As a bare minimum you would probably want ECC, which AMD doesn't restrict in their consumer CPUs. Even now, ASRock Rack makes motherboards with server features for AM4 CPUs. You could take a future 16-core AM5 Zen 4 (or 32-core Zen 5) CPU, give it up to 256GB of RAM, and be fine unless you need way more PCIe lanes.
blkspade:

The real differentiator is the level of density you require. [...]
I'm not sure you understood my point. The underlying issue here is that the added density for mainstream sockets makes the socket itself more expensive. Therefore, low-end systems that are just meant to be used as the average office or home PC become disproportionately more expensive. I don't think it's fair to make mainstream sockets more expensive just so there's a budget TR.

As far as I'm concerned, 10c/20t really ought to be the limit for mainstream sockets. Of course, AMD's chiplets come in 8-core clusters, so I guess 16c/32t would be their limit. As I've said in another thread, CPU workloads split into those with a finite thread count (where individual applications tend to go no higher than 12 threads) and those that are completely scalable and will use as many cores as you can throw at them.

The way I see it, the mainstream socket ought to cater only to the workloads with a finite thread count; that's where 10c/20t comes in. If you do a lot of work that can scale up, you should get a platform that scales well beyond 20 threads. This keeps the market more clearly defined and makes low-end parts cheaper (as they should be).
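The "finite thread count" argument can be sketched with Amdahl's law; the 90% parallel fraction below is purely illustrative, not a measurement of any real application:

```python
# Amdahl's law: speedup on n threads when a fraction p of the work
# can run in parallel. p = 0.90 is an illustrative desktop-app value.

def amdahl_speedup(p, n):
    return 1 / ((1 - p) + p / n)

for n in (4, 8, 12, 24, 96):
    print(f"{n:3d} threads -> {amdahl_speedup(0.90, n):.2f}x")

# Past ~12 threads the gains flatten: even 96 threads stay under 10x
# when 10% of the work is serial.
```

Fully scalable workloads behave as if p is close to 1, which is exactly the case the big-socket platforms are built for.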
blkspade:

The real differentiator is the level of density you require. [...]
i think you just hit on the real differentiator without (quite) realizing it: PCIe lanes. PCIe lanes, just as much as core count, were the reason behind my 1st and 2nd gen TR systems; i had run HEDT (Intel) for the same reasons up to that point. in the meantime, GPUs vastly improved to the point of making CrossFire and SLI a thing of the past in gaming (for most), and there was virtually no point in running a card @ x4 for gaming as GPU costs increased as well. but now i'm doing what you suggested and running a 5950X instead of the TR. from that point forward, the TR feature set was geared to the professional market AMD created by undercutting Xeon while outperforming it at the same time; nobody foresaw (even AMD) just how big, but more importantly how influential, this market would be. if you're even a middling "content creator", TR pays for itself. but imho i agree with schmidtbag on core counts.