Review: Intel Core i9-14900KS processor

Stock 7800X3D laughing his ass off!... Get Some! Agent-A01, you've got to give AMD credit for that X3D gaming processor. This is a gaming website. Give me a 7800X3D + RTX 4090 and I will show you the fastest gaming computer on Earth. Stock clocks.
Airbud:

Stock 7800X3D laughing his ass off!... Get Some! Agent-A01, you've got to give AMD credit for that X3D gaming processor. This is a gaming website. Give me a 7800X3D + RTX 4090 and I will show you the fastest gaming computer on Earth. Stock clocks.
Strange, my 7800X3D with tweaked 6200C28 isn't faster than my 14900KS with 8600C38 DDR5 in the games I play 😛 But who cares, both are fast as f... 😀 Min FPS are higher on my Intel PC, but often max FPS is higher on the AMD PC. I prefer Intel for the main gaming rig.
nizzen:

Strange, my 7800X3D with tweaked 6200C28 isn't faster than my 14900KS with 8600C38 DDR5 in the games I play 😛 But who cares, both are fast as f... 😀 Min FPS are higher on my Intel PC, but often max FPS is higher on the AMD PC. I prefer Intel for the main gaming rig.
You had to try and do that though!....Fanboy? clock clock here, clock clock there, just to barely tie a stock processor!...Lol X3D....Nuff Said.
Airbud:

You had to try and do that though!....Fanboy? clock clock here, clock clock there, just to barely tie a stock processor!...Lol X3D....Nuff Said.
You sound like a one-sided, blind fanboy... I'm a performance fanboy 😉
pegasus1:

$100 for an extra 200 MHz,
$100 for an extra 200 MHz over 6000 MHz... how much is that ~3% worth, and only in boost situations? All the rest of the time it's stuck at the exact same frequency.
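For reference, the arithmetic above works out like this. A quick sketch, assuming the comparison is the 14900K's 6.0 GHz max boost vs the KS's 6.2 GHz:

```python
# Relative gain of a 6.2 GHz boost clock over a 6.0 GHz baseline.
base_ghz = 6.0    # 14900K max boost (assumed baseline)
ks_ghz = 6.2      # 14900KS max boost

gain_pct = (ks_ghz - base_ghz) / base_ghz * 100
print(f"+{gain_pct:.1f}% peak frequency")  # roughly +3.3%
```

And that gain only applies while the chip is actually sitting at its top boost bin, which is the poster's point.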
Administrator
Agent-A01:

You'll just have to accept the fact that there are people out there that have more experience and/or knowledge than tech reviewers when it comes to things like memory overclocking. Buildzoid included, who isn't that experienced with DDR5 OCing. It's not Hilbert's job to know how to OC memory. Nor does he need to. There are QVL kits that just work that are much faster than what the review used though.
You are absolutely right there; nitty-gritty memory overclocking also easily takes up a full workday to find that sweet spot, and I would not even have the patience for it. But at such frequencies (8200 MT/s) you need to be an expert. In that context, and within my area of expertise, I do want to note that the 8200 MT/s DIMMs at the time refused to operate at that XMP profile due to the lack of highly binned memory controllers. You need a very specially binned CPU to run stably at that frequency. We confirmed this with MSI R&D and TeamGroup at the time; their experience was the same: only lucky sample CPUs were able to reach that value.
Agent-A01:

Words like "I doubt" and "won't matter" are followed by opinions. If it didn't really matter then there would be no market for products like that.
That's because I'm speaking on something that hasn't been tested and where I can't test it myself. Evidence (from other OSes) suggests that the Windows scheduler causes games to run a lot slower than they ought to. From what I recall in Linux, the results are kind of swapped, where productivity workloads tend to benefit from the V-cache but gaming doesn't as much (at least less than it does in Windows).
So to reiterate: AMD implemented V-cache specifically to accelerate gaming performance. That is exactly what they say it is for in the first sentence of their info page. The additional benefit is that it also helps other memory-sensitive workloads.
Yes, because in Windows, that's what it benefits most. Most other workloads either utilize all cores (where the scheduler isn't really going to compromise performance much), aren't all that memory intensive, or run in short enough bursts to not swap cores. You'll find a lot of the X3D chips don't run faster than their non-X3D counterparts in such tasks.
The inter-core communication issue is a direct result of the CCD design of Zen chips.
It's not limited to just that, but that's where it's the 2nd most severe (swapping between NUMA nodes is the greatest problem).
It can help a lot, depending on the game. And it can do nothing at all in games or workloads that aren't memory-bound.
Yes, generally. Though it doesn't have to be all that memory-intensive for the V-cache to help out; all it takes is needing far more instructions than will fit in the L1 and L2 caches (or L3, if you're talking about a separate CCD).
I would hardly consider a 50% boost for a 7800X3D (a single 8-core CCD) over the non-X3D single 8-core part insignificant or imperceptible.
Source? I'm not aware of the 7800X3D getting that huge of a boost over the 7700X (the closest non-X3D equivalent I can think of) in games.
Regarding Intel, memory speed is much more important. Benchmarks that use 6000 for both AMD V-cache chips and the Intel i9, like the review here, provide irrelevant data for people interested in realistic comparisons.
Agreed, though that's kind of a given. If you compare a 14900K to a 7950X (3D or not, doesn't matter), the Intel chip operates at a significantly higher frequency with more physical cores and a smaller L3 cache. All of that means it needs faster RAM. The faster the cores can compute, the more bandwidth they need to feed their caches, especially if the task is memory-intensive. If the cache isn't big enough, it will need to swap out data more often, which needs more bandwidth from RAM. For each additional core used, you're splitting the bandwidth between them. Meanwhile, when the Windows scheduler decides to unnecessarily swap a thread to a different (likely idle) core, more bandwidth is needed to flush and rewrite the entire L1 and L2 caches for the new core, rather than just some instructions. Since the L3 isn't that large for so many cores, that could be a significant blow to performance.
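The bandwidth-splitting point above can be put into rough numbers. This is an idealized sketch (standard peak dual-channel DDR5 rates, an even split across cores, no cache hits), not a model of real memory traffic:

```python
# Idealized per-core share of peak DRAM bandwidth when a fully
# memory-bound workload is split evenly across active cores.
def ddr5_peak_gbs(mts: int, channels: int = 2, bus_bytes: int = 8) -> float:
    """Peak bandwidth in GB/s: transfers/s x 8 bytes per channel x channels."""
    return mts * bus_bytes * channels / 1000

def per_core_gbs(mts: int, active_cores: int) -> float:
    return ddr5_peak_gbs(mts) / active_cores

# 6000 MT/s dual channel is ~96 GB/s total; eight busy cores
# each get ~12 GB/s, and every extra active core shrinks the share.
print(per_core_gbs(6000, 8))  # 12.0
```

Real workloads hit cache most of the time, so actual per-core DRAM demand is far lower, but the split gets worse exactly when the caches are too small, which is the argument being made.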
In this case, the gap of performance between it and the X3D chip is much less with faster memory.
I'm curious what game you're referring to. Getting such significant performance increases at frame rates that low from higher DRAM clock speeds is what you'd typically find when using an iGPU. But otherwise, I'm aware the 14900K can narrow the gap against an X3D chip with faster memory. My point was really that the only time it makes a significant enough difference is when we're talking a few hundred FPS, at which point it doesn't matter anymore. That's why I was saying an overclocked 14600K or stock 5800X3D is plenty good enough for anyone for today's games. Sure, you could probably find exceptions, but most people need to draw the line somewhere - not everyone has an endless budget just to accommodate a couple games they might only play for a year.
So if a 7800X3D using a 14,000 MT/s memory kit existed, then the cache really wouldn't make any difference. But since it doesn't, AMD needed to employ V-cache to increase performance.
Agreed; I never argued against that.
Airbud:

Stock 7800X3D laughing his ass off!... Get Some! Agent-A01, you've got to give AMD credit for that X3D gaming processor. This is a gaming website. Give me a 7800X3D + RTX 4090 and I will show you the fastest gaming computer on Earth. Stock clocks.
I have a 7800X3D in the household and I've built a few PCs with them for buddies too. A tweaked i9 is faster when paired with fast memory. But the 7800X3D is great for people who only game and don't want to spend more money, time, or effort chasing the last few percentage points.
Hilbert Hagedoorn:

You are absolutely right there; nitty-gritty memory overclocking also easily takes up a full workday to find that sweet spot, and I would not even have the patience for it. But at such frequencies (8200 MT/s) you need to be an expert. In that context, and within my area of expertise, I do want to note that the 8200 MT/s DIMMs at the time refused to operate at that XMP profile due to the lack of highly binned memory controllers. You need a very specially binned CPU to run stably at that frequency. We confirmed this with MSI R&D and TeamGroup at the time; their experience was the same: only lucky sample CPUs were able to reach that value.
I am not surprised, as the ACE motherboard is a 4-DIMM board. Trying to get 8200 to work on that board would be a hair-pulling experience even for experts. Where you may have only been able to achieve 8000 or 7800 on that board, the same setup would work without issue on a 2-DIMM board like the ASUS Z790 APEX. Any top-end 4-DIMM board achieving 8200 would require a very strong IMC and also a win in the motherboard lottery (yes, there is variance board to board). Users with a 4-DIMM board will have to accept that the maximum supported frequency may simply not be achievable, despite being on the QVL. A realistic speed for those boards is 7800-8000 maximum for most users.

TL;DR: For anyone trying to get high memory speeds, the recommendation is to stick to 2-DIMM boards, as it's always significantly easier to achieve and more stable.

Anyways, while we are on the topic: is there any particular reason you haven't switched to 7200 XMP kits (32GB/48GB)? 7200 is only marginally more expensive nowadays, and any Z790 board and 13th/14th-gen CPU will support it without issue. The current 64GB dual-rank memory kit isn't a realistic buy for most users, and it can also impose performance and stability penalties; it's not recommended for Intel or AMD, but with Intel it can impact performance more due to the lack of extra cache. At least with AMD (V-cache) chips you can pair them with 5200 JEDEC speeds and it only impacts performance a tiny amount, whereas Intel relies on faster memory for performance.

And this isn't something only you do; many reviews use the same 6000 XMP setups and don't understand the issue. My guess is their thought process is that RAM should be the same across all test platforms to make it an "even playing field", but that doesn't make sense, as AMD can only support 6000/6200 maximum whereas Intel can do >8000. It's like comparing two cars' top speeds: Car 1 has tires rated to 181 MPH and Car 2 has 165 MPH-rated tires. To make it "fair", the reviewer puts the same 165 MPH tires on both cars, despite one being able to go faster than 165. That's an extremely basic analogy, though.

So it makes sense to use 6000 XMP for AMD test setups (not 64GB though; stick to single rank) and 7200 for Intel platforms (minimal additional cost). I suppose a middle ground would be to include both 6000 and 7200 results when benchmarking CPUs, but I think that's more work than necessary, especially when you are time-constrained.
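As a rough check on the kit speeds being debated, peak dual-channel DDR5 bandwidth scales linearly with the transfer rate. This ignores timings and latency, which matter at least as much in games:

```python
# Peak dual-channel DDR5 bandwidth (GB/s) for the kits in question:
# transfer rate x 8-byte bus per channel x 2 channels.
def ddr5_peak_gbs(mts: int) -> float:
    return mts * 8 * 2 / 1000

for kit in (6000, 7200, 8200):
    uplift = (ddr5_peak_gbs(kit) / ddr5_peak_gbs(6000) - 1) * 100
    print(f"{kit} MT/s: {ddr5_peak_gbs(kit):.1f} GB/s (+{uplift:.0f}% over 6000)")
```

So a 7200 kit carries roughly 20% more raw bandwidth than 6000, and 8200 roughly 37% more, which is why the "same RAM on both platforms" policy hides a real difference on the platform that can actually use it.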
schmidtbag:

That's because I'm speaking on something that hasn't been tested and where I can't test it myself. Evidence (from other OSes) suggests that the Windows scheduler causes games to run a lot slower than they ought to. From what I recall in Linux, the results are kind of swapped, where productivity workloads tend to benefit from the V-cache but gaming doesn't as much (at least less than it does in Windows).
Of course it has been tested; you just don't know about it. While the Windows scheduler can hurt performance, CPUs that have built-in thread management can mitigate it. That's what Intel's Thread Director is for, and Intel APO is used for games specifically. AMD uses similar tech with the V-cache series; their issue is that, to be blunt, it sucks (bad software) and can make a 7950X3D worse than a 7800X3D due to the CCD design.
schmidtbag:

Yes, because in Windows, that's what it benefits most. Most other workloads either utilize all cores (where the scheduler isn't really going to compromise performance much), aren't all that memory intensive, or run in short enough bursts to not swap cores. You'll find a lot of the X3D chips don't run faster than their non-X3D counterparts in such tasks.
Memory-bound applications (and this includes all levels of the cache hierarchy) aren't tied to an OS. It just happens that those situations occur a lot in gaming. Of course V-cache won't help when things already fit within a normal CPU's cache and don't need to be swapped.
schmidtbag:

It's not limited to just that, but that's where it's the 2nd most severe (swapping between NUMA nodes is the greatest problem).
There's always going to be a penalty when something has to swap to system memory. If the additional V-cache means that operation doesn't have to happen, then it will improve performance. The other way to mitigate that is to make system memory faster. You're explaining a phenomenon that impacts ALL setups, obviously. But specifically with respect to Zen CPUs, those with dual CCDs are penalized much more because of the additional latency. The 3D cache does not help that specific issue.
schmidtbag:

Yes, generally. Though, it doesn't have to be all that memory intensive for the V-cache to help out. All it takes is needing far more instructions than what will fit in the L1 and L2 caches (or L3 if you're talking a separate CCD).
See above. "Memory bound" includes the L1/L2/L3 caches. If it doesn't fit in those and swaps to system memory, it is a memory-bound situation.
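To make the "fits in cache or not" line concrete, here's a tiny sketch using the L3 capacities of the chips in this thread (96 MB on the 7800X3D with V-cache, 36 MB on the 14900K); the 50 MB working set is a made-up example:

```python
# Does a hot working set spill past L3 into DRAM? If it fits,
# extra cache buys little; if it spills, DRAM speed starts to matter.
L3_BYTES = {"7800X3D": 96 * 2**20, "14900K": 36 * 2**20}

def spills_to_dram(working_set_bytes: int, cpu: str) -> bool:
    return working_set_bytes > L3_BYTES[cpu]

ws = 50 * 2**20  # hypothetical 50 MB hot set
print(spills_to_dram(ws, "7800X3D"))  # False: fits in the X3D's 96 MB
print(spills_to_dram(ws, "14900K"))   # True: spills past 36 MB
```

Real access patterns are messier than a single working-set number, but this is the asymmetry both posters are circling: the same game can be cache-resident on one chip and DRAM-bound on the other.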
schmidtbag:

Source? I'm not aware of the 7800X3D getting that huge of a boost over the 7700X (the closest non-X3D equivalent I can think of) in games.
I understand that you have little experience with gaming or testing hardware, but you could have spent one minute and found something on YouTube. [youtube=Gu12QOQiUUI] I and many others have also seen great results from our own testing.
schmidtbag:

Agreed, though that's kind of a given. If you compare a 14900K to a 7950X (3D or not, doesn't matter), the Intel chip operates at a significantly higher frequency with more physical cores and a smaller L3 cache. All of that means it needs faster RAM. The faster the cores can compute, the more bandwidth they need to feed their caches, especially if the task is memory-intensive. If the cache isn't big enough, it will need to swap out data more often, which needs more bandwidth from RAM. For each additional core used, you're splitting the bandwidth between them. Meanwhile, when the Windows scheduler decides to unnecessarily swap a thread to a different (likely idle) core, more bandwidth is needed to flush and rewrite the entire L1 and L2 caches for the new core, rather than just some instructions. Since the L3 isn't that large for so many cores, that could be a significant blow to performance.
Yes. That is why RAM speed is less important for V-cache CPUs than for Intel. That is why I think any review should include at least a decent 7200 XMP kit instead of the slow gen-1 6000 XMP kits.
schmidtbag:

I'm curious what game you're referring to. Getting such significant performance increases at frame rates that low from higher DRAM clock speeds is what you'd typically find when using an iGPU. But otherwise, I'm aware the 14900K can narrow the gap against an X3D chip with faster memory. My point was really that the only time it makes a significant enough difference is when we're talking a few hundred FPS, at which point it doesn't matter anymore. That's why I was saying an overclocked 14600K or stock 5800X3D is plenty good enough for anyone for today's games. Sure, you could probably find exceptions, but most people need to draw the line somewhere - not everyone has an endless budget just to accommodate a couple games they might only play for a year.
There are tons of games with huge gains, but not all the reviews you see even care about memory, as many are GPU-limited. iGPUs are extremely slow: take the fastest iGPU, and it's only getting 20 FPS in the latest titles. An Intel iGPU on a 6000 XMP kit vs 8200 isn't going to magically get much faster; core/shader performance is extremely lacking and is the bottleneck. Maybe there was a time when it mattered some, but there is no iGPU relevant today where it would matter. A 14900K doesn't make sense for most users, but the point is that there are people who want the best. That's why I think that when a tech review website posts a review titled "7800X3D = the fastest gaming setup that exists" and then tests both Intel and AMD with 5600 or 6000 XMP, it's pretty much a lie to consumers and spreading misinformation, as we "power users" already know what's faster from personal testing. But when it comes down to budget, or what makes sense financially, well, that's an entirely different subject.
Agent-A01:

Anyways, while we are on the topic: is there any particular reason you haven't switched to 7200 XMP kits (32GB/48GB)? 7200 is only marginally more expensive nowadays, and any Z790 board and 13th/14th-gen CPU will support it without issue.
It does not matter what Hilbert does: someone will just say "why 7200 when I can run 8000?" Someone else will complain about different memory setups for Intel vs AMD. Someone else will complain that a 2x32GB kit does not exist at speeds faster than 6800, so why 7200? (I am guilty of that one.) Someone else is going to complain about using 1080p on €3000 hardware just to force a difference in memory speed to show up. Someone is going to claim that 600 FPS vs 500 FPS in Rainbow Six Siege is going to make a huge difference on a 144-240 Hz monitor. Someone is going to complain about BIOS power limits on/off. The Guru3D TeamGroup memory review shows minimal performance difference in synthetic benchmarks between 6000 and 8000 speeds: https://www.guru3d.com/review/tforce-xtreem-48gb-ddr5-8200-mhz-cl38-review/ There is no "winning" when writing reviews; someone is always going to have a problem with the review.
TLD LARS:

It does not matter what Hilbert does: someone will just say "why 7200 when I can run 8000?" Someone else will complain about different memory setups for Intel vs AMD. Someone else will complain that a 2x32GB kit does not exist at speeds faster than 6800, so why 7200? (I am guilty of that one.) Someone else is going to complain about using 1080p on €3000 hardware just to force a difference in memory speed to show up. Someone is going to claim that 600 FPS vs 500 FPS in Rainbow Six Siege is going to make a huge difference on a 144-240 Hz monitor. Someone is going to complain about BIOS power limits on/off.
I think you are vastly overthinking this. It's very simple. 1. 7200 will work for 99% of users with minimal cost increase; 8000 will not. 2. AMD has a vastly different memory controller that's limited to ~6000-6200. Intel is not limited in that way, so that's forcing a hard handicap on one platform because the other cannot handle higher speeds. Informed buyers aren't going to buy 6000 XMP kits, so testing them is irrelevant outside of seeing the difference between those and faster speeds. 7200 makes sense because it will 'just work' for most users at only a small cost over 6000. The rest of your points are irrelevant to the discussion. Why mention a 2x32GB kit? Most users only need 16-32GB max.
TLD LARS:

The Guru3D TeamGroup memory review shows minimal performance difference in synthetic benchmarks between 6000 and 8000 speeds: https://www.guru3d.com/review/tforce-xtreem-48gb-ddr5-8200-mhz-cl38-review/
That review has nothing to do with games so why bring it up?
TLD LARS:

There is no "winning" when writing reviews; someone is always going to have a problem with the review.
What happens when Intel 15th gen comes out and supports 10,000 MT/s out of the box, and the next generation of Zen chips is still stuck around 6000? Are we still going to stick to 6000 because a few people are complaining about them being different? Let them complain; you can't please everyone.
From personally owning both Intel and AMD platforms, plus upgrading my RAM from 6400 MHz to 7800 MHz, I understand what @Agent-A01 and @Carfax are explaining. Agent-A01, Carfax, Chispy, and Nizzen have presented factual information that I verified with my own Intel setup.

To my understanding, this conversation is about adjusted memory timings that can provide better performance from an Intel processor with faster RAM? This is indeed the truth. When I first made the jump from 6400 to 7800, I didn't notice anything, due to my inexperience and the usual tasks I do on my computers. I noticed the change when working with extremely large video files and encoding; the extra bandwidth did play a role in faster completion times.

I'm nowhere near as knowledgeable about memory timings or other aspects of the PC as the gurus mentioned above. I would always face blue screens, lots of config time, or warm temps that would cause me to call it quits. I finally accomplished some stable alterations from reading Chispy's post on my previous kit of G.Skill Trident Z5 DDR5 6400 MHz. I was able to enable my XMP profile, and this was also with 4 sticks at 16GB apiece. Here are some benchmarks from that config. The number to really notice is the CPU score of 23,705 on my 13900K. It seems I have a good silicon-lottery piece with a nice controller. I'm sure in more experienced hands this could've been pushed even further. I think I did ok. 😉

Timespy - Graphics Score 39,286 - Overall Score 35,760 https://www.3dmark.com/3dm/90823786?
Port Royal - 28,280 https://www.3dmark.com/3dm/92746059?
Speedway - 10,987 https://www.3dmark.com/sw/495147
Popped 11,080 in Speedway, but the sysinfo config file was giving me an issue. https://www.3dmark.com/sw/946792

The idea of the 14900KS grabbing the crown wouldn't surprise me at all. I wasn't shocked about the 7950X3D or the 7800X3D either. I've never seen a BS post from @Agent-A01; he has proven to be extremely knowledgeable in vast subjects of PC hardware. Everything he posts in this thread is factual and can easily be researched. No emotions or opinions. The best teacher is hands-on experience with hardware. Why are so many of you arguing against him? He didn't say anything wrong or false. @Carfax and @nizzen didn't either.

Guru3D, the house that Hilbert built, is where the big boys play. I have personally seen tech YouTubers from Gamers Nexus, Digital Foundry, Jayz2cents, Linus, and others mention Guru3D as the place for hardcore PC enthusiasts. We have some of the most educated and knowledgeable people communicating with us. Instead of fighting them, why not embrace them?
Agent-A01:

2. AMD has a vastly different memory controller that's limited to ~6000-6200. Intel is not limited in that way, so that's forcing a hard handicap on one platform because the other cannot handle higher speeds. Informed buyers aren't going to buy 6000 XMP kits, so testing them is irrelevant outside of seeing the difference between those and faster speeds.
No, it is not forcing a handicap. The more correct statement is: the Intel architecture is more dependent on fast memory than AMD's is. A couple of years back it was the opposite: the Ryzen 7 1800X was more memory-dependent than the 7700K was.
Agent-A01:

7200 makes sense because it will 'just work' for most users at only a small cost over 6000.
That depends on the memory kit size needed. If you want a 16GB kit, then yes, but for 32-48GB kits the price difference is enough to buy a good mouse, keyboard, 1TB SSD, or a 64GB 6000 kit instead.
Agent-A01:

The rest of your points are irrelevant to the discussion. Why mention a 2x32GB kit? Most users only need 16-32GB max.
Because I already ran out of system memory with my 16GB, 5800X, 6900XT, 1440p setup. I had to close the browser to free up 1GB of system memory to be able to play games more fluently and without texture pop-in. Now I am using 20GB of system memory consistently, so a 32GB kit would need to be upgraded if I bought a 4090-equivalent GPU and began running 4K and RT instead of what the 6900XT is capable of. And because 7200 speeds on 4 sticks are pretty much impossible, I would need to throw out the 32GB memory kit when upgrading to a 4090 equivalent.
Agent-A01:

That review has nothing to do with games so why bring it up?
I bring it up specifically because it has nothing to do with games; the memory speed bonus should therefore be more obvious in these cases, because there is no potential GPU limit to hold back the CPU.
Agent-A01:

What happens when Intel 15th gen comes out and supports 10,000 MT/s out of the box, and the next generation of Zen chips is still stuck around 6000? Are we still going to stick to 6000 because a few people are complaining about them being different?
I would call that situation highly unlikely, when the 8700G already shows good memory speeds, and a "15900" or whatever it would be called is not going to suddenly jump 2500 in memory speed. It is more likely that a "15900" would have an architecture change, with more cache or some other design change, to be less dependent on memory speed than 12th/13th/14th-gen CPUs. Becoming less memory-dependent will help Intel much more than needing to go for 10000 memory speeds to be competitive. The things I did not quote, I left out on purpose, because I do not want to comment on them; it has not been a success in the past.
TLD LARS:

No, it is not forcing a handicap. The more correct statement is: the Intel architecture is more dependent on fast memory than AMD's is. A couple of years back it was the opposite: the Ryzen 7 1800X was more memory-dependent than the 7700K was.
It is independent of architecture. Non-3D parts are memory-bound too, which is why they can be 50% slower in some titles.
TLD LARS:

That depends on the memory kit size needed. If you want a 16GB kit, then yes, but for 32-48GB kits the price difference is enough to buy a good mouse, keyboard, 1TB SSD, or a 64GB 6000 kit instead.
32GB 7200 kits are only around $110-130 depending on the brand; 6000 XMP kits are $100+. Good luck buying a 1TB SSD with $10 in savings. 64GB is over $200 and is a waste of money for most people.
TLD LARS:

Because I already ran out of system memory with my 16GB, 5800X, 6900XT, 1440p setup. I had to close the browser to free up 1GB of system memory to be able to play games more fluently and without texture pop-in. Now I am using 20GB of system memory consistently, so a 32GB kit would need to be upgraded if I bought a 4090-equivalent GPU and began running 4K and RT instead of what the 6900XT is capable of. And because 7200 speeds on 4 sticks are pretty much impossible, I would need to throw out the 32GB memory kit when upgrading to a 4090 equivalent.
What difference does what you experienced make? 99% of people will be good for years with 32GB; most are still OK with 16GB. 64GB is a detriment to performance and to most people's wallets. Besides that, 4 sticks or dual-rank kits are not recommended by Intel or AMD above JEDEC speeds (5200). And no, you wouldn't have to get rid of a 32GB kit with a 4090. I own a 4090 and a 4K 240Hz OLED using RT, and I've not once run into a situation where it wasn't enough. I own a 48GB kit because I like to play around with hardware, but not once has the increased capacity been beneficial.
TLD LARS:

I bring it up specifically because it has nothing to do with games; the memory speed bonus should therefore be more obvious in these cases, because there is no potential GPU limit to hold back the CPU.
A fully synthetic benchmark that tests the CPU only, and not the memory subsystem, isn't going to show any increase. Those are unrealistic tests that don't matter to most people; the point is that games are relevant and what you posted isn't. In realistic scenarios, even a 4090 isn't fully utilized in many games. That's why memory speed matters.
TLD LARS:

I would call that situation highly unlikely, when the 8700G already shows good memory speeds, and a "15900" or whatever it would be called is not going to suddenly jump 2500 in memory speed. It is more likely that a "15900" would have an architecture change, with more cache or some other design change, to be less dependent on memory speed than 12th/13th/14th-gen CPUs. Becoming less memory-dependent will help Intel much more than needing to go for 10000 memory speeds to be competitive.
The next-gen Ryzen coming out will have zero increase in supported memory speeds because it's the same architecture with no changes to the IMC. Golden chips are already at the brink of 9000, so it would not surprise me to see the next generation of Intel support 10000 with an OC.
Agent-A01:

Of course it has been tested. You just don't know about it.
Reading back, it seems I wasn't being very clear at all about the kind of test I was referring to, so the misunderstanding is all on me. Anyway, the kind of test I meant would be something like a game whose threads are locked to their designated cores (on Windows), comparing either a 5700X vs a 5800X3D, or an Intel chip with 6000 MT/s vs 8400 MT/s.
While the Windows scheduler can hurt performance, CPUs that have built-in thread management can mitigate it. That's what Intel's Thread Director is for, and Intel APO is used for games specifically. AMD uses similar tech with the V-cache series; their issue is that, to be blunt, it sucks (bad software) and can make a 7950X3D worse than a 7800X3D due to the CCD design.
Mitigate is an appropriate word; the solutions you speak of do help somewhat. I would argue not enough.
Memory-bound applications (and this includes all levels of the cache hierarchy) aren't tied to an OS. It just happens that those situations occur a lot in gaming. Of course V-cache won't help when things already fit within a normal CPU's cache and don't need to be swapped.
Right, they're not bound to an OS, but the OS's scheduler can have an impact on memory-bound applications. My point about the V-cache was that even without it, AMD has more cache. So, since Intel has less of it to feed more cores, the cache is going to have to be cleared and re-written more often, thereby demanding more memory bandwidth.
There's always going to be a penalty when something has to swap to system memory. If the additional V-cache makes it to where that operation doesn't have to happen then it will improve performance. The other way to mitigate that is to make system memory faster.
100% agree, which is also why I was saying that the V-cache helps mitigate the problem mentioned above, since the cores that are being swapped don't have to feed everything from RAM.
You're explaining a phenomenon that impacts ALL setups, obviously. But specifically with respect to Zen CPUs, those with dual CCDs are penalized much more because of the additional latency. The 3D cache does not help that specific issue.
Agreed; I never said otherwise.
See above. "Memory bound" includes the L1/L2/L3 caches. If it doesn't fit in those and swaps to system memory, it is a memory-bound situation.
Agreed; I never said otherwise.
I understand that you have little experience with gaming or testing hardware, but you could have spent one minute and found something on YouTube.
Skimming through each test in that video, Assetto Corsa seemed to be the only one with such an extreme difference; most were below a 30% improvement. Maybe there was another test in there where the 1% lows were greatly improved that I missed. In any case, your comment is uncalled for; it's not hard to cherry-pick results. There's no doubt that V-cache and faster memory bring a significant increase in performance, but to act like 50% is expected is wrong, according to your own source.
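For what it's worth, the uplift arithmetic behind those percentages is just a ratio (the FPS pairs below are illustrative, not taken from the video):

```python
def uplift_pct(fps_base: float, fps_fast: float) -> float:
    """Relative FPS gain, in percent, of one result over a baseline."""
    return (fps_fast / fps_base - 1) * 100

# 120 -> 156 FPS is a 30% uplift; it would take 120 -> 180 to hit 50%.
print(uplift_pct(120, 156))  # 30.0 (within float rounding)
print(uplift_pct(120, 180))  # 50.0
```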
That is why I think any review should include at least decent 7200 XMP memory instead of the slow gen 1 6000 XMP kits.
A review should include both, because why would you pair 7200 with an X3D chip when it isn't really necessary?
There are tons of games where there are huge gains. But not all reviews you see even care about memory, as many are GPU-limited.
Games that are CPU-limited tend to yield more FPS than we can differentiate. So ultimately, the CPU doesn't really matter that much anymore in games.
iGPUs are extremely slow. Take the fastest iGPU and it's only getting 20 FPS in the latest titles. An Intel iGPU on a 6000 XMP kit vs 8200 isn't going to be magically much faster. Core/shader performance is extremely lacking and is the bottleneck. Maybe there was a time when it mattered some, but there is no relevant iGPU today where it would matter.
Yes... I know... my point is that iGPUs are the only kind of chip that basically sees a linear performance gain no matter how much bandwidth you throw at them, because they really are that starved for bandwidth. CPUs tend to have an upper limit, hence why you wouldn't need to throw 7200 MT/s at an X3D chip.
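For reference, peak DDR5 bandwidth really does scale linearly with transfer rate, which is why a bandwidth-starved iGPU scales with it too. A quick back-of-envelope sketch (assuming a standard dual-channel setup with 8 bytes per 64-bit channel per transfer):

```python
# Theoretical peak DDR5 bandwidth: transfers/s * 8 bytes per 64-bit channel * channels.
def ddr5_peak_gbs(mts, channels=2):
    """mts: transfer rate in MT/s; returns peak bandwidth in GB/s."""
    return mts * 1e6 * 8 * channels / 1e9

for mts in (6000, 7200, 8200):
    print(f"DDR5-{mts}: {ddr5_peak_gbs(mts):.1f} GB/s peak")
# DDR5-6000: 96.0 GB/s peak
# DDR5-7200: 115.2 GB/s peak
# DDR5-8200: 131.2 GB/s peak
```

These are theoretical ceilings; real sustained bandwidth is lower, but the linear scaling with MT/s is the point being made above.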
A 14900K doesn't make sense for most users. But the point is that there are people who want the best. That's why I think that when a tech review website posts a review titled "7800X3D = the fastest gaming setup that exists" and then tests both Intel and AMD with 5600 or 6000 XMP, it's pretty much a lie to consumers and spreading misinformation, as us "power users" already know what's faster from personal testing.
Fair enough; I can't argue with that.
nizzen:

Strange my 7800x3d with tweaked 6200c28 isn't faster than my 14900ks with 8600c38 ddr5 in the games I play 😛 But who cares, both are fast as f... 😀 Min fps are higher on my Intel pc, but often max fps is higher on the AMD pc. I prefer Intel for the main gamer.
Nice .....
The 7800X3D is for the top 5% of PC users and the 14900KS is for the top 0.1% of PC users 😛. 95% of PC users are more than fine with an R5 7600 or a 13600/14600. Once the RTX 5090 arrives we will need faster CPUs than the 7800X3D/14900KS 😎
Agent-A01:

The next-gen Ryzen coming out will have zero increase in supported memory speeds because it's the same architecture with no changes to the IMC. Golden chips are already at the brink of 9000, so it would not surprise me to see the next generation of Intel support 10000 with an OC.
Eh, the new 8000-series Ryzen APUs are running 10000 MT/s pretty easily. Also worth mentioning that 8000 MT/s is viable on 7000-series CPUs in 1:2 mode, and can be faster than 1:1 mode at 6000 MT/s. You don't see a lot of setups like that, mainly because only a few boards can run those speeds and the timings have to be tight to overcome the penalty of the slower memory-controller clock. Higher speeds are likely possible on the current Zen 4 IMCs; it's mainly the lack of memory multipliers above 8000 and of good boards that prohibits it, which may be solved with an AGESA update, as has been the case in the past. People seem to think that 1:2 mode is pointless, but it really isn't: even on the single-CCD chips the write speeds improve, power consumption is lower, and the latency goes down once you get past the dead-fish zone. Bonus HWBot submission of a guy running a stable 8000 MT/s with 51.3 ns in AIDA and 106 GB/s write; it may not be as good as Alder Lake/Raptor Lake, but it does show there is headroom: https://hwbot.org/submission/5433366_domdtxdissar_aida64___memory_read_ddr5_sdram_109147_points And 10600 MT/s on an 8500G with a modest 1DPC board: https://hwbot.org/submission/5490274_areng.valueoc_memory_frequency_ddr5_sdram_5303.4_mhz I think I saw a post on overclock.net of this same guy running the iGPU at 9000 MT/s CL38, lol. The next generation of motherboards and the presence of a 5 nm I/O die will likely be the deciding factor as to whether or not Ryzen supports higher speeds.
user1:

Eh, the new 8000-series Ryzen APUs are running 10000 MT/s pretty easily. Also worth mentioning that 8000 MT/s is viable on 7000-series CPUs in 1:2 mode, and can be faster than 1:1 mode at 6000 MT/s. You don't see a lot of setups like that, mainly because only a few boards can run those speeds and the timings have to be tight to overcome the penalty of the slower memory-controller clock. Higher speeds are likely possible on the current Zen 4 IMCs; it's mainly the lack of memory multipliers above 8000 and of good boards that prohibits it, which may be solved with an AGESA update, as has been the case in the past. People seem to think that 1:2 mode is pointless, but it really isn't: even on the single-CCD chips the write speeds improve, power consumption is lower, and the latency goes down once you get past the dead-fish zone. Bonus HWBot submission of a guy running a stable 8000 MT/s with 51.3 ns in AIDA and 106 GB/s write; it may not be as good as Alder Lake/Raptor Lake, but it does show there is headroom: https://hwbot.org/submission/5433366_domdtxdissar_aida64___memory_read_ddr5_sdram_109147_points And 10600 MT/s on an 8500G with a modest 1DPC board: https://hwbot.org/submission/5490274_areng.valueoc_memory_frequency_ddr5_sdram_5303.4_mhz I think I saw a post on overclock.net of this same guy running the iGPU at 9000 MT/s CL38, lol. The next generation of motherboards and the presence of a 5 nm I/O die will likely be the deciding factor as to whether or not Ryzen supports higher speeds.
Running 1:2 mode is pointless for single-CCD chips because FCLK bandwidth is less than memory bandwidth; it makes zero sense to sacrifice latency for more bandwidth. Dual-CCD parts have two FCLK links, which means more bandwidth, but it's still pointless in games because they will be scheduled to a single CCD. There are some benefits for productivity workloads, but in games there are none. V-cache will largely make the modes irrelevant for gaming unless it runs out, and in that case 1:1 mode is always going to outperform 1:2 mode.
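The mismatch described here can be sketched with a rough calculation. The 32-bytes-per-FCLK-cycle read width used below is a commonly cited figure for Zen 4's Infinity Fabric and should be treated as an assumption, not a datasheet value:

```python
# Back-of-envelope: per-CCD Infinity Fabric read bandwidth vs dual-channel DDR5.
# Assumes 32 bytes read per FCLK cycle per CCD link (commonly cited for Zen 4).

def fclk_read_gbs(fclk_mhz, bytes_per_cycle=32):
    return fclk_mhz * 1e6 * bytes_per_cycle / 1e9

def dram_gbs(mts, channels=2):
    return mts * 1e6 * 8 * channels / 1e9

fabric = fclk_read_gbs(2000)   # 2000 MHz FCLK -> 64.0 GB/s per CCD link
memory = dram_gbs(8000)        # DDR5-8000 dual channel -> 128.0 GB/s peak
print(f"fabric {fabric:.0f} GB/s vs DRAM {memory:.0f} GB/s")
# fabric 64 GB/s vs DRAM 128 GB/s
```

Under these assumptions, a single CCD's fabric link saturates well below what DDR5-8000 can deliver, which is the argument for why 1:2 mode mainly pays off on dual-CCD parts or in write-heavy workloads.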
Agent-A01:

The rest of your points are irrelevant to the discussion. Why mention 2x32GB kit? Most users only need 16-32GB max.
TLD LARS:

Because I already ran out of system memory with my 16GB, 5800X, 6900XT, 1440p setup. I had to close the browser to free up 1GB of system memory to be able to play games more fluently and without texture pop-in. Now I am using 20GB of system memory consistently, so a 32GB kit would need to be upgraded if I bought a 4090-equivalent GPU and began running 4K and RT instead of what the 6900XT is capable of. And because 7200 speeds on 4 sticks are pretty much impossible, I would need to throw out the 32GB memory kit when upgrading to a 4090 equivalent.
Agent-A01:

What difference does it make what you experienced? 99% of people will be good for years with 32GB. Most are still OK with 16GB. 64GB is a detriment to performance and to most people's wallets. Besides that, 4-stick or DR kits are not recommended by either Intel or AMD above JEDEC speeds (5200). And no, you wouldn't have to get rid of a 32GB kit with a 4090. I own a 4090 and a 4K 240Hz OLED using RT, and I've not once run into a situation where it wasn't enough. I own a 48GB kit because I like to play around with hardware, but not once has the increased capacity been beneficial.
This feels a bit like: "Your findings are irrelevant because my findings show that you are using your PC wrong." I hit 28GB usage yesterday in Darktide and From the Depths, 25GB in Satisfactory, and 24GB in Cities: Skylines 2.