Core i9-13900K Early Review Indicates Significant Improvements in Comparison to the Core i9-12900K

Glottiz:

So is it a good thing or a bad thing? Skipping 4 gens sounds good to me; you save a ton of money.
Good thing or a bad thing? Skipping several generations is unquestionably the better tech investment and saves a lot of money. The bad: given the ongoing inflation and economic doldrums, AMD fans today essentially need to purchase an entire new system, including a new motherboard, DDR5 memory, and possibly a more powerful PSU, just to get started. That said, I will, like many, be waiting until early 2024 to even entertain any sort of new build. By then there should be much more tech bang for the buck, and perhaps a better job and fewer economic headwinds ahead! My company announced last week that there will be no year-end raises and that we should be happy to have a job in the first place. I knew then that for the next 16 months or so I would only be able to very selectively upgrade my already cobbled-together PC from the folding tables of the local computer show. No sales tax, and another 5% discount when paying with cash. The good news: as always, plenty of older-generation hardware stacked high on the tables. Greetings from the man on the street in Stehekin, WA!
nizzen:

In 2022, we are rendering with GPUs 😉
Again... not referring to gaming here. Many professional renderers and video editors are still heavily CPU-bound. Even the ones that use GPUs still use a lot of CPU. Kind of ironic, because Cinebench, supposedly the "one and only multithreaded task," is basically just a demo of Cinema 4D, which is a real-world application.
schmidtbag:

Again... not referring to gaming here. Many professional renderers and video editors are still heavily CPU-bound. Kind of ironic, because Cinebench, supposedly the "one and only multithreaded task," is basically just a demo of Cinema 4D, which is a real-world application.
....professional renderers and video editors.... They don't buy mainstream hardware 😛 They buy Threadripper IF they are that CPU-bound. Most likely they aren't.
nizzen:

....professional renderers and video editors.... They don't buy mainstream hardware 😛 They buy Threadripper IF they are that CPU-bound. Most likely they aren't.
Many people at home use professional renderers and video editors, especially those who are getting somewhere as a YouTuber. Granted, maybe not all of them are doing so legally, but I digress... Threadrippers are really only good as workhorses and nothing else, so they're not exactly an ideal choice for a home PC (AMD doesn't even target home enthusiasts with TRs anymore). They also suck for gaming and are becoming stupidly expensive. The 12900K/13900K is a high-end jack of all trades: even though it's highly unnecessary for 99% of games, too power-hungry in highly parallel workloads, and a bit expensive, it's the kind of CPU that can push certain competitive games where others can't, it's "fast enough" to edit 15-minute videos, and it's cheaper than buying a full-blown workstation plus a separate gaming PC. So, for those who want a machine that can work and play and have a big budget to blow, the 12900K/13900K is potentially a decent option, but the power draw is a problem. In any case, not everyone is doing all of the things I mentioned. You seemed to conveniently ignore the rest.
schmidtbag:

I sure hope they're comparing against 12th gen performance as it stands today. With all the scheduler issues, 12th gen was initially perceived to be a lot slower than it really was. 12% is actually pretty good when you consider Intel has obviously run out of ideas on how to further improve performance without pushing developers to use more instructions or threads. Keep in mind, too, that Intel is burning a lot of time trying to make sure there aren't more security issues. I wouldn't be surprised if the E-cores are Intel's way of slowly phasing out HT, since that appears to be where most of their security issues come from. As for 14th gen, I'm not so certain IPC will be 25% faster, but I'd like to be proven wrong.
I'd really rather not lose HT/SMT if at all possible. It seems like a very useful technology for getting the most out of each performance core you've got, no? If AMD can use SMT securely, surely Intel can figure out HT, right? Or are you suggesting they'll both eventually move to P- and E-core designs? I'm admittedly not very knowledgeable about this sort of thing, so correct me if I'm wrong, of course. Thanks,
aufkrawall2:

HT shouldn't be a security issue with any consumer workload. Generally, HT increases power efficiency, and P-cores still have higher efficiency than E-cores above 20 W total package power (and probably also well below that when setting voltages for both accordingly). Intel needs massive efficiency gains for either core design to be competitive with AMD in notebooks and servers, and Zen 4 3D and Zen 5 are already on the horizon. Dangerous times for Intel...
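As an aside for anyone who wants to put numbers on the "total package power" being discussed here: on Linux, the Intel RAPL counters make it easy to ballpark average package power over a workload. A minimal sketch, assuming the intel_rapl powercap driver is loaded (reading energy_uj usually requires root on recent kernels); the burn() workload and the avg_package_watts() helper are made up for illustration:

```python
#!/usr/bin/env python3
"""Minimal sketch: estimate average CPU package power from Intel RAPL.
Assumes Linux with the intel_rapl powercap driver; reading energy_uj
typically requires root on recent kernels."""

import time

RAPL = "/sys/class/powercap/intel-rapl:0"  # package-0 power domain


def read_uj(name: str) -> int:
    """Read one RAPL counter (microjoules) from sysfs."""
    with open(f"{RAPL}/{name}") as f:
        return int(f.read())


def avg_package_watts(workload, *args) -> float:
    """Run workload(*args) and return the average package power in watts."""
    e0, t0 = read_uj("energy_uj"), time.perf_counter()
    workload(*args)
    e1, t1 = read_uj("energy_uj"), time.perf_counter()
    delta = e1 - e0
    if delta < 0:  # the energy counter wrapped around
        delta += read_uj("max_energy_range_uj")
    return (delta / 1e6) / (t1 - t0)


def burn(seconds: float) -> None:
    """Stand-in workload: a single-threaded busy loop."""
    end = time.perf_counter() + seconds
    x = 0
    while time.perf_counter() < end:
        x += 1


if __name__ == "__main__":
    print(f"average package power: {avg_package_watts(burn, 5.0):.1f} W")
```

Pinning the same workload to P-cores and then to E-cores (e.g. with taskset) and comparing the reported watts against throughput is roughly how efficiency claims like the one above can be sanity-checked at home.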
BlindBison:

I'd really rather not lose HT/SMT if at all possible. It seems like a very useful technology for getting the most out of each performance core you've got, no? If AMD can use SMT securely, surely Intel can figure out HT, right? Or are you suggesting they'll both eventually move to P- and E-core designs? I'm admittedly not very knowledgeable about this sort of thing, so correct me if I'm wrong, of course. Thanks,
Nowadays, with all the security mitigations, HT doesn't have much value anymore, especially in the server world. In a best-case scenario you'll get about 25% more performance with HT enabled, but on average you'll only see around a 10% improvement, and in some cases HT will actually lower performance. Due to all the mitigations, HT is effectively disabled in some situations. AMD's approach to SMT is drastically different, which is why it didn't suffer the same vulnerabilities, and its gains swing from slightly worse to nearly twice as fast depending on the workload. To my understanding (and maybe it's wrong/incomplete), Intel's method basically allows two threads to run at the same time on the same core so long as there are resources to spare, whereas AMD's method dedicates some transistors to each thread. AMD's method is wasteful and inefficient when there is no secondary thread to run, but it's otherwise the more efficient approach when it can be used. Intel's method is less secure because both threads share most of the core's resources. From what I gather, HT depends heavily on speculative branch prediction to work at its best, which is what the infamous Spectre vulnerability takes advantage of. I predict that whenever AMD goes the P/E-core route (and that does seem inevitable), they will drop SMT for the E-cores. In fact, AMD could probably just drop SMT and shrink the caches (no point in having such a big cache if your per-core thread count is halved), and that alone would probably allow a substantial die shrink.
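The rough 10-25% HT scaling mentioned above is something you can ballpark on your own machine: time two busy workers pinned to the two logical CPUs of one physical core, then to two separate cores, and compare. A minimal sketch, assuming Linux sysfs topology files and Python 3.9+; the spin() workload and the naive choice of a "next" core are assumptions for illustration:

```python
#!/usr/bin/env python3
"""Minimal sketch: compare two CPU-bound workers on SMT/HT siblings of
one physical core vs. on two separate cores. Assumes Linux sysfs
topology files and Python 3.9+; cpu0's sibling list and the "next"
core are guessed naively."""

import multiprocessing as mp
import os
import time


def spin(cpu: int, iterations: int = 20_000_000) -> None:
    """Pin this worker to one logical CPU and do pointless integer work."""
    os.sched_setaffinity(0, {cpu})
    acc = 0
    for i in range(iterations):
        acc += (i * i) % 7


def siblings_of(cpu: int) -> list[int]:
    """Logical CPUs sharing cpu's physical core (e.g. '0-1' or '0,8')."""
    path = f"/sys/devices/system/cpu/cpu{cpu}/topology/thread_siblings_list"
    with open(path) as f:
        text = f.read().strip()
    cpus: list[int] = []
    for part in text.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            cpus.extend(range(lo, hi + 1))
        else:
            cpus.append(int(part))
    return cpus


def time_pair(cpus: list[int]) -> float:
    """Run one spin() worker per listed CPU and return elapsed seconds."""
    start = time.perf_counter()
    workers = [mp.Process(target=spin, args=(c,)) for c in cpus]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start


if __name__ == "__main__":
    sib = siblings_of(0)
    if len(sib) < 2:
        raise SystemExit("cpu0 has no SMT/HT sibling (SMT off?)")
    other = siblings_of(max(sib) + 1)[0]  # assumed to be a different physical core
    print(f"same core {sib[:2]}: {time_pair(sib[:2]):.2f} s")
    print(f"two cores {[sib[0], other]}: {time_pair([sib[0], other]):.2f} s")
```

If the same-core run takes nearly twice as long as the two-core run, the second hardware thread is adding very little for that workload; if the times are close, HT is pulling near a full core's worth of extra work.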
aufkrawall2:

HT shouldn't be a security issue with any consumer workload. Generally, HT increases power efficiency, and P-cores still have higher efficiency than E-cores above 20 W total package power (and probably also well below that when setting voltages for both accordingly). Intel needs massive efficiency gains for either core design to be competitive with AMD in notebooks and servers, and Zen 4 3D and Zen 5 are already on the horizon. Dangerous times for Intel...
Arguably yes, but there are a few problems:
1. Desktop CPUs are just trickled-down server tech. Intel isn't gaining much with HT (or P-cores, for that matter) in the server market, so as they lower their investment there, we're going to see the same results on the desktop. I'd say we already have.
2. HT actually draws more power, but it's still more efficient than having two physical cores (granted, it's also a lot slower).
3. Intel's architecture is relatively efficient, but you'd never know it, because in order to stay competitive with AMD they crank the clock speeds to stupid levels and cripple the efficiency. AMD's chiplet design lets them more affordably keep throwing resources at the problem rather than clock speed, which is why we get things like V-Cache and oodles of performance cores.
4. You can't expect people not to run mission-critical tasks on a desktop CPU. People are stupid, and even though the average i5 owner isn't going to see a hacker exploit HT's vulnerabilities, you (or Intel in particular) can't take that chance.
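On the security point, the kernel itself reports whether it still considers SMT/HT a liability on a given machine. A small sketch, assuming a Linux box with the standard sysfs layout (the output formatting is just for readability):

```python
#!/usr/bin/env python3
"""Minimal sketch: report whether SMT/HT is active and what the kernel
says about known CPU vulnerabilities. Assumes Linux with the usual
/sys/devices/system/cpu layout."""

import glob
import os

CPU_SYSFS = "/sys/devices/system/cpu"

if __name__ == "__main__":
    try:
        with open(f"{CPU_SYSFS}/smt/active") as f:
            print("SMT/HT active:", f.read().strip() == "1")
    except FileNotFoundError:
        print("SMT/HT active: unknown (no smt/active node)")

    for path in sorted(glob.glob(f"{CPU_SYSFS}/vulnerabilities/*")):
        with open(path) as f:
            status = f.read().strip()
        mark = "  <-- mentions SMT" if "SMT" in status else ""
        print(f"{os.path.basename(path):28s} {status}{mark}")
```

On affected parts, entries such as l1tf or mds will typically report "SMT vulnerable" when hyperthreading is left on, which is exactly the scenario where the mitigations discussed earlier push people toward sidelining HT.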