Mixed processor cores for the desktop: Rumors about Intel Alder Lake-S processors with 8+8 cores

I would just turn the slow cores off permanently anyway; most people buying desktop hardware would.
schmidtbag:

Boost clocks are not a way of solving this. I'm talking about cores that are physically different. Generally speaking, the more complex your architecture is, the lower its maximum potential clock. If Intel's "little" cores lack a lot of advanced instructions (such as SSE4 or AVX), I'm sure you could squeeze at least another 1GHz out of them, while also drawing less power per clock.
I know what you are saying; what I'm saying is that boost clocks solved the efficiency issues in desktop parts, and this rumor is for a desktop part. This big/little approach is pointless unless you are very thermally constrained, and you aren't on desktop parts. You will get more performance with all big cores, downclocking them and then boosting them when needed, which is what is already out now. We can watch this one, but I'm about 99.9% sure this leak is not on point unless we are talking about a laptop part out 4-5 years from now on Intel's 5nm. I say that because on 5nm you can easily do 16 large cores on a desktop chip, so there really is no point to this concept outside of laptops, tablets, or phones.
Ricepudding:

I can see where you are coming from, but at least on the desktop market I think power is less of an issue; low-clock efficiency, last I checked, was very good, and Intel does Y models which are made for just that low-clock power. To me it's an odd market. Now, if these were coming out on laptops I could totally see the market; I just fail to see the appeal on the desktop front.
For desktops, yes, low-clock efficiency with x86 is great. That's why I was saying the "small cores" could be heavily overclocked, because pushing a group of cores beyond 5GHz isn't efficient. On laptops, I don't know the low-clock efficiency of Intel's 10nm, but on 14nm it's utter garbage compared to ARM. There are no numbers on Zen2 laptop parts, but I imagine those are pretty good.
I know the mobile market does it, and again, same with laptops, I can totally see the appeal of having power if needed. But desktops, I feel, don't have this power issue, and they have low-power models if needed.
The point I was trying to make is that a more efficient core is a core that can overclock higher. If the smaller cores are made architecturally simpler, that unlocks more potential. For tasks that don't care about big caches or advanced instructions (or both), higher clock speeds can make a bigger performance difference than more cores or bigger pipelines per core. Since the CPU will have "big cores" too, that still gives people a choice of what kind of workload to run quickly and efficiently. It's a win-win.
I get your point, but the heat comes from doing AVX workloads, and they can normally make a CPU downclock due to the heat. But beyond maybe making the CPU less complex to manufacture, I don't think removing it achieves much; other instructions run into just as many heat issues.
Do you get my point? Because the very thing you just explained is exactly why a smaller/simpler core would be beneficial.
The same way running games on SSEx instructions can cause high temps even with no overclock. I don't see them removing instructions and it being a massive jump, unless they remove most of the higher-level ones, in which case what would be the point of the CPU?
SSE4 is pretty intensive so that adds a lot of heat, but everything below that doesn't cause any major heat issues, and the vast majority of software doesn't need anything higher than SSE3.x. It will make a substantial difference.
You'll run it at what, 6GHz, but be able to run nothing on it? Where is the point in that?
Name 1 real-world application that you run that depends on AVX. I'm not saying something that can use it, but actually requires it to function. Name 3 that depend on SSE4. Practically everything will run on these cores. The purpose of these cores isn't meant for such workloads anyway. On ARM, the purpose of these cores is to handle background tasks with efficiency, and on x86 laptops they'll definitely serve the same purpose. On x86 desktops, they could be used to crunch lots of simple calculations very quickly.
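For illustration, here is a minimal sketch of the "can use it but doesn't require it" pattern: the program checks at runtime whether AVX exists and only takes the faster path if it does, so it still runs on a core without AVX. The detection built-ins are GCC/Clang-specific, and the work functions are made-up placeholders, not from any real application.

```cpp
#include <cstdio>

// Hypothetical work functions; the names are placeholders for illustration only.
static void process_with_avx() { std::puts("taking the optional AVX path"); }
static void process_baseline() { std::puts("taking the baseline x86-64 path"); }

int main() {
    __builtin_cpu_init();                    // GCC/Clang built-in: populate CPU feature info
    if (__builtin_cpu_supports("avx"))
        process_with_avx();                  // faster, but strictly optional
    else
        process_baseline();                  // the program still works without AVX
    return 0;
}
```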
Now I can see them being cheaper and less power hungry, but unless all you want to do is run Chrome, which seems to work on SSE1/2 as far as I could tell from a little googling, I fail to see the joy in it, or more to the point, what the need for the high GHz would be. Most likely there is a reason they have not gone this route, and we can see that mobile processors running Windows do have issues; a good example of this is the Surface Pro X, which has to run older versions of Adobe software to work.
The cores are claimed to be smaller. If they're literally smaller, something was removed. As I said before: making a one-size-fits-all core is a bad idea. Some tasks are best not multi-threaded. Some tasks heavily depend on latency and frequency. Some tasks are woefully inefficient on a full-blown x86 core. There are many practical benefits to having simplified cores, even if the clock speeds aren't pushed to such extremes. There's not supposed to be "joy" in practicality.
JamesSneed:

I know what you are saying; what I'm saying is that boost clocks solved the efficiency issues in desktop parts, and this rumor is for a desktop part. This big/little approach is pointless unless you are very thermally constrained, and you aren't on desktop parts. You will get more performance with all big cores, downclocking them and then boosting them when needed, which is what is already out now. We can watch this one, but I'm about 99.9% sure this leak is not on point unless we are talking about a laptop part out 4-5 years from now on Intel's 5nm. I say that because on 5nm you can easily do 16 large cores on a desktop chip, so there really is no point to this concept outside of laptops, tablets, or phones.
Well that's exactly it - on desktops, you're not really thermally constrained, so these smaller cores can be pushed to the limits, and those speed limits will top off at a much higher frequency than the "big cores". For single-threaded tasks, this could be very useful. On laptops, there is a thermal constraint and x86 does a crappy job at low-end performance-per-watt. These small cores are a benefit to high-end and low-end systems.
schmidtbag:

Name 1 real-world application that you run that depends on AVX. I'm not saying something that can use it, but actually requires it to function. Name 3 that depend on SSE4. Practically everything will run on these cores. The purpose of these cores isn't meant for such workloads anyway. On ARM, the purpose of these cores is to handle background tasks with efficiency, and on x86 laptops they'll definitely serve the same purpose. On x86 desktops, they could be used to crunch lots of simple calculations very quickly.
Aren't some games and DX12 games starting to use the AVX instructions? I can see The Crew 2 does, and Assassin's Creed, so there are a few games, and maybe more; I only did a little looking into this. Like I said, I can see where you are coming from, but the issue I have is that this is coming out for desktop, where I just see it being a moot point. Now if they were for laptops or the new Surface Pro, fab. I can see having different cores and using ARM and x86 together could allow for some amazing things; I just don't see myself needing an ARM-style processor on a desktop. Nor do I trust Windows to see which core is which, to be honest haha. But I guess we shall see what they can do if this comes out.
AVX instructions are not directly related to DirectX graphics, in any version. You can compile a DX12 game and just use the base instruction set of x86-64 (which includes up to SSE2). SSE3, SSSE3, SSE4a, SSE4.1/4.2, AVX, AVX2, or some of the AVX-512 subsets are used only on the CPU side for mathematical computations.
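To make the "base set of x64" point concrete, here is a small sketch: SSE2 intrinsics are part of the x86-64 baseline, so code like this runs on any 64-bit x86 CPU without extra compiler flags or runtime checks, unlike AVX. The values are arbitrary illustration numbers.

```cpp
#include <emmintrin.h>   // SSE2 intrinsics; SSE2 is part of the x86-64 baseline
#include <cstdio>

int main() {
    // Add two pairs of doubles in a single instruction. No feature detection is
    // needed because every x86-64 CPU is guaranteed to support SSE2.
    __m128d a = _mm_set_pd(1.5, 2.5);   // lanes: {2.5, 1.5}
    __m128d b = _mm_set_pd(3.0, 4.0);   // lanes: {4.0, 3.0}
    __m128d c = _mm_add_pd(a, b);       // lanes: {6.5, 4.5}

    double out[2];
    _mm_storeu_pd(out, c);
    std::printf("%.1f %.1f\n", out[0], out[1]);   // prints "6.5 4.5"
    return 0;
}
```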
schmidtbag:

Name 1 real-world application that you run that depends on AVX. I'm not saying something that can use it, but actually requires it to function. Name 3 that depend on SSE4. Practically everything will run on these cores. The purpose of these cores isn't meant for such workloads anyway. On ARM, the purpose of these cores is to handle background tasks with efficiency, and on x86 laptops they'll definitely serve the same purpose. On x86 desktops, they could be used to crunch lots of simple calculations very quickly.
There's a fair bunch of modern games that do require SSSE3/SSE4 nowadays: Apex Legends, Red Dead Redemption 2, Control, Assassin's Creed Origins and Odyssey (Origins actually also still requires AVX; at least that was patched away for Odyssey after launch). The SSE4 requirement is an annoying limitation, since even the old Phenom II X6 typically outperforms the PS4 versions significantly in other games.
Yxskaft:

There's a fair bunch of modern games that do require SSSE3/SSE4 nowadays: Apex Legends, Red Dead Redemption 2, Control, Assassin's Creed Origins and Odyssey (Origins actually also still requires AVX; at least that was patched away for Odyssey after launch). The SSE4 requirement is an annoying limitation, since even the old Phenom II X6 typically outperforms the PS4 versions significantly in other games.
Yes, but those games don't depend on a lot of cores either. If you've got 8 "big cores" with HT, that's more than enough to handle pretty much any game for the next several years. Besides, not every thread in such games would require all the instructions. The small cores could be used to handle simpler tasks.
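As a rough illustration of "simpler tasks on the small cores", here is a sketch that pins a background thread to one group of logical CPUs and the main workload to another, using Linux's pthread affinity call. The 0-7 / 8-15 split is a made-up assumption about how big and small cores might be numbered; in practice the OS scheduler would normally make this decision itself.

```cpp
// Linux-only sketch; build with: g++ -pthread affinity.cpp
#include <pthread.h>
#include <sched.h>
#include <thread>
#include <cstdio>

// Pin the calling thread to the logical CPUs [first_cpu, last_cpu].
static void pin_current_thread_to(int first_cpu, int last_cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int cpu = first_cpu; cpu <= last_cpu; ++cpu)
        CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

int main() {
    std::thread background([] {
        pin_current_thread_to(8, 15);   // hypothetical "small core" CPU numbers
        std::puts("housekeeping work on the small cores");
    });
    std::thread game_logic([] {
        pin_current_thread_to(0, 7);    // hypothetical "big core" CPU numbers
        std::puts("latency-sensitive work on the big cores");
    });
    background.join();
    game_logic.join();
    return 0;
}
```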
Ricepudding:

Aren't some games and DX12 games starting to use the AVX instructions? I can see The Crew 2 does, and Assassin's Creed, so there are a few games, and maybe more; I only did a little looking into this.
Many things use features like AVX and SSE4. But just because something uses those doesn't mean it depends on them. Many such things can still work without them, just slower. You were saying how things won't run on the smaller cores, but there are very few applications out there that require such instructions. And the whole point of having a CPU with two different sets of cores is that one set can use such features where needed, while the other set handles, at a faster speed, the tasks that don't need them.
Like I said, I can see where you are coming from, but the issue I have is that this is coming out for desktop, where I just see it being a moot point. Now if they were for laptops or the new Surface Pro, fab. I can see having different cores and using ARM and x86 together could allow for some amazing things; I just don't see myself needing an ARM-style processor on a desktop.
I still don't think you understand what I'm saying if you don't see the benefit of a set of cores with an extremely high clock speed. Think of it like this: you can have a 500kW engine in a truck, or you can have an engine with equal power in a sports car. You aren't going to win races with the truck, and you aren't going to tow a boat with the sports car, but they both have the same power. In theory, they have equal capabilities but under different workloads. The "small" CPU cores are basically the sports car, while the "big" cores are the truck. On a desktop, they might use roughly the same wattage, but they have woefully different capabilities for different workloads.
Nor do I trust Windows to see which core is which, to be honest haha. But I guess we shall see what they can do if this comes out.
That I very much agree with.
schmidtbag:

I still don't think you understand what I'm saying if you don't see the benefit of a set of cores with an extremely high clock speed. Think of it like this: you can have a 500kW engine in a truck, or you can have an engine with equal power in a sports car. You aren't going to win races with the truck, and you aren't going to tow a boat with the sports car, but they both have the same power. In theory, they have equal capabilities but under different workloads. The "small" CPU cores are basically the sports car, while the "big" cores are the truck. On a desktop, they might use roughly the same wattage, but they have woefully different capabilities for different workloads.
See, I understand the use cases for different things. But I still feel there would be massive limitations when it comes to thermals. I assume there is a big reason why Snapdragon, even when placed inside a laptop, still uses rather low-clocked cores when they could go a lot higher due to fewer battery concerns. Until it is done, I am just very cynical about it. It sounds nice on paper, but we have no real-world example to look at. A lot of these chips can go higher; LN2 does show that off, but it's very short bursts, and you need LN2 to achieve it due to heat. Would removing some higher-level instructions change that? Maybe, but we need to see that in the real world to prove it. Then again, running 7GHz on low-level instructions might not even have any real-world uses; most development is being done on higher instructions, including AI learning. As I said, I do understand the pros and cons of this, but how much of a difference it would make in the real world is questionable. And just throwing GHz at something sounds like what AMD did with the FX series, and look how that went.
Someone here is missing the point that AVX2 and SSE3/4.2 are used to improve throughput and IPC, which means fewer clock cycles are wasted, which means less power is wasted. Of course, using a newer extension doesn't guarantee better performance and less wasted power; this is why new extensions and calls are usually profiled against older versions and against different hardware (e.g. DirectXMath doesn't use AVX2 everywhere just because you can).
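As a rough sketch of the throughput argument: an AVX loop works on eight floats per instruction, so the same sum retires in far fewer instructions than the scalar version. This is a simplified illustration only; it ignores memory bandwidth and frequency behavior under AVX load, and it needs -mavx on GCC/Clang.

```cpp
#include <immintrin.h>   // AVX intrinsics (compile with -mavx on GCC/Clang)
#include <cstddef>
#include <cstdio>

// Scalar baseline: one float added per loop iteration.
float sum_scalar(const float* data, std::size_t n) {
    float total = 0.0f;
    for (std::size_t i = 0; i < n; ++i) total += data[i];
    return total;
}

// AVX version: eight floats added per vector instruction.
float sum_avx(const float* data, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(data + i));

    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    float total = lanes[0] + lanes[1] + lanes[2] + lanes[3]
                + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    for (; i < n; ++i) total += data[i];   // handle the leftover elements
    return total;
}

int main() {
    float data[20];
    for (int i = 0; i < 20; ++i) data[i] = 1.0f;
    std::printf("scalar=%.1f avx=%.1f\n", sum_scalar(data, 20), sum_avx(data, 20));
    return 0;
}
```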
Alessio1989:

Someone here is missing the point that AVX2 and SSE3/4.2 are used to improve throughput and IPC, which means fewer clock cycles are wasted, which means less power is wasted. Of course, using a newer extension doesn't guarantee better performance and less wasted power; this is why new extensions and calls are usually profiled against older versions and against different hardware (e.g. DirectXMath doesn't use AVX2 everywhere just because you can).
I think you're missing the point that not every application uses such instructions... It doesn't matter how many extra instructions you throw into a CPU if they're not used in applications. Only in the past 3 or so years have developers started taking such instructions more seriously, but there's still a VERY long way to go.
schmidtbag:

I think you're missing the point that not every application uses such instructions... It doesn't matter how many extra instructions you throw into a CPU if they're not used in applications. Only in the past 3 or so years have developers started taking such instructions more seriously, but there's still a VERY long way to go.
You miss the point that if the ALU is not used, then there is no power wasted. Making two kinds of CPU core just to cut SIMD instructions from one of them is a waste of silicon, space, and idle power.
Alessio1989:

You miss the point that if the ALU is not used, then there is no power wasted. Making two kinds of CPU core just to cut SIMD instructions from one of them is a waste of silicon, space, and idle power.
Actually, there is wasted power: x86 cores do use power when idle. This is one of the reasons why architectures like ARM and MIPS are used in phones: they actually shut down idle cores, which saves enough power that battery life is reasonable. Regardless, you are still missing the following points:
* The more complex something is, the greater the chance of failure. If the cores have fewer transistors, there is less that can go wrong and therefore more potential for them to be pushed to greater extremes. Those extremes work at both ends: you can have a CPU core sip power at a much lower wattage than it normally would, or you can overclock it beyond what a bigger, more complex core could do.
* Although I've mostly been talking about excess instruction sets, I'm also referring to things like caches. Caches take up a LOT of space on a CPU, and their size has an immense impact on performance, depending on the workload. The bigger the cache, the slower it goes. But if you only run simple instructions, you don't need a big cache. So you can dramatically increase your IPC just by running simple instructions on a smaller cache and a smaller pipeline.
* Silicon wafers are circular, but CPU dies are rectangular. The bigger the die, the fewer of them you can fit per wafer (see the sketch after this list). AMD's profit margins with Zen2 have been so high not just because you can fit more dies on a 7nm process, but also because the area per chiplet is much smaller. If you do a chiplet design on 7nm with simplified cores, you can fit even more workable product on a single wafer. This is a big deal, especially when you're Intel and have a product shortage.
* If the architecture is purpose-built for simple tasks, it can be fine-tuned to run more efficiently.
* If the rumors are true and those cores really are physically smaller, then clearly what I'm saying has some truth to it.
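To put a number on the "fewer dies per wafer" point, here is a back-of-the-envelope sketch using the common dies-per-wafer approximation; the die areas are made-up illustration figures, not real Alder Lake or Zen2 numbers.

```cpp
#include <cmath>
#include <cstdio>

// Rough dies-per-wafer estimate: usable wafer area divided by die area, minus a
// correction term for the partial dies lost around the circular wafer edge.
static double dies_per_wafer(double wafer_diameter_mm, double die_area_mm2) {
    const double pi = 3.14159265358979323846;
    double radius = wafer_diameter_mm / 2.0;
    return pi * radius * radius / die_area_mm2
         - pi * wafer_diameter_mm / std::sqrt(2.0 * die_area_mm2);
}

int main() {
    const double wafer = 300.0;   // standard 300 mm wafer
    std::printf("~%.0f dies at 150 mm^2 each\n", dies_per_wafer(wafer, 150.0));  // ~417
    std::printf("~%.0f dies at  90 mm^2 each\n", dies_per_wafer(wafer, 90.0));   // ~715
    return 0;
}
```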
Interesting, I think it is a good idea, and the small cores would be x86, I am almost sure of it... I do not think Intel would like to help ARM adoption in the desktop and laptop space 😛