Intel Lakefield CPU Combines Fast and Economical Cores


https://forums.guru3d.com/data/avatars/m/248/248994.jpg
But one problem remains: Can it run Crysis?
https://forums.guru3d.com/data/avatars/m/229/229509.jpg
They're taking hints from ARM. I guess it's all down to how different architectures scale with respect to performance per unit power at each clock speed.
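To make that scaling concrete, here is a toy back-of-the-envelope model (every number in it is invented purely for illustration, not measured from any real core): dynamic power goes roughly as C·V²·f, and higher clocks generally require higher voltage, so performance per watt drops quickly toward the top of the frequency range.

[code]
/* Toy model of performance per watt vs. clock speed.
 * All capacitance/voltage/frequency values are invented for illustration
 * only -- they are not measurements of any real core. */
#include <stdio.h>

int main(void) {
    const double c_eff = 1e-9;  /* assumed effective switching capacitance */
    struct { const char *name; double volts, ghz; } points[] = {
        { "low-clock operating point",  0.70, 1.0 },
        { "high-clock operating point", 1.10, 4.0 },
    };

    for (int i = 0; i < 2; i++) {
        double f = points[i].ghz * 1e9;
        /* Dynamic power is roughly C * V^2 * f. */
        double power = c_eff * points[i].volts * points[i].volts * f;
        /* Crude proxy: performance scales linearly with frequency. */
        double perf_per_watt = f / power;
        printf("%s: %.2f W, perf/W (arbitrary units) %.2e\n",
               points[i].name, power, perf_per_watt);
    }
    return 0;
}
[/code]

With the crude performance-scales-with-frequency assumption, perf/W collapses to 1/(C·V²), which is why the efficient operating points sit low on the voltage/frequency curve and why small, low-clocked cores make sense for background work.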
https://forums.guru3d.com/data/avatars/m/239/239175.jpg
Wow. Can't wait. /s Intel is really worried about ARM CPUs. Some food for thought in this analysis: [youtube=IfHG7bj-CEI]
https://forums.guru3d.com/data/avatars/m/274/274977.jpg
Kaarme:

But one problem remains: Can it run Crysis?
I mean, that is the only real question here!
https://forums.guru3d.com/data/avatars/m/115/115462.jpg
So wait, now they are done with lakes and started using fields in their naming and the very first will be called LAKEFIELD? 😱
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
What I don't understand is why Intel (or AMD, for that matter) didn't do this big.LITTLE-like architecture years ago. Every task is different. Some work best with long single-threaded pipelines. Some can easily make do with short pipelines at low clocks. Some don't need any advanced instruction sets at all. Others work best multi-threaded. Having a single CPU with a variety of cores that excel at different workloads would really maximize efficiency. Such a CPU wouldn't be of much interest to those with more constant workloads (like workstations or servers), but it'd be great for pretty much everything else. Both AMD and Intel (but mostly AMD) are leading us to believe that what we need is more cores, but what we really need are specialized cores. Despite what a lot of people think, many CPU-bound tasks are never going to become multi-threaded, nor should they. That's not to say having more cores is a bad thing, but rather that it's not the only thing we should be focusing on.
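As a rough illustration of that last point, Amdahl's law shows how quickly extra cores stop helping once part of a task stays single-threaded (the 90% parallel fraction below is an assumed example, not a measurement of any real workload):

[code]
/* Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the
 * parallelizable fraction of the work. p = 0.90 is an assumed example. */
#include <stdio.h>

int main(void) {
    const double p = 0.90;                       /* assumed parallel fraction */
    const int core_counts[] = { 1, 2, 4, 8, 16, 32 };

    for (size_t i = 0; i < sizeof core_counts / sizeof core_counts[0]; i++) {
        int n = core_counts[i];
        double speedup = 1.0 / ((1.0 - p) + p / n);
        printf("%2d cores -> %5.2fx speedup\n", n, speedup);
    }
    return 0;
}
[/code]

Even with 90% of the work parallelizable, 32 cores only buys roughly a 7.8x speedup, and the curve flattens toward 10x no matter how many cores are added.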
BLEH!:

They're taking hints from ARM. I guess it's all down to how different architectures scale with respect to performance per unit power at each clock speed.
Kinda makes me wonder why they didn't take these hints the first time around. ARM is successful in their market for a reason, and Intel just completely ignored all of those reasons during their first attempt at mobile processors. I'm skeptical they actually learned and understood what they did wrong the first time.
Solfaur:

So wait, now they are done with lakes and started using fields in their naming and the very first will be called LAKEFIELD? 😱
Are they done with lakes? This is a different product lineup. Either way, funny observation.
https://forums.guru3d.com/data/avatars/m/229/229509.jpg
schmidtbag:

Kinda makes me wonder why they didn't take these hints the first time around. ARM is successful in their market for a reason, and Intel just completely ignored all of those reasons during their first attempt at mobile processors. I'm skeptical they actually learned and understood what they did wrong the first time.
Are the Atom cores any good, though?
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
BLEH!:

Are the Atom cores any good, though?
From Intel's first attempt? No, not at all. At least, they weren't any good at what they were supposed to be. As for Lakefield, there's not enough info for anyone to make a judgment.
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
schmidtbag:

Having a single CPU with a variety of cores that excel at different workloads would really maximize efficiency.
I dunno, I think it would most likely use up a lot of precious die space and see very little use in desktop systems. People already complain about the amount of die space that Intel's iGPUs use, saying it could be used for more cores instead. The big.LITTLE design makes sense for smartphones and tablets, which need to maximize battery life. Not so much for desktops (laptops are a different story, but laptops need to cater to desktop tasks as well).
https://forums.guru3d.com/data/avatars/m/175/175902.jpg
BLEH!:

Are the Atom cores any good, though?
If it's like the 8-core one we have at work: yes, it's not as slow as you'd expect ( 😀 lol ) and it has very friendly energy efficiency. This should be nice, as most users don't use all the cores of their computer (for example, my wife uses 2 threads at most on her 4C/8T... what a waste of money lol)
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
schmidtbag:

What I don't understand is why Intel (or AMD, for that matter) didn't do this big.LITTLE-like architecture years ago. Every task is different. Some work best with long single-threaded pipelines. Some can easily make do with short pipelines at low clocks. Some don't need any advanced instruction sets at all. Others work best multi-threaded. Having a single CPU with a variety of cores that excel at different workloads would really maximize efficiency. Such a CPU wouldn't be of much interest to those with more constant workloads (like workstations or servers), but it'd be great for pretty much everything else. Both AMD and Intel (but mostly AMD) are leading us to believe that what we need is more cores, but what we really need are specialized cores. Despite what a lot of people think, many CPU-bound tasks are never going to become multi-threaded, nor should they. That's not to say having more cores is a bad thing, but rather that it's not the only thing we should be focusing on.
Intel probably thought they could just shove their modern architecture into a phone and abuse their process advantage to compete with ARM. It's way cheaper and they came pretty close but ARM clearly won. Also remember that Apple has had a massive performance/power advantage on the CPU for a long time now and they only recently moved to big.little with the A10 in 2016. So I think a lot of engineers were questioning if it was even necessary at the time.
D3M1G0D:

I dunno, I think it would most likely use up a lot of precious die space and see very little use in desktop systems. People already complain about the amount of die space that Intel's iGPUs use, saying it could be used for more cores instead. The big.LITTLE design makes sense for smartphones and tablets, which need to maximize battery life. Not so much for desktops (laptops are a different story, but laptops need to cater to desktop tasks as well).
I think he's speaking in reference to Lakefield, which is a mobile processor.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
D3M1G0D:

I dunno, I think it would most likely use up a lot of precious die space and see very little use in desktop systems. People already complain about the amount of die space that Intel's iGPUs use, saying it could be used for more cores instead. The big.LITTLE design makes sense for smartphones and tablets, which need to maximize battery life. Not so much for desktops (laptops are a different story, but laptops need to cater to desktop tasks as well).
The difference here is the iGPU doesn't do anything for most desktop users. It's literally wasted space and wasted money. I think you somewhat misunderstand what I'm saying though (to be fair, I didn't explain it very well): In the hypothetical CPU I'm thinking of, there would be at least two different kinds of cores with roughly the same FLOPS but structured very differently from each other, so you can maximize performance for a specific task. So for example, there could be a set of cores with a short pipeline, limited instructions, no SMT, and each core able to adjust its clocks independently. I figure such cores would be able to clock pretty high, since they're not very complex, but they would scale down to very low (sub-GHz) speeds efficiently too. These cores would be ideal for background tasks, scripting languages, some games, and basic programs that don't constantly churn data. Then there would be another set of cores with complex instructions, SMT, long pipelines, a narrower frequency range, and maybe a difference in how cache works. These cores are pretty much for foreground tasks that handle a lot of advanced calculations, like encoding/decoding, compiling, rendering, etc. Note that both of these functions are traditionally handled by CPUs, and should stay that way, so I'm not suggesting another separate processor in the way that a GPU works.
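For what it's worth, here is a minimal sketch of steering work onto one class of cores using plain Linux thread affinity. A real scheduler on a hybrid chip would make this decision automatically; the core IDs 0-3 used as "efficiency" cores below are an assumption for illustration, not actual Lakefield topology (on a real system you would read it from /sys/devices/system/cpu).

[code]
/* Sketch: confine a background thread to an assumed set of small cores.
 * Build with: gcc -pthread affinity_sketch.c */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *background_work(void *arg) {
    (void)arg;
    /* ...low-priority housekeeping would run here... */
    return NULL;
}

int main(void) {
    cpu_set_t efficiency_cores;
    CPU_ZERO(&efficiency_cores);
    for (int cpu = 0; cpu <= 3; cpu++)   /* assumed "little" core IDs */
        CPU_SET(cpu, &efficiency_cores);

    pthread_t tid;
    pthread_create(&tid, NULL, background_work, NULL);

    /* Keep the background thread on the small cores so the big cores
     * stay free for latency-sensitive foreground work. */
    if (pthread_setaffinity_np(tid, sizeof(efficiency_cores), &efficiency_cores) != 0)
        fprintf(stderr, "could not set affinity\n");

    pthread_join(tid, NULL);
    return 0;
}
[/code]

The affinity call just makes the core-class idea concrete; in practice the OS would weigh load, priority, and core capabilities per thread.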
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Denial:

Intel probably thought they could just shove their modern architecture into a phone and abuse their process advantage to compete with ARM. It's way cheaper and they came pretty close but ARM clearly won. Also remember that Apple has had a massive performance/power advantage on the CPU for a long time now and they only recently moved to big.little with the A10 in 2016. So I think a lot of engineers were questioning if it was even necessary at the time.
Yup pretty much what I was thinking. Haha nowadays, Mac (not sure about iOS) is horrendously slow compared to Windows and Linux. I'm not entirely sure if performance-per-watt is better or not, but I assume it's not when the exact same task runs much slower on Mac than it does on other OSes.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
schmidtbag:

Yup pretty much what I was thinking. Haha nowadays, Mac (not sure about iOS) is horrendously slow compared to Windows and Linux. I'm not entirely sure if performance-per-watt is better or not, but I assume it's not when the exact same task runs much slower on Mac than it does on other OSes.
Yeah, I'm speaking strictly of mobile, where Apple's mobile SoCs are significantly faster than the ARM competition: https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/6 Even before they moved to big.LITTLE, though, they almost always had a 50% performance advantage over competing ARM SoCs with half the core count. I also recall Ryan Shrout and the others from PC Perspective (who ironically all work at Intel now) talking with Qualcomm engineers about how difficult big.LITTLE was to implement on the scheduler/software side and how it took them a few generations to even see an advantage from utilizing it. So I think someone like Intel, looking from the outside, was saying "hey, we need to compete with mobile, Apple doesn't need big.LITTLE to do it, and we have a process advantage, it should be no problem!" Then they failed massively and ended up pulling out completely until now.
schmidtbag:

The difference here is the iGPU doesn't do anything for most desktop users. It's literally wasted space and wasted money. I think you somewhat misunderstand what I'm saying though (to be fair, I didn't explain it very well): In the hypothetical CPU I'm thinking of, there would be at least two different kinds of cores with roughly the same FLOPS but structured very differently from each other, so you can maximize performance for a specific task. So for example, there could be a set of cores with a short pipeline, limited instructions, no SMT, and each core able to adjust its clocks independently. I figure such cores would be able to clock pretty high, since they're not very complex, but they would scale down to very low (sub-GHz) speeds efficiently too. These cores would be ideal for background tasks, scripting languages, some games, and basic programs that don't constantly churn data. Then there would be another set of cores with complex instructions, SMT, long pipelines, a narrower frequency range, and maybe a difference in how cache works. These cores are pretty much for foreground tasks that handle a lot of advanced calculations, like encoding/decoding, compiling, rendering, etc. Note that both of these functions are traditionally handled by CPUs, and should stay that way, so I'm not suggesting another separate processor in the way that a GPU works.
I mean, they already kind of do this on desktop - it's basically what AVX is... but I agree that going forward we'll see more specialized core designs, as simply scaling to 16-32 threads isn't doable in a lot of workloads.
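As a concrete illustration of that point, here is the same loop written as plain scalar code and with AVX intrinsics; a minimal sketch that assumes a CPU with AVX and a build flag like gcc -mavx. The 8-wide float add is the kind of specialized hardware already sitting inside a general-purpose core.

[code]
/* Scalar loop vs. the same loop using AVX intrinsics. */
#include <immintrin.h>
#include <stdio.h>

#define N 16   /* multiple of 8 to keep the example simple */

static void add_scalar(const float *a, const float *b, float *out) {
    for (int i = 0; i < N; i++)
        out[i] = a[i] + b[i];
}

static void add_avx(const float *a, const float *b, float *out) {
    for (int i = 0; i < N; i += 8) {             /* 8 floats per 256-bit op */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
}

int main(void) {
    float a[N], b[N], s[N], v[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    add_scalar(a, b, s);
    add_avx(a, b, v);
    printf("scalar[5]=%.1f  avx[5]=%.1f\n", s[5], v[5]);   /* both print 15.0 */
    return 0;
}
[/code]

Eight float adds per instruction is the specialized-silicon trade-off in miniature; dedicating whole cores to different workload shapes is the same idea taken further.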
https://forums.guru3d.com/data/avatars/m/229/229509.jpg
schmidtbag:

From Intel's first attempt? No, not at all. At least, they weren't any good at what they were supposed to be. As for Lakefield, there's not enough info for anyone to make a judgment.
The question is whether the user is going to be sitting in front of it yelling "hurry up, you stupid machine!" :P
rl66:

If it's like the 8-core one we have at work: yes, it's not as slow as you'd expect ( 😀 lol ) and it has very friendly energy efficiency. This should be nice, as most users don't use all the cores of their computer (for example, my wife uses 2 threads at most on her 4C/8T... what a waste of money lol)
Energy efficient doesn't mean fast :P I went from an X58/980X @ 4.13 GHz, with a motherboard alone that drew 100-odd watts, to my new X99 system, where the power dropped by a factor of 3! I probably don't use it to its full potential, but some headroom is good!
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
BLEH!:

The question is whether the user is going to be sitting in front of it yelling "hurry up, you stupid machine!" 😛
Well, that, and/or "this is burning my leg!" as well as "seriously, 40% battery life!? I just charged this an hour ago!"
https://forums.guru3d.com/data/avatars/m/103/103120.jpg
schmidtbag:

From Intel's first attempt? No, not at all. At least, they weren't any good at what they were supposed to be. As for Lakefield, there's not enough info for anyone to make a judgment.
There were two Atom attempts. The first was the notebook CPUs from over a decade ago. That attempt wasn't good at all. The second attempt was the smartphone/tablet Atom SoCs. They were pretty good, better than the ARM competitors at the time, but they failed in marketing, so the model line was suspended. Morganfield and Willow Trail were cancelled. Now they are resuming SoCs with the P1275 process.
https://forums.guru3d.com/data/avatars/m/103/103120.jpg
schmidtbag:

The difference here is the iGPU doesn't do anything for most desktop users.
Don't mix up most desktop users with gaming users. Most desktop users do benefit a lot from a light integrated GPU. As for gaming users, once Intel releases a gaming dGPU, they could do DX12 explicit multi-adapter support. It depends on game support, but DX12 will gain popularity in time, so you'll get your several extra fps.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
coth:

The first was the notebook CPUs from over a decade ago. That attempt wasn't good at all. The second attempt was the smartphone/tablet Atom SoCs. They were pretty good, better than the ARM competitors at the time, but they failed in marketing, so the model line was suspended. Morganfield and Willow Trail were cancelled. Now they are resuming SoCs with the P1275 process.
Huh? The second attempt was atrocious. They were horribly power hungry (even when idle) and their performance-per-watt suffered when trying to make battery life better. Intel can shove themselves anywhere they want and make decent sales so long as the product is "decent" or better. The reason this platform failed is that it was an inferior competitor to what ARM had to offer.
coth:

Don't mix up most desktop users with gaming users. Most desktop users do benefit a lot from a light integrated GPU. As for gaming users, once Intel releases a gaming dGPU, they could do DX12 explicit multi-adapter support. It depends on game support, but DX12 will gain popularity in time, so you'll get your several extra fps.
Fair enough, but that was a little beside the point anyway. I was more addressing D3M1G0D's point about people who feel the iGPU is wasted die space.
https://forums.guru3d.com/data/avatars/m/229/229509.jpg
schmidtbag:

Well, that, and/or "this is burning my leg!" as well as "seriously, 40% battery life!? I just charged this an hour ago!"
Never were truer words said!