Intel Lakefield CPU Combines Fast and Economical Cores
Intel has been talking about Lakefield, a processor that stacks different CPU cores inside a single package, making it a hybrid design based on Foveros technology. The package measures just 12 by 12 mm and combines one main core and four Atom cores with a chipset and LPDDR4X memory.
Intel presented the Lakefield chip at CES 2019. It is intended for convertibles and uses several vertically stacked dies in an effort to achieve high performance with high efficiency in the smallest possible space. At idle, Lakefield would use only 2 milliwatts.
The design consists of three parts and is strongly reminiscent of smartphone SoCs, with one big difference: instead of putting everything on a single die, Intel pairs two dies, all managed by the so-called Foveros packaging technology, which is essentially 3D stacking to connect multiple chiplets. The base is an interposer produced on the 22FFL process, which contains I/O functions such as SATA and USB. On top of that, connected by through-silicon vias (TSVs), sits a 10 nm compute die together with the RAM controller and its 64-bit interface; at the very top, the LPDDR4X main memory is mounted as a classic PoP (package on package).

Intel previously strictly differentiated between Core and Atom processors; the compute die combines these two types of x86 CPU cores, somewhat like ARM's big.LITTLE methodology. One Sunny Cove core (Sunny Cove being the architecture of the upcoming Ice Lake chips) is expected alongside four Tremont cores (next-generation Atom cores). The five cores will share 4 MB of L3 cache and are tied to a Gen11 GT2 integrated graphics unit with 64 execution units.
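As a concrete aside: hybrid designs like this are visible to software. The minimal C sketch below is an illustration, not Intel-supplied code; it reads the CPUID hybrid flag (leaf 7, EDX bit 15) and the hybrid information leaf 0x1A to report whether the core the code happens to be running on is an Atom-class or Core-class core.

```c
/* Minimal sketch: detecting a hybrid (big.LITTLE-style) Intel CPU from
 * software. Uses the CPUID hybrid flag (leaf 7, EDX bit 15) and leaf
 * 0x1A, which reports the type of the core the code currently runs on.
 * Build with GCC or Clang on x86. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* CPUID leaf 7, sub-leaf 0: EDX bit 15 is the "hybrid part" flag. */
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) ||
        !(edx & (1u << 15))) {
        printf("Not a hybrid CPU.\n");
        return 0;
    }

    /* Leaf 0x1A: EAX bits 31..24 encode the current core's type:
     * 0x20 = Atom-class (small) core, 0x40 = Core-class (big) core.
     * Note this only describes the core this thread happens to be
     * scheduled on right now. */
    if (__get_cpuid_count(0x1A, 0, &eax, &ebx, &ecx, &edx)) {
        unsigned int core_type = eax >> 24;
        printf("Running on a %s core.\n",
               core_type == 0x40 ? "big (Core-class)" :
               core_type == 0x20 ? "small (Atom-class)" : "unknown");
    }
    return 0;
}
```

An OS scheduler consumes the same information when deciding where to place background versus foreground threads.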
Senior Member
Posts: 7431
Joined: 2012-11-10
The difference here is the iGPU doesn't do anything for most desktop users. It's literally wasted space and wasted money. I think you somewhat misunderstand what I'm saying though (to be fair, I didn't explain it very well):
In the hypothetical CPU I'm thinking of, there would be at least two different kinds of cores with roughly the same FLOPS but structured very differently from each other, so you can maximize performance for a specific task. So for example, there could be a set of cores with a short pipeline, limited instructions, no SMT, and each core able to adjust its clocks independently. I figure such cores would be able to clock pretty high, since they're not very complex, but they would scale down to very low (sub-GHz) speeds efficiently too. These cores would be ideal for background tasks, scripted languages, some games, and basic programs that don't constantly churn data.
Then, there would be another set of cores with complex instructions, SMT, long pipelines, a narrower frequency range, and maybe a difference in how cache works. These cores are pretty much for foreground tasks that handle a lot of advanced calculations, like encoding/decoding, compiling, rendering, etc.
Note that both of these functions are traditionally handled by CPUs and should stay that way, so I'm not suggesting another separate processor in the way that a GPU functions.
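(To make that concrete, here is a rough sketch of how software could steer work onto such cores today with Linux CPU affinity; the core IDs and the small/big split below are made-up numbers for illustration, not taken from any real part.)

```c
/* Hypothetical sketch: steering work onto different core types with
 * CPU affinity on Linux. Assumes cores 0-3 are "small" cores and
 * core 4 is a "big" core; a real program would query the topology
 * first. In practice each worker thread would pin itself. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int pin_to_cpus(const int *cpus, int n)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int i = 0; i < n; i++)
        CPU_SET(cpus[i], &set);
    /* pid 0 means "the calling thread" */
    return sched_setaffinity(0, sizeof(set), &set);
}

int main(void)
{
    int small_cores[] = {0, 1, 2, 3};  /* background/scripted work */
    int big_cores[]   = {4};           /* encoding, compiling, ... */

    if (pin_to_cpus(small_cores, 4) == 0)
        printf("pinned to the small cores\n");
    if (pin_to_cpus(big_cores, 1) == 0)
        printf("re-pinned to the big core\n");
    return 0;
}
```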
Senior Member
Posts: 7431
Joined: 2012-11-10
Yup pretty much what I was thinking.
Haha nowadays, Mac (not sure about iOS) is horrendously slow compared to Windows and Linux. I'm not entirely sure if performance-per-watt is better or not, but I assume it's not when the exact same task runs much slower on Mac than it does on other OSes.
Senior Member
Posts: 14091
Joined: 2004-05-16
Yup pretty much what I was thinking.
Haha nowadays, Mac (not sure about iOS) is horrendously slow compared to Windows and Linux. I'm not entirely sure if performance-per-watt is better or not, but I assume it's not when the exact same task runs much slower on Mac than it does on other OSes.
Yeah, I'm speaking strictly of mobile, where Apple's mobile SoCs are significantly faster than the ARM competition:
https://www.anandtech.com/show/13392/the-iphone-xs-xs-max-review-unveiling-the-silicon-secrets/6
Even before they moved to big.LITTLE, though, they'd almost always had a 50% performance advantage over competing ARM SoCs with half the core count. I also recall Ryan Shrout and the others from PC Perspective (who ironically all work at Intel now) talking with Qualcomm engineers about how difficult big.LITTLE was to implement on the scheduler/software side, and how it took them a few generations to even see an advantage from utilizing it. So I think someone like Intel, looking from the outside, was saying "hey, we need to compete with mobile, Apple doesn't need big.LITTLE to do it and we have a process advantage, it should be no problem!" Then they failed massively and ended up pulling out completely until now.
The difference here is the iGPU doesn't do anything for most desktop users. It's literally wasted space and wasted money. I think you somewhat misunderstand what I'm saying though (to be fair, I didn't explain it very well):
In the hypothetical CPU I'm thinking of, there would be at least two different kinds of cores with roughly the same FLOPS but structured very differently from each other, so you can maximize performance for a specific task. So for example, there could be a set of cores with a short pipeline, limited instructions, no SMT, and each core able to adjust its clocks independently. I figure such cores would be able to clock pretty high, since they're not very complex, but they would scale down to very low (sub-GHz) speeds efficiently too. These cores would be ideal for background tasks, scripted languages, some games, and basic programs that don't constantly churn data.
Then, there would be another set of cores with complex instructions, SMT, long pipelines, a narrower frequency range, and maybe a difference in how cache works. These cores are pretty much for foreground tasks that handle a lot of advanced calculations, like encoding/decoding, compiling, rendering, etc.
Note that both of these functions are traditionally handled by CPUs and should stay that way, so I'm not suggesting another separate processor in the way that a GPU functions.
I mean, they already kind of do this on desktop - it's basically what AVX is. But I agree: going forward I think we'll see more specialized core designs, as simply scaling to 16-32 threads isn't doable in a lot of workloads.
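(For illustration of that point: AVX is in effect specialized wide-vector hardware sitting next to the general-purpose pipeline, doing eight single-precision operations per instruction. A minimal example:)

```c
/* Eight float additions in one AVX instruction.
 * Build with: gcc -mavx avx_add.c */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];

    __m256 va = _mm256_loadu_ps(a);           /* load 8 floats */
    __m256 vb = _mm256_loadu_ps(b);
    _mm256_storeu_ps(out, _mm256_add_ps(va, vb)); /* 8 adds at once */

    for (int i = 0; i < 8; i++)
        printf("%.0f ", out[i]);  /* prints: 9 9 9 9 9 9 9 9 */
    printf("\n");
    return 0;
}
```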
Senior Member
Posts: 6365
Joined: 2010-10-17
From Intel's first attempt? No, not at all. At least, they weren't any good at what they were supposed to be.
As for Lakefield, there's not enough info for anyone to make a judgment.
The question is whether the user is going to be sitting in front of it yelling "hurry up, you stupid machine!"

If it's like the 8-core one we have at work: yes, they will. It's not slow, but it's not as fast as was expected. :(

This should be nice, as most users don't use all the cores of their computer (for example, my wife uses 2 threads at most on her 4C/8T... what a waste of money lol)
Energy efficient doesn't mean fast

Senior Member
Posts: 14091
Joined: 2004-05-16
What I don't understand is why Intel (or AMD, for that matter) didn't do this big.LITTLE-like architecture years ago. Every task is different. Some work best with long single-threaded pipelines. Some can easily make do with short pipelines at low clocks. Some don't need any advanced instruction sets at all. Others work best multi-threaded. Having a single CPU with a variety of cores that excel at different workloads would really maximize efficiency. Such a CPU wouldn't be of much interest to those with more constant workloads (like workstations or servers), but it'd be great for pretty much everything else.
Both AMD and Intel (but mostly AMD) are leading us to believe that what we need is more cores, but what we really need are specialized cores. Despite what a lot of people think, many CPU-bound tasks are never going to become multi-threaded, nor should they. That's not to say having more cores is a bad thing, but rather, it's not the only thing we should be focusing on.
Intel probably thought they could just shove their modern architecture into a phone and abuse their process advantage to compete with ARM. It's way cheaper, and they came pretty close, but ARM clearly won. Also remember that Apple has had a massive performance/power advantage on the CPU side for a long time now, and they only moved to big.LITTLE relatively recently, with the A10 in 2016. So I think a lot of engineers were questioning whether it was even necessary at the time.
I dunno, I think it would most likely use up a lot of precious die space and see very little use in desktop systems. People already complain about the amount of die space that Intel's iGPUs use, saying it could be used for more cores instead. The big.LITTLE design makes sense for smartphones and tablets, which need to maximize battery life. Not so much for desktops (laptops are a different story, but laptops need to cater to desktop tasks as well).
I think he's speaking in reference to Lakefield, which is a mobile processor.