Intel confirms large LGA1700 socket for Alder Lake in developer document

If this comes after Rocket Lake (which is supposed to be yet another 14 nm part) and it has DDR5... Intel will finally move off 14 nm around 2022...

As long as it brings improvements and can be air cooled, it's good.

How many lakes are left? 😀
gdeliana:

How many lakes are left? 😀
There hasn't been a Vodka Lake yet, so there are a few names left.
sverek:

If this comes after Rocket Lake (which is supposed to be yet another 14 nm part) and it has DDR5... Intel will finally move off 14 nm around 2022...
Then they would be competing with Zen 4. Since 32 cores is probably close to the max for any mainstream socket, I wonder what they both do next.
Fox2232:

There hasn't been a Vodka Lake yet, so there are a few names left.
They already passed Whiskey Lake 😀, they're on their way to vodka now 😀

At this point I believe Intel will only drop the "Lake" naming when they finally drop 14 nm. Let's hope they don't run out of lakes before then... 😛
PrMinisterGR:

Then they would be competing with Zen 4. Since 32 cores is probably close to the max for any mainstream socket, I wonder what they both do next.
They should get into software and write applications and an OS that support all those cores. Game engines too.
PrMinisterGR:

Then they would be competing with Zen 4. Since 32 cores is probably close to the max for any mainstream socket, I wonder what they both do next.
I'm curious why you think 32 cores is the max for mainstream. Four 5 nm CPU dies and a 7 nm I/O die probably fit in a package, but what says that's the limit? If AMD made its cores smaller, it could fit six per CCX for 12-core dies (two CCXs each), and four of those dies would give us 48 cores on a mainstream CPU. Meanwhile, TSMC keeps developing smaller nodes that could free up space in the package and fit more onto each die.
Silva:

I'm curious why you think 32 cores is the max for mainstream. Four 5 nm CPU dies and a 7 nm I/O die probably fit in a package, but what says that's the limit? If AMD made its cores smaller, it could fit six per CCX for 12-core dies (two CCXs each), and four of those dies would give us 48 cores on a mainstream CPU. Meanwhile, TSMC keeps developing smaller nodes that could free up space in the package and fit more onto each die.
There is a hard stop from physics at around 1-2 nm, and even before that there are huge issues with heat density, which we already see in both TSMC's 7 nm and Intel's 10 nm. So unless the fundamentals change hugely, there is a stop. 48 cores is also kind of useless for most tasks. I would argue the best move would be to start shipping specialized hardware: an I/O controller like the new consoles have, or specific accelerators. I honestly cannot see any use for more than 32 cores on a desktop for the next five years, unless something fundamental changes in software.
PrMinisterGR:

There is a hard stop from physics at around 1-2 nm, and even before that there are huge issues with heat density, which we already see in both TSMC's 7 nm and Intel's 10 nm. So unless the fundamentals change hugely, there is a stop. 48 cores is also kind of useless for most tasks. I would argue the best move would be to start shipping specialized hardware: an I/O controller like the new consoles have, or specific accelerators. I honestly cannot see any use for more than 32 cores on a desktop for the next five years, unless something fundamental changes in software.
Heat density mostly limits clocks, because heat is a function of both clock (cycles of 0/1 flips) and the voltage needed to reach that flip rate. While we do not generally need that many cores on "desktops", as you wrote, we could easily have 64 Zen 2 cores at something like 2.2~2.4 GHz (on all cores) within 160~200 W, depending on the workload type. Sure, there is going to be a hard limit. But with each density jump we see marketing claims like 10% higher clock at the same power, or 30% better power efficiency at the same clock (which generally prove truthful). The same will likely apply to TSMC's N5 versus N7, and we have seen AMD's claims about exceeding its power-efficiency targets. On top of the manufacturing-process improvement, there are likely big additional savings from architecture optimizations. I would not be surprised to see Zen 3 8C/16T laptops outperforming Zen+ desktops at 1/4 to 1/3 of the power draw.
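A rough back-of-the-envelope of that 64-core estimate, assuming the textbook dynamic-power relation P ≈ C·V²·f (every number below is an illustrative guess, not a measurement):

[code]
# Dynamic power scales roughly with C * V^2 * f; core count multiplies it.
# All baseline and target numbers here are illustrative assumptions.

def scaled_power(base_power_w, f_ratio, v_ratio, core_ratio=1.0):
    """Scale a baseline core-power figure by clock, voltage and core-count ratios."""
    return base_power_w * core_ratio * f_ratio * v_ratio ** 2

# Hypothetical baseline: 16 cores at 4.0 GHz and 1.30 V drawing 140 W of core power.
# Quadruple the cores (64) but drop to ~2.3 GHz at a lower ~0.95 V:
estimate = scaled_power(140.0, f_ratio=2.3 / 4.0, v_ratio=0.95 / 1.30, core_ratio=4.0)
print(f"Estimated core power: {estimate:.0f} W")  # ~172 W, inside the 160~200 W guess
[/code]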
PrMinisterGR:

There is a hard stop from physics at around 1-2 nm, and even before that there are huge issues with heat density, which we already see in both TSMC's 7 nm and Intel's 10 nm. So unless the fundamentals change hugely, there is a stop. 48 cores is also kind of useless for most tasks. I would argue the best move would be to start shipping specialized hardware: an I/O controller like the new consoles have, or specific accelerators. I honestly cannot see any use for more than 32 cores on a desktop for the next five years, unless something fundamental changes in software.
Would you say you needed more than 4 cores 10 years ago? I think that way of thinking is flawed, because generally it's the software that catches up with the hardware. Most work software already takes advantage of multiple threads and is improving every day. As for games, they're usually gimped by consoles, which have had low thread counts up until the next generation: both new consoles will have 16 threads and a lot of GPU power, games will evolve fast in 2021 to take full advantage, and by 2022 we will need PCs with those sweet 16 threads minimum to run the ports.
Fox2232:

Heat density mostly limits clocks, because heat is a function of both clock (cycles of 0/1 flips) and the voltage needed to reach that flip rate. While we do not generally need that many cores on "desktops", as you wrote, we could easily have 64 Zen 2 cores at something like 2.2~2.4 GHz (on all cores) within 160~200 W, depending on the workload type. Sure, there is going to be a hard limit. But with each density jump we see marketing claims like 10% higher clock at the same power, or 30% better power efficiency at the same clock (which generally prove truthful). The same will likely apply to TSMC's N5 versus N7, and we have seen AMD's claims about exceeding its power-efficiency targets. On top of the manufacturing-process improvement, there are likely big additional savings from architecture optimizations. I would not be surprised to see Zen 3 8C/16T laptops outperforming Zen+ desktops at 1/4 to 1/3 of the power draw.
This is true. There is also the fact that we could be stuck at a nominal 3 nm while more and more of the layers of chips made on that nominal 3 nm node get the actual 3 nm treatment. Do you really see more than a 10x gain in pure performance the way things are going? I'm not so optimistic. But you have good points.
Silva:

Would you say you needed more than 4 cores 10 years ago? I think that way of thinking is flawed, because generally it's the software that catches up with the hardware. Most work software already takes advantage of multiple threads and is improving every day. As for games, they're usually gimped by consoles, which have had low thread counts up until the next generation: both new consoles will have 16 threads and a lot of GPU power, games will evolve fast in 2021 to take full advantage, and by 2022 we will need PCs with those sweet 16 threads minimum to run the ports.
Technically, you would still be fine with a quad-core with perfect hyperthreading. A lot of problems can be parallelized, but in the end not all; in fact, most cannot, and that's a hard mathematical stop as well. We can expand to things like physics and audio, but there specially designed ASICs are much better than general-purpose CPUs. AI is the same: a matrix accelerator makes much more sense than a dedicated CPU. AI, audio, graphics, and I/O are all done better by specialized hardware, so really, what's the point of going beyond 32 "real" CPU cores, at least for the foreseeable decade? The pace of change is bound to slow down even more, and I still have a CPU from eight years ago that can do most things fairly competently. Can you really see a 32-core Zen 3 realistically running out of juice before the next decade is out? Unless accelerators become more common, I cannot.
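That hard mathematical stop is usually written as Amdahl's law: with a parallel fraction p of the work on n cores, speedup = 1 / ((1 - p) + p/n). A minimal sketch of how quickly it flattens (the 90% parallel fraction is just an illustrative assumption):

[code]
# Amdahl's law: overall speedup on n cores when only a fraction p of the
# work can be parallelized; the serial remainder caps the gain.

def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even a 90%-parallel workload can never exceed 10x, however many cores:
for n in (8, 16, 32, 64, 1_000_000):
    print(f"{n:>9} cores: {amdahl_speedup(0.90, n):5.2f}x")
[/code]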
PrMinisterGR:

This is true. There is also the fact that we could be stuck at a nominal 3 nm while more and more of the layers of chips made on that nominal 3 nm node get the actual 3 nm treatment. Do you really see more than a 10x gain in pure performance the way things are going? I'm not so optimistic. But you have good points.
That's a really hard question to answer. I remember having an Athlon XP 2200+ at some 1800 MHz eating 65 W, and that was just one core at a comparably low clock. Today we have 1C/2T above 4 GHz eating less than 10 W (and the difference in transistor count is just...). When I look at the Ryzen 7 2700X, my highest stable OC at reasonable voltage ate 220 W under full load. Yet the Ryzen 9 3900X has 50% more cores and 80% higher performance at around half the power draw. I can't predict the achievable clock of the next node, nor the voltage, so I make no assumptions about power efficiency. Just experience, and hope that technology can move further.

If we were talking about Intel, I would just ask: "Is this their new architecture? If not, just don't expect meaningful improvements." But AMD improves each Zen generation in a considerable way. And while one can call it bold experimenting, it is progress, and they do learn from it. I am not even sure the chiplet approach will survive to Zen 4. Maybe it will be used for 16C/32T, and maybe it will all be inside one CCD with the IMC and no CCX to be found. If there were an actual leak on what's planned as the biggest sources of improvement for Zen 3 and Zen 4, we could be as surprised as when the chiplets and I/O die were confirmed, or when we realized there is space for a second CCD under the IHS of Zen 2.
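Taking those figures at face value, the perf-per-watt jump is easy to put a number on (all inputs are the approximate figures from the post above, nothing measured):

[code]
# Rough perf-per-watt comparison using the approximate figures above.
old_perf, old_power = 1.0, 220.0   # overclocked Ryzen 7 2700X, full load
new_perf, new_power = 1.8, 110.0   # Ryzen 9 3900X: +80% perf at ~half the power

gain = (new_perf / new_power) / (old_perf / old_power)
print(f"Perf/W improvement: ~{gain:.1f}x")  # ~3.6x
[/code]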
Fox2232:

That's a really hard question to answer. I remember having an Athlon XP 2200+ at some 1800 MHz eating 65 W, and that was just one core at a comparably low clock. Today we have 1C/2T above 4 GHz eating less than 10 W (and the difference in transistor count is just...). When I look at the Ryzen 7 2700X, my highest stable OC at reasonable voltage ate 220 W under full load. Yet the Ryzen 9 3900X has 50% more cores and 80% higher performance at around half the power draw. I can't predict the achievable clock of the next node, nor the voltage, so I make no assumptions about power efficiency. Just experience, and hope that technology can move further.
I was talking more about the real hard stop, 0.2 nm. Intel already has issues with deviations of a single atom, and that's at 14 nm. Listen to this video at 5:42: [youtube=f0gMdGrVteI]
Fox2232:

If we were talking about Intel, I would just ask: "Is this their new architecture? If not, just don't expect meaningful improvements." But AMD improves each Zen generation in a considerable way. And while one can call it bold experimenting, it is progress, and they do learn from it. I am not even sure the chiplet approach will survive to Zen 4. Maybe it will be used for 16C/32T, and maybe it will all be inside one CCD with the IMC and no CCX to be found. If there were an actual leak on what's planned as the biggest sources of improvement for Zen 3 and Zen 4, we could be as surprised as when the chiplets and I/O die were confirmed, or when we realized there is space for a second CCD under the IHS of Zen 2.
I would argue that there is no going back from chiplets. They make things way too convenient to be abandoned. EPYC can already combine external accelerators through them, and so can Xeon. If anything, a 16-core CPU with dedicated audio, physics, AI, network, and I/O chiplets would be a much better purchase than a 32-core one.
PrMinisterGR:

I was talking more about the real hard stop, 0.2 nm. Intel already has issues with deviations of a single atom, and that's at 14 nm. Listen to this video at 5:42: [youtube=f0gMdGrVteI]
This is not correct. They are not talking about issues from single-atom deviations, but about the ability to measure single-atom deviations, and about doing their best to keep control at that level of dimensions (atoms). That does not mean one atom here or there will result in a defect, and definitely not at 14 nm.
PrMinisterGR:

I would argue that there is no going back from chiplets. They make things way too convenient to be abandoned. EPYC can already combine external accelerators through them, and so can Xeon. If anything, a 16-core CPU with dedicated audio, physics, AI, network, and I/O chiplets would be a much better purchase than a 32-core one.
The video says it for you. When you can cram twice as many transistors into the same area, you can integrate. Sure, today 16C/32T plus everything you listed is doable in one chip, and beyond that one is wise to use chiplets. But double the transistor density and suddenly you have different options. And when we get to that real hard stop of 0.2 nm... I think we'll be able to get a 32C/64T CPU in the area of one chip that is economically viable. Maybe even on 3 nm.
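A toy illustration of that integration argument, holding die area fixed at a Zen 2 CCD's ~74 mm² and applying rough published logic-density multipliers for TSMC's nodes (SRAM and I/O shrink far less, so treat this as an optimistic upper bound):

[code]
# Toy scaling: cores that fit in one chiplet-sized die as logic density rises.
# Density multipliers are rough public estimates; SRAM scales much worse.
ccd_area_mm2 = 74.0                          # Zen 2 CCD: ~74 mm^2 for 8 cores + L3
density_vs_n7 = {"N7": 1.0, "N5": 1.8, "N3": 1.8 * 1.7}

for node, mult in density_vs_n7.items():
    cores = 8 * mult                         # same area, denser logic -> more cores
    print(f"{node}: ~{cores:.0f} cores per ~{ccd_area_mm2:.0f} mm^2 die")
[/code]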
@PrMinisterGR Technically, I couldn't play any new games (Battlefield 1 with more than 20 people on a server; 30+ would stutter and 40+ was unplayable) on my i5 2500K, because it would hit 100% usage and bottleneck my GPU hard, even an R9 270X. I know it's not an HT i7, but that wouldn't solve the issue, as HT isn't the same as a real core.

As soon as the challenges of having so many cores are solved and they scale well, you'll have games that use all threads in less than half a decade. The new generation of consoles will pave the way for new and better games that take advantage of these new CPUs. It's not hard to make software use the new resources; what is hard is to schedule the use of those resources. Unreal Engine is paving the way with new tech that scales LOD automatically; those engines will make game creators' lives a lot easier and more automated. Eventually, the engine alone will allocate all the resources automatically, and the creator just creates the game!
@Silva The thing is, there is one thread that needs to do the synchronization at the end, and there are math problems that can only be solved sequentially.