Guru3D.com


Intel confirms large LGA1700 socket for Alder Lake In Developer Document

by Hilbert Hagedoorn on: 06/29/2020 08:19 AM | source: momomo_us (Twitter) | 18 comment(s)

Although not announced deliberately, technical documentation has surfaced showing that Alder Lake, the processor generation following Rocket Lake, is indeed based on a Socket LGA 1700 design, which means 1700 pins.

It was Momomo who found and posted a screenshot of a technical document that indeed mentions 'LGA1700-ADL-S'. The document was grabbed from Intel's developer-tools site, inside a toolkit for developing VR software. As you'll notice, the name mentions both LGA1700 and ADL-S. ADL-S could bring support for DDR5 and PCIe 5.0, albeit that is very much speculation. According to rumors, the socket will measure 45 × 37.5 mm, a rectangle instead of the square we are currently used to. That won't be the only big change Alder Lake-S would bring: rumor has it that it will use a big.LITTLE design similar to that of smartphone processors, with eight high-performance cores and eight high-efficiency cores.

Alder Lake-S could arrive at the end of next year, as part of the 12th generation of Intel Core processors.

 








Related Stories

Silicon Lottery: Roughly 20% of Intel Core i7-10700K achieve a stable 5.1GHz 24/7 - 06/10/2020 10:09 AM
We have mentioned Silicon Lottery quite a few times already: if you are interested in purchasing a processor guaranteed to reach a certain overclocking frequency, you can buy one binned. Comet Lak...

Intel Core i9-10900K gets decapitated and cooled with Liquid Metal - 05/26/2020 09:20 AM
And I do mean the heat spreader was removed, i.e. the chip was delidded, which is a gnarly job as that thing is soldered on there. ...

G.SKILL DDR4 Memory Reaches Extreme Speeds with 10th Gen Intel Core Processors - 05/21/2020 08:13 AM
With the latest release of the 10th Gen Intel Core processors and Intel Z490 chipset-based motherboards, G.SKILL demonstrates that DDR4 memory is capable of reaching a higher tier of extreme speed tha...

Dynabook Offers 10th Gen Intel Core vPro on Portégé X Series and Tecra A Notebooks - 05/15/2020 07:51 AM
Dynabook announced the availability of the new 10th Gen Intel Core Processors with vPro technology on the company's premium Portégé X Series (X30-G, X30L-G, X40-G & X50-G) and performance Tecra...

Spanish etailer starts listing retail prices for 10th Generation Intel Core processors - 05/13/2020 09:09 AM
Though the official prices have been announced by Intel, it's always a case of wait-and-see what the retail prices will be like. One of the largest hardware stores in Spain, if not the largest, listed the pric...




Fox2232
Senior Member



Posts: 11808
Joined: 2012-07-20

#5804392 Posted on: 06/30/2020 12:50 AM
There is a hard stop with physics at around 1-2nm, and even before that there are huge issues with heat density, which we see already both in TSMC 7nm, and Intel's 10nm. So unless huge fundamentals change, there is a stop.
48 cores are kind of useless for most tasks. I would argue that the best would be to start shipping with specialized hardware. It could be an I/O controller, like the new consoles have, or specific accelerators. I honestly cannot see any use for more than 32 cores on a desktop for the next five years, unless anything fundamental changes with software.
Heat density only limits clock, because heat is a function of both clock (cycles of 1/0 flips) and the voltage needed to achieve that flip rate.
While we do not generally need that many cores on "desktops", as you wrote, we could easily have 64 Zen 2 cores at something like 2.2~2.4 GHz (on all cores) within 160~200 W, depending on workload type.
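The point that heat is driven by both clock and voltage is the standard CMOS dynamic-power relation, P ≈ C·V²·f. Here is a minimal sketch of how that trade-off plays out; the constant `c` and the voltage/frequency pairs are illustrative assumptions, not measured silicon data:

```python
# Illustrative sketch of CMOS dynamic power: P ≈ C * V^2 * f per core.
# The constant c and the voltage/frequency pairs below are made-up
# placeholders, not measured values from any real CPU.

def dynamic_power(cores, volts, ghz, c=4.0):
    """Relative dynamic power of a multi-core chip (arbitrary units)."""
    return cores * c * volts ** 2 * ghz

# A few cores pushed hard vs. many cores run slowly at low voltage:
high_clock = dynamic_power(cores=8, volts=1.35, ghz=4.5)
low_clock = dynamic_power(cores=64, volts=0.90, ghz=2.3)

print(f"8C  @ 4.5 GHz, 1.35 V -> {high_clock:.0f}")
print(f"64C @ 2.3 GHz, 0.90 V -> {low_clock:.0f}")
```

Because voltage enters squared, running many cores slower at lower voltage costs far less power per unit of work than pushing a few cores to high clocks, which is why a 64-core part at 2.2~2.4 GHz could plausibly fit a 160~200 W budget.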

Sure, there is going to be a hard limit. But with each density jump and improvement we see marketing claims like 10% higher clock at the same power, or 30% higher power efficiency at the same clock as before. (Which prove to be truthful.)
That will likely apply to TSMC's N5 vs N7 too.

We have seen AMD's claims about exceeding power-efficiency targets. On top of the improvement from the manufacturing process, there is likely another big saving from architecture optimizations. I would not be surprised to see Zen 3 8C/16T laptops outperforming Zen+ desktops at 1/4~1/3 of the power draw.

Silva
Senior Member



Posts: 1968
Joined: 2013-06-04

#5804556 Posted on: 06/30/2020 03:42 PM
There is a hard stop with physics at around 1-2nm, and even before that there are huge issues with heat density, which we see already both in TSMC 7nm, and Intel's 10nm. So unless huge fundamentals change, there is a stop.
48 cores are kind of useless for most tasks. I would argue that the best would be to start shipping with specialized hardware. It could be an I/O controller, like the new consoles have, or specific accelerators. I honestly cannot see any use for more than 32 cores on a desktop for the next five years, unless anything fundamental changes with software.

Would you say you needed more than 4 cores 10 years ago? I think that way of thinking is flawed, because generally it's the software that catches up with the hardware.
Most software for work already takes advantage of multiple threads and is improving every day. As for games, they're usually gimped by consoles, which have had low thread counts up until the next generation. Both new consoles will have 16 threads and a lot of GPU power, so games will evolve fast in 2021 to take full advantage, and by 2022 we will need PCs with those sweet 16 threads minimum to run the ports.

PrMinisterGR
Senior Member



Posts: 8099
Joined: 2014-09-27

#5804598 Posted on: 06/30/2020 05:33 PM
Heat density only limits clock, because heat is a function of both clock (cycles of 1/0 flips) and the voltage needed to achieve that flip rate.
While we do not generally need that many cores on "desktops", as you wrote, we could easily have 64 Zen 2 cores at something like 2.2~2.4 GHz (on all cores) within 160~200 W, depending on workload type.
Sure, there is going to be a hard limit. But with each density jump and improvement we see marketing claims like 10% higher clock at the same power, or 30% higher power efficiency at the same clock as before. (Which prove to be truthful.)
That will likely apply to TSMC's N5 vs N7 too.
We have seen AMD's claims about exceeding power-efficiency targets. On top of the improvement from the manufacturing process, there is likely another big saving from architecture optimizations. I would not be surprised to see Zen 3 8C/16T laptops outperforming Zen+ desktops at 1/4~1/3 of the power draw.
This is true. There is also the fact that we could be stuck at a nominal 3 nm while more and more of the chip's layers, still made on the nominal 3 nm node, get the actual 3 nm treatment. Do you really see more than a 10x gain in pure performance the way it's going? I'm not so optimistic. But you have good points.

Would you say you needed more than 4 cores 10 years ago? I think that way of thinking is flawed, because generally it's the software that catches up with the hardware.
Most software for work already takes advantage of multiple threads and is improving every day. As for games, they're usually gimped by consoles, which have had low thread counts up until the next generation. Both new consoles will have 16 threads and a lot of GPU power, so games will evolve fast in 2021 to take full advantage, and by 2022 we will need PCs with those sweet 16 threads minimum to run the ports.
Technically, you would still be fine with a quad-core with perfect hyperthreading. A lot of problems can be parallelized, but in the end, not all. In fact, most cannot, and that's a hard mathematical stop as well. We can expand to things like physics and audio, but in those cases specially designed ASICs are much better than general-purpose CPUs. That's true for most problems. AI is the same: a matrix accelerator makes much more sense than a dedicated CPU.
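The "hard mathematical stop" on parallelization is usually formalized as Amdahl's law. Here is a minimal sketch; the 90% parallel fraction is an illustrative assumption:

```python
# Amdahl's law: speedup with n cores when a fraction p of the work is
# parallelizable. Illustrative sketch; p = 0.90 is an assumed workload.

def amdahl_speedup(p, n):
    """Overall speedup on n cores for parallel fraction p (0 <= p <= 1)."""
    return 1.0 / ((1.0 - p) + p / n)

for n in (4, 8, 32, 1024):
    print(f"{n:5d} cores -> {amdahl_speedup(0.90, n):.2f}x")
```

Even with unlimited cores, the speedup is capped at 1/(1-p): a workload that is 90% parallel can never run more than 10x faster no matter how many cores are added, which is why piling on cores stops paying off.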

AI/audio/graphics/I/O are all done better with specialized hardware, so really, what's the point of going beyond 32 cores of "real" CPU, at least for the foreseeable decade? The pace of change is bound to slow down even more, and I still have a CPU from eight years ago that can do most things fairly competently. Can you really see a 32-core Zen 3 realistically running out of juice before the next decade is out? Unless accelerators become more common, I cannot.

Fox2232
Senior Member



Posts: 11808
Joined: 2012-07-20

#5804612 Posted on: 06/30/2020 06:06 PM
This is true. There is also the fact that we can be stuck at a nominal 3nm, but then more and more of the layers of the chips still being made in the nominal 3nm node, get the actual 3nm treatment. Do you really see more than a 10x in pure performance the way it's going? I'm not so optimistic. But you have good points.

That's a really hard question to answer. I remember having an Athlon XP 2200+ at some 1800 MHz eating 65 W, and that was just one core at a comparably low clock.
Today we have 1C/2T running above 4 GHz while eating less than 10 W. (And the difference in transistor count is just...)
When I look at the Ryzen 7 2700X, my highest stable OC at reasonable voltage ate 220 W under full load. Yet the Ryzen 9 3900X has 50% more cores and 80% higher performance at around half the power draw.
I can't predict the achievable clock of the next node, nor the voltage, so I can make no assumptions about power efficiency. Just experience, and hope that the technology can move further.
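The 2700X vs 3900X comparison implies a large jump in performance per watt. A quick back-of-the-envelope check using only the figures quoted in the post:

```python
# Performance-per-watt gain implied by the figures in the post above:
# ~80% higher performance at roughly half the power draw.
perf_ratio = 1.80   # 3900X performance vs. the overclocked 2700X
power_ratio = 0.50  # roughly half the power draw

perf_per_watt_gain = perf_ratio / power_ratio
print(f"Implied perf/W gain: {perf_per_watt_gain:.1f}x")  # 3.6x
```

A 3.6x efficiency jump in one generation is exactly the kind of combined node-plus-architecture gain being discussed here.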

If we were talking about Intel, I would just ask: "Is this their new architecture? If not, just don't expect meaningful improvements."
But AMD does improve each Zen generation in a considerable way. And while one can call it bold experimenting, it is progress and they do learn from it.
I am not even sure the chiplet approach can survive to Zen 4. Maybe it will be used for 16C/32T, and maybe it will all be inside one CCD with the IMC and no CCX to be found.
If there were an actual leak on what's planned for Zen 3 and 4 as the biggest sources of improvement, we could be as surprised as when the chiplets and I/O die got confirmed.
Or when we realized there is space for a 2nd CCD under the IHS of Zen 2.

PrMinisterGR
Senior Member



Posts: 8099
Joined: 2014-09-27

#5804857 Posted on: 07/01/2020 12:39 PM
That's a really hard question to answer. I remember having an Athlon XP 2200+ at some 1800 MHz eating 65 W, and that was just one core at a comparably low clock.
Today we have 1C/2T running above 4 GHz while eating less than 10 W. (And the difference in transistor count is just...)
When I look at the Ryzen 7 2700X, my highest stable OC at reasonable voltage ate 220 W under full load. Yet the Ryzen 9 3900X has 50% more cores and 80% higher performance at around half the power draw.
I can't predict the achievable clock of the next node, nor the voltage, so I can make no assumptions about power efficiency. Just experience, and hope that the technology can move further.

I was more talking about the real hard stop, 0.2nm. Intel already has issues with deviations of one atom, and that's at 14nm.

Listen to this video at 5:42



If we were talking about Intel, I would just ask: "Is this their new architecture? If not, just don't expect meaningful improvements."
But AMD does improve each Zen generation in a considerable way. And while one can call it bold experimenting, it is progress and they do learn from it.
I am not even sure the chiplet approach can survive to Zen 4. Maybe it will be used for 16C/32T, and maybe it will all be inside one CCD with the IMC and no CCX to be found.
If there were an actual leak on what's planned for Zen 3 and 4 as the biggest sources of improvement, we could be as surprised as when the chiplets and I/O die got confirmed.
Or when we realized there is space for a 2nd CCD under the IHS of Zen 2.
I would argue that there is no going back from chiplets. They make things way too convenient to be abandoned. EPYC can already combine external accelerators using them, and so can Xeon.
If anything, a 16-core CPU with dedicated audio, physics, AI, network and I/O chiplets would be a much better purchase than a 32-core one.





Guru3D.com © 2023