Intel Lunar Lake Core Ultra 5 234V CPU Spotted: 8-Core Configuration and Battlemage iGPU

8C/8T. Looks like they've definitely dropped hyperthreading for P-cores, as rumoured. The cynic in me assumes they're doing this to leave performance on the table for the next marketing "generation". The back end of a P-core is pretty wide, and without HT it will be under-utilised in parallel workloads... unless Intel have come up with some other way around this.
stampedeadam:

8C/8T. Looks like they've definitely dropped hyperthreading for P-cores, as rumoured. The cynic in me assumes they're doing this to leave performance on the table for the next marketing "generation". The back end of a P-core is pretty wide, and without HT it will be under-utilised in parallel workloads... unless Intel have come up with some other way around this.
I highly doubt Intel is simply removing HT and calling it a day. Some of the architecture was very much designed around its existence, so I'm sure the die as a whole will be notably smaller and will probably handle higher clock speeds at a lower wattage.
With the ability to cram so many cores into a small die, SMT is a relic of the past, from the age when cores were huge and you could only fit one or two in a CPU. I wouldn't be surprised if AMD removes it as well once they can make tiny core dies with 16 or 32 cores each and glue 2 to 12 of them together on the same substrate... Compilers today have gotten so good that they can use a very high percentage of a core's resources, leaving little to nothing left for SMT. Context switching probably uses more power than just leaving the little unused parts idle and dedicating another of the bajillion cores to a thread.
wavetrex:

With the ability to cram so many cores into a small die, SMT is a relic of the past, from the age when cores were huge and you could only fit one or two in a CPU. I wouldn't be surprised if AMD removes it as well once they can make tiny core dies with 16 or 32 cores each and glue 2 to 12 of them together on the same substrate... Compilers today have gotten so good that they can use a very high percentage of a core's resources, leaving little to nothing left for SMT. Context switching probably uses more power than just leaving the little unused parts idle and dedicating another of the bajillion cores to a thread.
Intel will have a node advantage, and they can afford to go this route as they control (for the most part) their own silicon. AMD has to go the most efficient route and has developed technology to use different nodes (to suit performance needs). Another point is that AMD is well familiar with small(er) cores, plus they have an intimate relationship with TSMC, the world leader in ARM fabrication. Their decision to keep SMT is based on the market and market expectations. To be perfectly honest, until Intel shows the performance-per-watt improvements of a node shrink (and applies them to power demand instead of just a speed increase), I'm not terribly interested except for their technical progression. IMHO, the AMD scheme of "e" cores is the right one, as they're real honest-to-goodness cores and there's no extra reliance on the scheduler.
I don't understand what's so special about this new CPU; it looks like more of the same to me.
wavetrex:

With the ability to cram so many cores into a small die, SMT is a relic of the past, from the age when cores were huge and you could only fit one or two in a CPU. I wouldn't be surprised if AMD removes it as well once they can make tiny core dies with 16 or 32 cores each and glue 2 to 12 of them together on the same substrate... Compilers today have gotten so good that they can use a very high percentage of a core's resources, leaving little to nothing left for SMT. Context switching probably uses more power than just leaving the little unused parts idle and dedicating another of the bajillion cores to a thread.
SMT is still very good; it's a very cheap way to get better core utilization on chips with longer execution pipelines. IBM's current POWER chips have up to 8 threads per core, for instance. I think a big reason Intel is moving away from SMT is power consumption and heat dissipation. On upcoming nodes the density is wickedly high (>200 MTr/mm² pretty soon), so keeping more of your core active at any given time can end up hurting your performance due to hotspots. SMT can offer significant uplifts, >20%, but if you have to clock down to use it, it ends up not yielding as much. If you look at the current chips, the Ryzen 7000 and the Intel Alder Lake/Raptor Lake SKUs, the top-end SKUs are already basically uncoolable via normal means; if you let them draw as much power as they want, they will pretty much always hit their Tjmax, even on mid-range AIOs, and that's with "only" 60-140 MTr/mm². The problem will get significantly worse with each new generation of products, especially if they keep pushing the clock speeds up. In short, SMT doesn't really belong on a high-frequency core when you have smaller, lower-clocked cores doing the bulk of the multithreading anyway, in a thermally limited situation.
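The clock-down argument above is easy to put in numbers. A minimal sketch, with purely illustrative assumptions (a 20% SMT uplift and hypothetical clock figures, not measurements of any real chip):

```python
# Toy arithmetic for the point above: SMT's throughput gain can be eaten by
# the clock reduction needed to stay inside a thermal limit. The 20% uplift
# and all clock speeds below are illustrative assumptions, not real data.

def net_throughput(base_clock_ghz: float, smt_uplift: float,
                   smt_clock_ghz: float) -> float:
    """Relative multithreaded throughput with SMT on (at a possibly reduced
    clock), versus SMT off running at the full base clock."""
    return (1.0 + smt_uplift) * (smt_clock_ghz / base_clock_ghz)

# SMT off at 5.5 GHz vs SMT on but thermally limited to 4.8 GHz:
print(net_throughput(5.5, 0.20, 4.8))  # ~1.047 -> only ~5% net gain
# If hotspots force a drop to 4.5 GHz, SMT becomes a slight net loss:
print(net_throughput(5.5, 0.20, 4.5))  # ~0.98
```

Under these assumptions a 20% uplift shrinks to single digits, or vanishes entirely, once the thermal clock penalty is factored in.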
Removing SMT seems so strange. On the other hand, they did the same with the 9th-gen i7-9700 for segmentation purposes, only to bring it back in the 10700. Are they so confident that they can leave it out for the next gen? Does their new architecture not need it? Is it there but such a huge security risk that they disable it? Is their Thread Director improved enough that it's just plain better to push the load onto an E-core or low-power E-core? We can't know. No matter the reasoning behind the lack of SMT, it changes nothing; price, performance, and consumption will tell us if they are good or bad! Can't wait to see the next CPU battle of the upcoming gen!
Venix:

Removing SMT seems so strange. On the other hand, they did the same with the 9th-gen i7-9700 for segmentation purposes, only to bring it back in the 10700. Are they so confident that they can leave it out for the next gen? Does their new architecture not need it? Is it there but such a huge security risk that they disable it? Is their Thread Director improved enough that it's just plain better to push the load onto an E-core or low-power E-core? We can't know. No matter the reasoning behind the lack of SMT, it changes nothing; price, performance, and consumption will tell us if they are good or bad! Can't wait to see the next CPU battle of the upcoming gen!
I don't see them bringing it back. In most applications, HT just isn't as good as it used to be since it was so crippled by security mitigations. As pointed out by @user1, HT seems to be holding Intel back from attaining/maintaining peak performance due to heat. I haven't seen any performance-per-watt tests with HT on vs off in a while, but I get the impression that with a fully mitigated OS and chip, HT isn't doing enough to improve efficiency. Ironically, Intel could just not destroy their efficiency by pushing for such high clock speeds, but it seems like that's still the only thing giving them an edge over AMD. As for price, yeah, we consumers likely won't see the benefit, but Intel will. HT may have existed for a while, but the engineering costs for it still aren't free. They also might be able to get away with smaller caches, which means a smaller die, which means either more cores per die or a lower cost per die.
Good friends, perhaps an uneducated idea, but can't they make something like a palm-sized CPU so it is easier to cool? Or is it necessary to keep everything compact and close together for best performance?
Horse Hooves Clomping:

Good friends, perhaps an uneducated idea, but can't they make something like a palm-sized CPU so it is easier to cool? Or is it necessary to keep everything compact and close together for best performance?
Well, yes, they can; with tiles it might even be viable now. But as a monolithic die it would be insanely expensive. A 300 mm wafer costs what now, ~$16k? So a palm-sized CPU would be what, 10 CPUs per wafer? And on a node that isn't extremely mature, there's a chance that with defects you'd get 0 fully working chips out of it, while if you go with a chiplet or tile approach you print, say, 300 chips on there and get maybe 30 completely non-working, 20 partially defective, and 250 fully working chips. (Random numbers, but you get the gist of it, I hope.)
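The "random numbers" above match what the standard Poisson yield model predicts. A quick sketch, where the defect density and die sizes are assumed for illustration (not real foundry figures):

```python
# Sketch of why small chiplets yield far better than one huge monolithic die,
# using the classic Poisson yield model: Y = exp(-D * A), where D is the
# defect density (defects/cm^2) and A is the die area (cm^2).
# The defect density and die areas below are illustrative assumptions.
import math

WAFER_AREA_CM2 = math.pi * (30.0 / 2) ** 2  # 300 mm wafer, ~706 cm^2
DEFECT_DENSITY = 0.2                        # assumed defects per cm^2

def dies_and_yield(die_area_cm2: float) -> tuple[int, float, float]:
    """Return (gross dies per wafer, yield fraction, expected good dies)."""
    gross = int(WAFER_AREA_CM2 / die_area_cm2)  # ignores edge loss for simplicity
    y = math.exp(-DEFECT_DENSITY * die_area_cm2)
    return gross, y, gross * y

# A 1 cm^2 chiplet vs a 100 cm^2 "palm-sized" monolithic die:
for area in (1.0, 100.0):
    gross, y, good = dies_and_yield(area)
    print(f"{area:6.1f} cm^2: {gross:4d} dies/wafer, yield {y:.2%}, ~{good:.1f} good")
```

With these assumptions the chiplet wafer yields hundreds of good dies while the palm-sized die has an essentially zero chance of a single defect-free chip, which is the gist of the comment above.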
tunejunky:

To be perfectly honest, until Intel shows the performance-per-watt improvements of a node shrink (and applies them to power demand instead of just a speed increase), I'm not terribly interested except for their technical progression. IMHO, the AMD scheme of "e" cores is the right one, as they're real honest-to-goodness cores and there's no extra reliance on the scheduler.
I agree that Intel's efficiency-core strategy will show whether it has real teeth or is simply smoke and mirrors with the upcoming Skymont cores, which will be used for Arrow Lake. If Skymont's IPC is appreciably increased, to, say, around Zen 3 or Ice Lake levels, plus a massive performance-per-watt and performance-per-mm² increase from the node shrink, then the efficiency-core strategy could conceivably be a massive threat to AMD. Looking at the evolution of the "mont" cores, there is a much greater improvement in IPC and performance gen-on-gen compared to what we see with the big cores. I used to be against efficiency cores when Intel first announced them with Alder Lake, but I can see how useful they are for increasing multithreaded performance, and even gaming performance with the Windows scheduling improvements in combination with the Thread Director.
schmidtbag:

I don't see them bringing it back. In most applications, HT just isn't as good as it used to be since it was so crippled by security mitigations. As pointed out by @user1, HT seems to be holding Intel back from attaining/maintaining peak performance due to heat. I haven't seen any performance-per-watt tests with HT on vs off in a while, but I get the impression that with a fully mitigated OS and chip, HT isn't doing enough to improve efficiency. Ironically, Intel could just not destroy their efficiency by pushing for such high clock speeds, but it seems like that's still the only thing giving them an edge over AMD. As for price, yeah, we consumers likely won't see the benefit, but Intel will. HT may have existed for a while, but the engineering costs for it still aren't free. They also might be able to get away with smaller caches, which means a smaller die, which means either more cores per die or a lower cost per die.
I remember doing HT-on vs HT-off tests on previous Intel architectures in transcoding, from Sandy Bridge-E to Broadwell-E, and the performance increase was always double-digit, usually around 18 to 20%. But in the efficiency-core era, the performance increase from HT has dropped to around 6 to 8% at most when using Handbrake, which means the efficiency cores are already soaking up most of the parallelism in these applications. That makes HT more of a liability than an advantage... not to mention the security vulnerabilities.
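For what it's worth, the kind of comparison described above boils down to converting two encode times into a throughput uplift. A tiny sketch, with made-up times chosen to land in the ranges mentioned (not real Handbrake results):

```python
# Helper for an HT-on vs HT-off comparison: percent throughput uplift
# computed from two measured wall-clock encode times. The example times
# below are illustrative, not real benchmark data.

def ht_uplift(time_ht_off_s: float, time_ht_on_s: float) -> float:
    """Percent throughput gain from HT, given encode times with HT off/on."""
    return (time_ht_off_s / time_ht_on_s - 1.0) * 100.0

print(ht_uplift(600.0, 500.0))  # 600 s -> 500 s is a 20% uplift
print(ht_uplift(600.0, 560.0))  # ~7.1%, in the E-core-era range above
```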