Intel preps 16-core Xeon D-1571 processor at 45 Watt

45 W and dual-channel memory, so it suits low-cost motherboards (for the pro segment, of course, lol). It's starting to overshadow Atom configurations for file servers and micro servers...
Moderator
...Next CPU in the Xbox Two... :D Joking, of course. It's CPUs like this that make you wish games were more multithreaded.
> It's CPUs like this that make you wish games were more multithreaded.
Indeed. This thing will be destroyed by a simple 6600K when it comes to games.
> ...Next CPU in the Xbox Two... :D Joking, of course. It's CPUs like this that make you wish games were more multithreaded.
> Indeed. This thing will be destroyed by a simple 6600K when it comes to games.
Yes, single-threaded performance is king for now in most everything. However, some games and other applications would do better with more threads; massive RTS games, for example, could be made to use a ton of threads. Most games can't be threaded much more heavily than they already are, or at least it wouldn't be worth the trouble for the minuscule performance gained in certain situations. If you're playing a game and no single core is pegged, more threading probably won't help. And even then, the thread that is pegging a core sometimes has to run serially; the only other option is that the game was coded poorly and that thread is just wasting CPU cycles.
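To put the "ton of threads" idea in concrete terms, here's a minimal C++ sketch of an RTS-style tick that slices per-unit updates across hardware threads. The Unit type and update logic are made up purely for illustration; real engines carry far more state and cross-unit dependencies:

#include <algorithm>
#include <cstdio>
#include <thread>
#include <vector>

// Hypothetical per-unit state; real engines carry far more than this.
struct Unit { float x = 0, y = 0, vx = 1, vy = 1; };

// Advance one unit by one tick. Independent per unit, so it parallelizes.
void update_unit(Unit& u, float dt) {
    u.x += u.vx * dt;
    u.y += u.vy * dt;
}

// Split the unit array into one contiguous slice per hardware thread.
void update_units_parallel(std::vector<Unit>& units, float dt) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    size_t chunk = (units.size() + n - 1) / n;
    for (unsigned t = 0; t < n; ++t) {
        size_t begin = t * chunk;
        size_t end = std::min(units.size(), begin + chunk);
        if (begin >= end) break;
        workers.emplace_back([&units, begin, end, dt] {
            for (size_t i = begin; i < end; ++i) update_unit(units[i], dt);
        });
    }
    for (auto& w : workers) w.join();  // barrier: the frame waits for every slice
}

int main() {
    std::vector<Unit> army(100000);
    update_units_parallel(army, 1.0f / 60.0f);
    std::printf("unit 0 at (%f, %f)\n", army[0].x, army[0].y);
}

This only works cleanly because each unit's update touches nothing but its own state; the moment units interact, you're back to the synchronization problems described below.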
It's not only about "poor coding". Achieving good multi-core performance is especially hard in multiplayer games. Basically, for a 60 fps game you have about 16 ms to prepare an entire frame; at 120 Hz, about 8 ms. It's tough. You have to calculate all the physics, game logic, network code, audio, and graphics in that short a time. Work partitioning, distribution, asynchronous computation, gathering results, and composition all have to complete in that tiny fraction of a second. Of course, some computations keep their value for longer than one frame, e.g. some AI, pathing, and general forecasts, but paired with network synchronization it's super challenging. On top of that, synchronization between cores (making one core aware that another has finished, and flushing the data) takes a lot of time.

Distribution of non-time-critical workloads is doable, and while not trivial, it's well described. I wouldn't expect distributing tasks like engineering computations or rendering across many cores and machines to be that difficult for an experienced software developer. But loading 8+ cores effectively with a real-time flow of multiple data streams that depend on each other is exceptionally tough. This makes me think that:
a) games will be optimized for at most 4 cores for quite a long time;
b) games will be capped at 60 or even 30 fps to give more time to calculate the frame.
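To make the budget concrete, here's a toy fork-join frame in C++. The subsystem functions are just sleeps standing in for real work, but the shape (spawn independent tasks, join before composing the frame, compare against the 16 ms budget) is the point:

#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

// Stand-ins for real subsystems; each just burns a little time.
void physics()    { std::this_thread::sleep_for(std::chrono::milliseconds(3)); }
void game_logic() { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }
void audio()      { std::this_thread::sleep_for(std::chrono::milliseconds(1)); }

int main() {
    using clock = std::chrono::steady_clock;
    const auto budget = std::chrono::milliseconds(16);  // ~60 fps

    auto start = clock::now();

    // Fork: run independent subsystems concurrently...
    auto f1 = std::async(std::launch::async, physics);
    auto f2 = std::async(std::launch::async, game_logic);
    auto f3 = std::async(std::launch::async, audio);

    // ...and join: rendering can't start until their results exist.
    f1.get(); f2.get(); f3.get();

    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start);
    std::printf("frame work took %lld ms of a %lld ms budget\n",
                (long long)elapsed.count(), (long long)budget.count());
}

The join is exactly the cross-core synchronization cost mentioned above: the frame is only as fast as its slowest task plus the overhead of waiting for it.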
Given that my gaming laptop has an Intel Core i7 with a TDP of 45 watts, this 16-core processor could apparently be put in a laptop! That would be very impressive, of course, even if 1.3 GHz isn't as fast as some chips with fewer cores can go.
> ...Next CPU in the Xbox Two... :D Joking, of course. It's CPUs like this that make you wish games were more multithreaded.
Hah. Makes me think of when everyone was buying Opterons back in the day, less due to core count and more for the CPU type, that is.
16 x 1.3GHz?
This will make an awesome low-power VM server chip. 45 W, wow.
Moderator
There was a quad-core 15 W Xeon released not too long ago as well... :)
Hell yeah... another low-power proc from the BLUE side...
> It's not only about "poor coding". Achieving good multi-core performance is especially hard in multiplayer games. [...] a) games will be optimized for at most 4 cores for quite a long time; b) games will be capped at 60 or even 30 fps to give more time to calculate the frame.
So this leaves us with only one way to improve, which is better per-core performance (IPC)? New architectures can solve all those latency issues you're talking about. It's just that devs code poorly, so only one core is used and the others sit idle.
> So this leaves us with only one way to improve, which is better per-core performance (IPC)? New architectures can solve all those latency issues you're talking about. It's just that devs code poorly, so only one core is used and the others sit idle.
It's not that easy. Architecture changes don't automatically solve multi-threading design issues. It's neither easy nor always possible to actually use all the threads available in a system; certain applications lend themselves better to multi-threading than others. Anyway, how many games are actually CPU bound these days? Not a lot. If you're running at 1080p or above, any reasonable quad core from the last few years is sufficient.
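The "certain applications lend themselves better" point is basically Amdahl's law: the serial fraction of the work caps the speedup no matter how many cores you add. A quick sketch, assuming (purely for illustration) that 60% of a frame's work parallelizes:

#include <cstdio>

// Amdahl's law: speedup = 1 / ((1 - p) + p / n),
// where p is the parallelizable fraction and n the core count.
double amdahl(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    const double p = 0.6;  // assumed: 60% of frame work parallelizes
    for (int n : {2, 4, 8, 16}) {
        std::printf("%2d cores -> %.2fx speedup\n", n, amdahl(p, n));
    }
    // Even with infinite cores the limit here is 1 / (1 - p) = 2.5x.
}

With these numbers, 16 slow cores buy you only about 2.29x over one core, which is why a fast quad core can beat a 1.3 GHz 16-core in games.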
> It's not that easy. Architecture changes don't automatically solve multi-threading design issues. [...] Anyway, how many games are actually CPU bound these days? Not a lot.
Quite a number of games, in fact, especially games where there's a lot happening on screen, i.e. lots of units, NPCs, etc. See below.
> Yes, single-threaded performance is king for now in most everything. However, some games and other applications would do better with more threads; massive RTS games, for example, could be made to use a ton of threads. [...]
Not with DX11. Massive RTS and MMO games suffer the most because of the large number of draw calls involved. Since DX11 communication between the CPU and GPU happens on one thread, draw calls are limited by design. This is where DX12 and Vulkan will come in. I'm sick and tired of SC2 having deplorable FPS every time there's a huge fight on screen, especially since I play a lot of 3v3 and 4v4 arcade games. I still have a screenshot on my desktop; look at this: http://s9.postimg.org/6l6564fj3/fuuuuck.png That's 2 FPS. I hope Blizzard will add Vulkan or DX12 support.
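The usual CPU-side mitigation under DX11 is to cut the number of draw calls rather than thread the submission, typically by batching instances that share a mesh and material. A minimal C++ sketch of that idea, with a hypothetical draw_instanced() standing in for the real driver call:

#include <cstdio>
#include <map>
#include <utility>
#include <vector>

// Hypothetical renderer types, just to illustrate the batching idea.
struct Mesh { int id; };
struct Material { int id; };
struct Instance { Mesh mesh; Material material; float x, y, z; };

// Stand-in for a driver call: in DX11 each of these funnels through
// one CPU thread, so the count per frame is what hurts.
void draw_instanced(const Mesh& m, const Material& mat, size_t count) {
    std::printf("draw mesh %d / material %d, %zu instances (1 call)\n",
                m.id, mat.id, count);
}

int main() {
    // 10,000 units but only 3 distinct mesh/material combinations.
    std::vector<Instance> scene;
    for (int i = 0; i < 10000; ++i)
        scene.push_back({ {i % 3}, {i % 3}, 0.f, 0.f, 0.f });

    // Naive submission would be one draw call per unit: 10,000 calls.
    // Batched submission groups by (mesh, material): 3 calls.
    std::map<std::pair<int, int>, size_t> batches;
    for (const auto& inst : scene)
        ++batches[{inst.mesh.id, inst.material.id}];

    for (const auto& [key, count] : batches)
        draw_instanced({key.first}, {key.second}, count);
}

DX12 and Vulkan attack the same bottleneck from the other side, by letting multiple threads record command lists, so the per-call CPU cost stops being a single-thread problem.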
> So this leaves us with only one way to improve, which is better per-core performance (IPC)? New architectures can solve all those latency issues you're talking about. It's just that devs code poorly, so only one core is used and the others sit idle.
A single core being used should be considered a defect today. Loading 4 cores to 50%+ on average should be the standard in games, but getting 16 or 32 cores loaded at 80%+ in a game is borderline impossible right now, at least at high frame rates. Decreasing sync latency will help a lot. Even now it's possible to benefit from distributing computations across cores, but it's much more demanding than assigning each core a dedicated task. The latter is what's done today, since it's easier; you don't need devs that good to achieve it. And it makes me angry that a game like Diablo 3 can slow down on a fast computer while the graphics card is barely loaded and all cores but one or two are idling.

On the other hand, such CPUs are great for servers, where each core can work for a different user, with close to no dependencies between them and much greater delay tolerance. I guess this will change once low-speed 8/16-core CPUs become the new default for low/mid-end computers and there is no other way to scale performance than to learn how to distribute the computations.
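For the "distributing computations across cores" model (as opposed to assigning each core a dedicated task), a minimal C++ sketch of a shared job queue, assuming nothing about any real engine:

#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A minimal shared job queue: any core takes the next job, instead of
// each core owning one fixed subsystem.
class JobQueue {
public:
    void push(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
    // Workers block here until a job arrives or the queue is closed and drained.
    bool pop(std::function<void()>& job) {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return closed_ || !jobs_.empty(); });
        if (jobs_.empty()) return false;
        job = std::move(jobs_.front());
        jobs_.pop();
        return true;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    bool closed_ = false;
};

int main() {
    JobQueue q;
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < 4; ++i)
        workers.emplace_back([&q] {
            std::function<void()> job;
            while (q.pop(job)) job();  // whichever core is free runs it
        });

    for (int j = 0; j < 16; ++j)
        q.push([j] { std::printf("job %d done\n", j); });

    q.close();
    for (auto& w : workers) w.join();
}

Real job systems go further with work stealing and lock-free queues to cut the sync latency discussed above, but the shape is the same: work flows to idle cores instead of cores waiting on fixed roles.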
And we still don't have mobile hex-cores for whatever reason...
> And we still don't have mobile hex-cores for whatever reason...
That's all we need. Up to 10 now, but for 3 different states...