Intel processors: Comet Lake and Elkhart Lake in 2020 (roadmap)

sverek:

14nm in 2020... Jesus Christ, Intel.
Horrible.
DeskStar:

I sure will have fun with the most powerful CPU at any given time. Power need not be a constraint when you want performance, as long as it's allowed to be tapped into. That's why I personally haven't jumped onto AMD's bandwagon yet: their offerings aren't up to my needs when Intel offers more performance to be unlocked, just at the cost of better cooling... I'll take that, even if it is a bit more money in the beginning. To me, longevity wins over a 40+ watt load difference when there's a huge performance delta to begin with. I have 47 fans in my system and it's quieter than a person whispering to you from three feet away. I guess it all comes down to what you spend your money on and what you perceive to be factual, because noise is easily mitigated by doing the right things... like buying the right fans. The fans in my system cost more than some people's entire SYSTEM, so I say again: I guess it's all in what you "want" as opposed to what you just do.
If I ever needed that much cooling, I would hook the PC up to my house's water heater for a practically endless supply of water for the water-cooling loop. This kind of monster PC is exactly what AMD wants to replace: a 32-core system (or the coming 64-core one) with four Quadros or Instinct cards would be faster than this monster in professional workloads. And if you have four Titans, I hope you do professional workloads, because the game support would be very hit or miss.
Silva:

AMD's GPU division is far behind Nvidia, but if they apply the same technical expertise to their future GPUs (after Navi, maybe), they might catch Nvidia (if they're not planning something multi-chip already).
That approach doesn't work with GPUs so easily. GPUs may have thousands of cores, but they need to work very closely together, so splitting them across different chips would present huge bottlenecks. That's what you see with CrossFire or SLI. If they can't figure out how to do that automatically, without the application knowing, it won't work. And that's a huge problem for them right now.
nevcairiel:

That's what you see with CrossFire or SLI. If they can't figure out how to do that automatically, without the application knowing, it won't work. And that's a huge problem for them right now.
I see why you're saying that, but I don't think it's as bad as you might think. There were a slew of reasons why CrossFire and SLI didn't work, such as:
* Consistency: both GPUs needed to be as similar as possible, which was a problem since so many AIB partners made their own adjustments to the hardware. This becomes an even greater problem when one GPU is overheating because the other one is sitting just below it.
* Bandwidth: there's just too much data to communicate between the GPUs over PCIe or the SLI link. This is why NVLink is basically a whole discrete set of PCIe lanes.
* Latency: there's a lot of wasted time synchronizing the GPUs.
* Software: it was just too cumbersome to set up. Although you could force-enable multi-GPU setups and get overall positive results, this required some manual tweaking, which most people didn't know how to do properly.
Back in the days of GPUs like the R9 295X2 or the GTX 690, that was a step in the right direction, but it wasn't good enough, because it was basically just two separate GPUs with separate VRAM slapped onto the same AIB. But I believe GPUs could be designed with something like Infinity Fabric in mind, where you have a central hub that does all the syncing and relays data between each GPU die. There's no "master/primary" GPU, the memory isn't split between the dies, and software should be able to handle it as though it were just one giant monolithic GPU.
TL;DR: what AMD did with Ryzen wasn't just (in the words of Intel) gluing a couple of dies together and calling it a day. There's a backbone that moderates everything so it runs seamlessly regardless of which core(s) you're using. The same principle can be used with GPUs.
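(As a purely illustrative aside: below is a minimal Python sketch of the "central hub" idea described in that post. Nothing here is a real driver or AMD API; the die count, the least-loaded scheduling policy, and the timing model are all invented for illustration. The only point is that the application submits to one queue and never learns which die runs the work.)

```python
from dataclasses import dataclass

@dataclass
class GpuDie:
    """One die in the package; it only sees work the hub hands it."""
    name: str
    busy_until: float = 0.0

class HubGpu:
    """Toy 'hub' in the spirit of the post above: software submits to one
    queue, and the hub decides which physical die runs each kernel, so the
    package looks like a single monolithic GPU to the application."""

    def __init__(self, dies):
        self.dies = dies

    def submit(self, kernel: str, duration: float, now: float) -> str:
        die = min(self.dies, key=lambda d: d.busy_until)  # least-loaded die wins
        die.busy_until = max(die.busy_until, now) + duration
        return f"{kernel:<12} -> {die.name} (busy until t={die.busy_until:.1f})"

gpu = HubGpu([GpuDie(f"die{i}") for i in range(4)])
for t, kernel in enumerate(["shadow_pass", "gbuffer", "lighting", "post", "present"]):
    print(gpu.submit(kernel, duration=2.0, now=float(t)))
```

Of course, this toy ignores exactly the hard parts the rest of the thread brings up: shared VRAM, cache coherence, and the bandwidth of the links between the dies.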
TLD LARS:

If I ever needed that much cooling, I would hook the PC up to my house's water heater for a practically endless supply of water for the water-cooling loop. This kind of monster PC is exactly what AMD wants to replace: a 32-core system (or the coming 64-core one) with four Quadros or Instinct cards would be faster than this monster in professional workloads. And if you have four Titans, I hope you do professional workloads, because the game support would be very hit or miss.
Too many filters would be needed to do such a thing, and I wouldn't want to restrict flow to the rest of the house... HA!!! Not to mention one needs biocides and stabilizers in their loop to attain longevity.
Koniakki:

/offtopic GEEZUS!!! And here I thought I had too many fans in my TT X9, but then I looked and saw you have a CaseLabs STH10. OK, 47 fans make sense now! That case is pure pr0n!
Yes sir... thank you sir.... (Old Gregg) And she's got the pedestal to go with the rest of her, for that extra room for the rads. Case of cases, if you ask me. If anyone could help me out with locating a tempered glass window for this beast, I'd be more than greatly appreciative, that's for sure....
nevcairiel:

That approach doesn't work with GPUs so easily. GPUs may have thousands of cores, but they need to work very closely together, so splitting them across different chips would present huge bottlenecks. That's what you see with CrossFire or SLI. If they can't figure out how to do that automatically, without the application knowing, it won't work. And that's a huge problem for them right now.
schmidtbag:

I see why you're saying that, but I don't think it's as bad as you might think. There were a slew of reasons why CrossFire and SLI didn't work, such as:
* Consistency: both GPUs needed to be as similar as possible, which was a problem since so many AIB partners made their own adjustments to the hardware. This becomes an even greater problem when one GPU is overheating because the other one is sitting just below it.
* Bandwidth: there's just too much data to communicate between the GPUs over PCIe or the SLI link. This is why NVLink is basically a whole discrete set of PCIe lanes.
* Latency: there's a lot of wasted time synchronizing the GPUs.
* Software: it was just too cumbersome to set up. Although you could force-enable multi-GPU setups and get overall positive results, this required some manual tweaking, which most people didn't know how to do properly.
Back in the days of GPUs like the R9 295X2 or the GTX 690, that was a step in the right direction, but it wasn't good enough, because it was basically just two separate GPUs with separate VRAM slapped onto the same AIB. But I believe GPUs could be designed with something like Infinity Fabric in mind, where you have a central hub that does all the syncing and relays data between each GPU die. There's no "master/primary" GPU, the memory isn't split between the dies, and software should be able to handle it as though it were just one giant monolithic GPU.
TL;DR: what AMD did with Ryzen wasn't just (in the words of Intel) gluing a couple of dies together and calling it a day. There's a backbone that moderates everything so it runs seamlessly regardless of which core(s) you're using. The same principle can be used with GPUs.
Crossfire/SLI works completely differently from any proposed MCM-GPU setup. Crossfire/SLI is dying because the majority of modern shaders use inter-frame dependencies to speed up processing, which creates massive overhead and scheduling/synchronization issues in multi-GPU setups. As for MCM setups, they're being worked on by both companies. Nvidia and others have already published several research papers on the topic: https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf https://hps.ece.utexas.edu/people/ebrahimi/pub/milic_micro17.pdf
"I have 47 fans" is the "I am vegan" of the IT industry 😀 :D
Jespi:

Tbh I don't understand people crying about 14nm. It's just a number, for frack's sake; what matters more is the power, which these monsters clearly have. You keep whining that Intel is using its 14nm++++++++++++ process, so what? Do you get 60% lower power with AMD? No? Is its TDP 300W? No. Is the max clock stuck at 3.5GHz? No... so where the frack is the problem? (Oh, and I own a Ryzen 2700, before you tell me I'm a brainwashed Intel fan.)
This is why: https://www.servethehome.com/intel-xeon-platinum-9200-formerly-cascade-lake-ap-launched/ Intel just launched a 400W TDP server chip, presumably to try and compete with Epyc 2. 14nm means increasing TDPs in order to keep performance up against whatever AMD is going to launch; 14nm is a very old node at this point. I wouldn't be surprised if Intel launches desktop parts in the >200W TDP range over the next year, since 10nm is MIA.
DeskStar:

I sure will have fun with the most powerful CPU at any given time. Power need not be a constraint when you want performance, as long as it's allowed to be tapped into. That's why I personally haven't jumped onto AMD's bandwagon yet: their offerings aren't up to my needs when Intel offers more performance to be unlocked, just at the cost of better cooling... I'll take that, even if it is a bit more money in the beginning. To me, longevity wins over a 40+ watt load difference when there's a huge performance delta to begin with. I have 47 fans in my system and it's quieter than a person whispering to you from three feet away. I guess it all comes down to what you spend your money on and what you perceive to be factual, because noise is easily mitigated by doing the right things... like buying the right fans. The fans in my system cost more than some people's entire SYSTEM, so I say again: I guess it's all in what you "want" as opposed to what you just do.
....lol It's posts like this that make me wish these forums had a downvoting system; there's so much facepalm here. I mean, sure, you're free to do as you want, but there's no situation... ever... where 47 fans would be needed in a system. You could have a dual-socket build with 250 watts per CPU and 3 Titans, and still not need 47 fans... no matter how slow, no matter how fast. Plus 47 fans, in any configuration, with the inherently mish-mash airflow you'd have to have, would produce audible sound no matter what.

Then there's the validity of your statement. How, pray tell, do you fit 47 fans in a PC? The amount of space 47 fans would take means you'd have no case; instead, your case would just be one conglomerate of fans. I'm having a hard time even finding a case with more than 10 fan mounts. There are some, but let's say you got one with 10 fan mounts, and you got 4 GPUs with 3 fans each and two CPUs with 2 fans each; that gives you... 26 fans. Where are the other 21 fans? Even if you bumped the case up to 20 fan mounts (like the Thermaltake Core X9 mentioned above, or is it 23 fan mounts on that one? I'm reading 20 in the description, but it could be 23), you'd still be 16 fans short (or 13 if it is indeed 23 on the Thermaltake Core X9).

But I guess I can't rule out the possibility that what you're saying is true, as this guy has 66 fans: https://www.wired.com/images_blogs/gadgetlab/fan_pc_3.png Very um... practical... and useful... not overkill for the sake of overkill at all, definitely not useless.

And this isn't even mentioning that depending on the wattage your fans run at, you're using even more power, which again, I get that you say you don't care about, but that's another 47-94 extra watts right there, supposedly to cool things down......

Hey guys, look at my car! Most cars have 4 wheels, but mine... it's got 8! https://s1.cdn.autoevolution.com/images/news/gallery/this-toyota-has-way-too-many-wheels_4.jpg Or better yet, look at my bicycle! https://farm1.staticflickr.com/65/196422054_6f7018270e_b.jpg OH YEAH BABY LOOK AT ME!
I have no idea why people have something against fantastic cases.
Silva:

Also, making a bigger chip (supposed to have 10 cores) on the same 14nm node will give worse yields than the last generation (due to bigger die size, space on the wafer, and math), which in turn will drive the cost of those chips up. If you think Intel is expensive now, wait for the next generation. New nodes are getting more and more expensive, and the only way to get good yields is by making small chips. Preferably, make small chips with the important stuff and put the I/O on older, efficient nodes. Wait, isn't that what AMD is proposing with Zen 2? Yeah, Intel is the dinosaur in the room. AMD's GPU division is far behind Nvidia, but if they apply the same technical expertise to their future GPUs (after Navi, maybe), they might catch Nvidia (if they're not planning something multi-chip already).
Many SoCs/chips still use bigger nodes. Yeah, it yields a lower amount per wafer, but other than that it shouldn't make pricing different or higher on previous/older nodes; as production matures, the yield rate on an older node is actually better than on a new node. On the other hand, like you already mentioned, new nodes mean they need to invest in building the production line, and even if they eventually get better yields, that only happens once the process matures; in the early stages the yield will be low, which is what always makes new nodes more expensive. So saying 14nm will be more expensive than 10nm is somewhat incorrect.
schmidtbag:

There's a backbone that moderates everything so it runs seamlessly regardless of which core(s) you're using. The same principle can be used with GPUs.
If you abstract a principle to a high enough level, you can use it for anything. But the details needed to make it work are exponentially more complex with a GPU, since there is far more interaction between the cores there.
Denial:

Crossfire/SLI is dying because the majority of modern shaders use inter-frame dependencies to speed up processing, which creates massive overhead and scheduling/synchronization issues in multi-GPU setups. As for MCM setups, they're being worked on by both companies. Nvidia and others have already published several research papers on the topic: https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf https://hps.ece.utexas.edu/people/ebrahimi/pub/milic_micro17.pdf
I would be surprised if they weren't working on it; making several small chips is always better than one big one. However, that doesn't mean they are anywhere close to making it work seamlessly, because the bottlenecks from the overhead of keeping multiple chips synchronized and talking to each other are quite real.
nevcairiel:

That approach doesn't work with GPUs so easily. GPUs may have thousands of cores, but they need to work very closely together, so splitting them across different chips would present huge bottlenecks. That's what you see with CrossFire or SLI. If they can't figure out how to do that automatically, without the application knowing, it won't work. And that's a huge problem for them right now.
The problem with CrossFire and SLI is in the software: the developer has to program for the feature. Some games scale really well, up to 90%, while others just run worse. It's not a hardware-related issue. If the "CrossFire" were done internally at the GPU level, you could split the load (like the software already does) on the board but only expose one GPU to the software. Eventually we will have multi-die GPUs; Nvidia simply can't keep making them bigger, as not everyone has deep pockets.
slyphnier:

Many SoCs/chips still use bigger nodes. Yeah, it yields a lower amount per wafer, but other than that it shouldn't make pricing different or higher on previous/older nodes; as production matures, the yield rate on an older node is actually better than on a new node. On the other hand, like you already mentioned, new nodes mean they need to invest in building the production line, and even if they eventually get better yields, that only happens once the process matures; in the early stages the yield will be low, which is what always makes new nodes more expensive. So saying 14nm will be more expensive than 10nm is somewhat incorrect.
When top-of-the-line performance isn't in question, old nodes work just fine: they're efficient to produce and cheap to sell. You don't understand how yields work, do you? On the same process node, if you make a bigger chip you will have lower yields: the chip takes more space on the wafer and has a bigger chance of being faulty. Plus, if the chip takes more space on the wafer, you will have fewer working chips from that wafer over which to divide the cost, making the product more expensive (even if the yield is 90%+). Bigger chips will always be more expensive, even if fabricated on a mature process; it's math. This is one of the reasons Nvidia is so expensive: they're not getting better, they're just getting bigger.
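(A rough worked example of that yield argument, using the textbook dies-per-wafer approximation and a simple Poisson defect model. The wafer cost and defect density below are made-up illustrative numbers, not foundry data.)

```python
import math

# Hypothetical numbers for illustration only (not real foundry data).
WAFER_DIAMETER_MM = 300.0       # standard 300 mm wafer
WAFER_COST_USD = 7000.0         # assumed wafer cost
DEFECT_DENSITY_PER_MM2 = 0.001  # assumed defect density for a mature node

def dies_per_wafer(die_area_mm2: float) -> int:
    """Classic approximation for how many whole dies fit on a round wafer."""
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2: float) -> float:
    """Poisson yield model: a bigger die catches more defects."""
    return math.exp(-DEFECT_DENSITY_PER_MM2 * die_area_mm2)

def cost_per_good_die(die_area_mm2: float) -> float:
    good_dies = dies_per_wafer(die_area_mm2) * yield_rate(die_area_mm2)
    return WAFER_COST_USD / good_dies

for area in (80, 180, 400, 750):  # e.g. chiplet vs. mid-range die vs. huge GPU die
    print(f"{area:>4} mm^2: {dies_per_wafer(area):>4} dies/wafer, "
          f"yield {yield_rate(area):5.1%}, "
          f"cost per good die ${cost_per_good_die(area):7.2f}")
```

Even at a fixed, mature defect density, the cost per good die grows much faster than the die area, which is exactly the point being made here.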
Silva:

If the "Crossfire" is done internally at the GPU level, you could split the load (like the software already does) in the board but only show one GPU to the software. Eventually we will have multi die GPUs, Nvidia simply can't keep making them bigger as not everyone has deep pockets.
The point is that this isn't very easy. CrossFire doesn't only have scaling problems because the software sucks, but also because more advanced graphics features are just fundamentally incompatible with its approach. If you could just make a chip that does all the CrossFire/SLI work and have it scale 90-100% all the time, then someone would have done it already. But there is no easy solution to this. A future multi-die GPU wouldn't work like CrossFire/SLI works today (no matter whether the management is in software or hardware). It would have to be much smarter than that, and allow the chips to work together much more closely, sharing caches, VRAM and all that. This is an exceptionally hard problem to solve, and most importantly, much harder than for CPUs, because CPU cores are designed to work relatively independently.
nevcairiel:

The point is that this isn't very easy. CrossFire doesn't only have scaling problems because the software sucks, but also because more advanced graphics features are just fundamentally incompatible with its approach. If you could just make a chip that does all the CrossFire/SLI work and have it scale 90-100% all the time, then someone would have done it already. But there is no easy solution to this. A future multi-die GPU wouldn't work like CrossFire/SLI works today (no matter whether the management is in software or hardware). It would have to be much smarter than that, and allow the chips to work together much more closely, sharing caches, VRAM and all that. This is an exceptionally hard problem to solve, and most importantly, much harder than for CPUs, because CPU cores are designed to work relatively independently.
Yeah, I strongly suggest people read the first white paper I linked here: https://research.nvidia.com/sites/default/files/publications/ISCA_2017_MCMGPU.pdf It's definitely more complex to do MCM on a GPU than on a CPU due to extremely complex scheduling requirements. Also, like you said, it operates nothing like modern SLI/Crossfire. It's probably a few generations out, and even in their theoretical approaches it still doesn't scale 100% and requires an incredibly high-bandwidth bus, far more than what Infinity Fabric is currently capable of.
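(To put rough numbers on "incredibly high bandwidth", here is a back-of-envelope sketch. The local-memory and per-link figures are ballpark public numbers for hardware of that era, and the share of traffic that crosses dies is a pure assumption, not a value taken from the linked papers.)

```python
# Back-of-envelope only. Ballpark public figures plus an assumed
# cross-die traffic share; not data from the linked MCM-GPU paper.
LOCAL_HBM2_GBPS = 900   # roughly a Tesla V100's local HBM2 bandwidth
LINK_GBPS = 50          # roughly one NVLink 2.0-class link, both directions combined
CROSS_DIE_SHARE = 0.25  # assume 25% of memory traffic targets another die

cross_die_traffic = LOCAL_HBM2_GBPS * CROSS_DIE_SHARE
links_needed = -(-cross_die_traffic // LINK_GBPS)  # ceiling division

print(f"Assumed cross-die traffic: {cross_die_traffic:.0f} GB/s")
print(f"NVLink-class links needed just for that: {links_needed:.0f}")
```

Much of the NVIDIA paper is about driving that cross-die share down with locality-aware scheduling and caching, precisely because inter-die links are so much slower than local memory.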
Almost 5 years of pretty much rebranding the same thing. Good job, Intel. /s
fantaskarsef:

You are basically right, and I for one am often not impressed by useless advances that are only there to sell this year's model over last year's, but with Intel the opposite is true: Intel has made a habit of offering little to no improvement and/or innovation. So don't ask me if I'd get a 60% lower-power CPU with AMD; ask Intel if we couldn't have Intel's performance at 60W less if they finally advanced their production nodes. And there the answer could be yes. So your statement should actually be directed at Intel, not at people who want Intel to offer improvement. 😉
exactly.
Aura89:

....lol It's posts like this that make me wish these forums had a downvoting system; there's so much facepalm here. I mean, sure, you're free to do as you want, but there's no situation... ever... where 47 fans would be needed in a system. You could have a dual-socket build with 250 watts per CPU and 3 Titans, and still not need 47 fans... no matter how slow, no matter how fast. Plus 47 fans, in any configuration, with the inherently mish-mash airflow you'd have to have, would produce audible sound no matter what.

Then there's the validity of your statement. How, pray tell, do you fit 47 fans in a PC? The amount of space 47 fans would take means you'd have no case; instead, your case would just be one conglomerate of fans. I'm having a hard time even finding a case with more than 10 fan mounts. There are some, but let's say you got one with 10 fan mounts, and you got 4 GPUs with 3 fans each and two CPUs with 2 fans each; that gives you... 26 fans. Where are the other 21 fans? Even if you bumped the case up to 20 fan mounts (like the Thermaltake Core X9 mentioned above, or is it 23 fan mounts on that one? I'm reading 20 in the description, but it could be 23), you'd still be 16 fans short (or 13 if it is indeed 23 on the Thermaltake Core X9).

But I guess I can't rule out the possibility that what you're saying is true, as this guy has 66 fans: https://www.wired.com/images_blogs/gadgetlab/fan_pc_3.png Very um... practical... and useful... not overkill for the sake of overkill at all, definitely not useless.

And this isn't even mentioning that depending on the wattage your fans run at, you're using even more power, which again, I get that you say you don't care about, but that's another 47-94 extra watts right there, supposedly to cool things down......

Hey guys, look at my car! Most cars have 4 wheels, but mine... it's got 8! https://s1.cdn.autoevolution.com/images/news/gallery/this-toyota-has-way-too-many-wheels_4.jpg Or better yet, look at my bicycle! https://farm1.staticflickr.com/65/196422054_6f7018270e_b.jpg OH YEAH BABY LOOK AT ME!
Wow..... You were interested, weren't you?!? I guess, yeah, if you must know, the computer is in the pic there, guy, and it takes that many to cool the radiators sandwiched between them..... If anything, though, thanks for the laugh.... Not sure if you're serious or not, but you did take the time to post all of this... Good luck with your downvoting there, guy.... FACKIN LOSER......?!?