Der8auer delids Intel 12- and 18-core Skylake-X CPUs

https://forums.guru3d.com/data/avatars/m/229/229509.jpg
Why would anyone pay $2K for what is an inferior product?
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
tunejunky:

I do not dispute what you are saying. However, it applies to the mass market, not the HEDT market with unlocked processors, where Intel is "shocked, so shocked that overclocking may occur". This is a market that Intel *created*, one that usually shows off their latest and greatest before it trickles down to the mass-market locked processors.
The vast majority of HEDT parts end up in workstations, not overclocked gamer computers, so I'd argue that workstation users would prefer a more reliable chip to one that has potential to fail due to thermal cracking. Also, X99 was a generation behind the mainstream throughout its entire product cycle, and x299 is continuing that tradition, so I'm not sure why you think it's "showing off their latest and greatest".
CPC_RedDawn:

As someone who delidded their i7 7700K and saw an INSANE temp drop of around 28C, this idea that TIM is OK is a joke. The stock stuff was like dust coming off - literally some of the cheapest material I have ever seen. I used Thermal Grizzly liquid metal between the die and IHS, reglued it with high-temp silicone, and then used IC Diamond on top of the IHS. A 28C drop at full load at stock clock speeds is insane! This not only allows lower voltages to be used, it keeps the chip healthy and lasting longer. We all know that overclocks can become unstable over time and need more volts.

AMD is using solder on every single chip with Ryzen, TR, and EPYC, while Intel is using basic TIM on every single chip yet selling them for nearly 50% more when they only perform about 15-20% better in single-threaded IPC scenarios, if you are lucky. Intel, at the moment, is a complete joke; their entire lineup is simply not worth it. At this point in time I cannot and will not recommend Intel CPUs to anyone, from gaming to production. AMD simply have Intel by the balls in terms of price, performance, and future proofing, with AM4 being supported until at the very least 2020 and rumours of 10-core CPUs coming to AM4 as well. Intel are in some seriously bad waters. All Intel can hope for now is that AMD don't make strides in the server market or the OEM market. If they do, I can see Intel going the way of IBM and becoming a server-only company.
It's been shown multiple times that the majority of the temperature decrease from a delid comes from the removal of the gap, not from the quality of the TIM. Further, Der8auer has delidded soldered 5960Xs and saw 6-9 degree temp drops by swapping to liquid metal, and most X299 chips only see a 10-15 degree drop from delidding. So at most the delid difference is only a few degrees with the latest Intel X299 processors.

And again, without knowing the failure rates due to thermal problems, or knowing how much the increased heat affects the life span of the CPU, we have no way of knowing if the trade is worth it or not - especially over the 100M+ processors that Intel ships.

Bringing up AMD's use of solder is also semi-irrelevant for a few reasons. For one, they don't control the fabs, so we have no way of knowing if they are requesting retooling from GF/Samsung and not getting it. They also may not need it - if anyone had actually bothered to read the Intel paper on the thermal cracking, it points out that heat density is the main cause of the problem. Intel's chips, especially with AVX2/512, use far more power per mm2 than AMD's and thus generate far more heat than AMD's per a given area. Differing processes and manufacturing methods might also factor into whether it's necessary or not - Intel's 14nm is far denser than GF/Samsung's 14nm. It's possible that due to the increased density, Intel only recently crossed a threshold where it was determined that the TIM method was more reliable given the number of units shipped.

Again, I don't know if this is the case or not - but a lot of it makes sense given what Intel and others have said about solder cracking issues. It also bothers me that everyone is so quick to write TIM off as "Intel being cheap" and not even discussing any of the reasons why it might make sense overall to switch to it - especially when the cost of retooling/validating all their assembly equipment for TIM/glue probably costs far more than what they are saving by using TIM thus far.
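For anyone who wants to sanity-check the "gap matters more than the paste" claim above, here is a minimal back-of-the-envelope sketch using 1-D conduction (dT = P·t / (k·A)). Every input - die area, power, bond-line thicknesses, conductivities - is an illustrative assumption, not a measured figure for any specific chip.

[code]
# Rough 1-D conduction estimate: dT = P * t / (k * A)
# All inputs are illustrative assumptions, not measured Intel specs.

def layer_delta_t(power_w, thickness_m, conductivity_w_mk, area_m2):
    """Temperature rise across a flat layer (Fourier's law, 1-D)."""
    return power_w * thickness_m / (conductivity_w_mk * area_m2)

die_area_m2 = 300e-6   # assumed ~300 mm^2 die, expressed in m^2
power_w = 200.0        # assumed package power under heavy load

# Stock: thick bond line because the IHS sits up on its adhesive (assumed 150 um)
stock_paste = layer_delta_t(power_w, 150e-6, 8.0, die_area_m2)   # paste, k ~ 8 W/mK
# Delidded: same paste but a much thinner bond line (assumed 25 um)
thin_paste  = layer_delta_t(power_w, 25e-6, 8.0, die_area_m2)
# Delidded + liquid metal: thin bond line and k ~ 40 W/mK
liquid      = layer_delta_t(power_w, 25e-6, 40.0, die_area_m2)

print(f"stock paste, 150 um gap : {stock_paste:4.1f} C across the interface")
print(f"same paste,   25 um gap : {thin_paste:4.1f} C")
print(f"liquid metal, 25 um gap : {liquid:4.1f} C")
# Shrinking the gap accounts for most of the gain; swapping the material
# only matters much once the gap is already thin - which is the point above.
[/code]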
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
Denial:

The vast majority of HEDT parts end up in workstations, not overclocked gamer computers, so I'd argue that workstation users would prefer a more reliable chip to one that has potential to fail due to thermal cracking. Also, X99 was a generation behind the mainstream throughout its entire product cycle, and x299 is continuing that tradition, so I'm not sure why you think it's "showing off their latest and greatest".
And yet all their previous HEDT chips used solder - it's only with Skylake-X that they've switched to using paste. Their Xeon server chips also still use solder so reliability is not in question (if solder was unreliable then the Xeons would be the first to use paste).
Denial:

Again, I don't know if this is the case or not - but a lot of it makes sense given what Intel and others have said about solder cracking issues. It also bothers me that everyone is so quick to write TIM off as "Intel being cheap" and not even discussing any of the reasons why it might make sense overall to switch to it - especially when the cost of retooling/validating all their assembly equipment for TIM/glue probably costs far more than what they are saving by using TIM thus far.
Intel has been using paste for years on their consumer chips, so they already have the equipment for it. As for the reason, my theory is that they want to limit overclocking on their consumer and HEDT chips. There's a reason why the Sandy Bridge CPUs were so beloved - they were fantastic overclockers. The problem is that people will not upgrade to newer CPUs if the one they have can overclock higher, so they began using paste to limit the OC headroom. It's also obvious that Skylake-X infringes on what is traditionally Xeon territory, and Intel wants to do what they can to deter potential Xeon customers from buying the cheaper Core i9 chips.
https://forums.guru3d.com/data/avatars/m/115/115462.jpg
Having to delid a $2000 CPU to get better thermals is absolutely insane. At that ultra-premium price tag Intel should have taken care of everything themselves... they are at such a level of overconfidence that they deserved the trashing they got (and hopefully AMD will keep it up with the next generation too).
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
KissSh0t:

I know this is probably a stupid question, but I'm honestly not sure... do Intel make CPUs for gamers that do not have an integrated GPU?
That depends on what you consider a CPU for a gamer... it's my main reason not to use the "mainstream" platforms; I've been sticking to Intel's "enthusiast" platform for my last two rigs (went LGA 1366 and now LGA 2011-3). Honestly, I wouldn't bother with a CPU that has an integrated GPU personally - I don't like paying for something I don't use (as well as "wasting" die space and probably having worse thermal dissipation when overclocked).
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
D3M1G0D:

And yet all their previous HEDT chips used solder - it's only with Skylake-X that they've switched to using paste. Their Xeon server chips also still use solder so reliability is not in question (if solder was unreliable then the Xeons would be the first to use paste). Intel has been using paste for years on their consumer chips, so they already have the equipment for it. As for the reason, my theory is that they want to limit overclocking on their consumer and HEDT chips. There's a reason why the Sandy Bridge CPUs were so beloved - they were fantastic overclockers. The problem is that people will not upgrade to newer CPUs if the one they have can overclock higher, so they began using paste to limit the OC headroom. It's also obvious that Skylake-X infringes on what is traditionally Xeon territory, and Intel wants to do what they can to deter potential Xeon customers from buying the cheaper Core i9 chips.
As I said half a dozen times over multiple threads, Intel's technical explanation for the change is heat density - Skylake-X can use nearly 40% more power (With AVX512) in a 10% increased die size over Broadwell-E and that technical brief came out over a decade ago, so it's not like they are retroactively covering their tracks. They explained that as transistors shrunk and power consumption went up but die size and thus area to cool decreased, it would become more of a problem. They explained that they had methods of improving the solder to fix the issue, which explains why they stuck with solder for some time after the paper was published.

I know they've been using paste for years - I delidded my 3770K, my 4790K and now my 7820X. Sandy Bridge was a fantastic overclocker because it was the last planar chip out of Intel; it was discussed multiple times prior to Ivy Bridge's launch that overclocking would take a hit due to the transition to FinFET. http://www.anandtech.com/show/5771/the-intel-ivy-bridge-core-i7-3770k-review/4

Also, on my three separate delidded chips the maximum overclock I was able to get didn't change much after delidding. My 3770K showed the largest improvement, 4.5 to 4.7. My 4790K only went from 4.7 to 4.8, and my 7820X is stuck at 4.8 without me blowing past what I consider safe voltage. So if their idea was to limit overclocking, it's not very effective.

Xeon-W, which is the Xeon line based on the Skylake-X package, isn't out yet - I don't think anyone knows whether those are soldered or not.
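To put that heat-density claim into rough numbers, here is a minimal sketch that just applies the deltas quoted above (roughly 40% more power in ~10% more die area); the absolute Broadwell-E baseline figures are assumptions for illustration only.

[code]
# Power-density comparison using the deltas quoted above.
# The Broadwell-E baseline numbers are rough assumptions, not official specs.
bdw_e_power_w = 180.0    # assumed Broadwell-E package power under AVX load
bdw_e_die_mm2 = 246.0    # assumed Broadwell-E HCC die area in mm^2

skl_x_power_w = bdw_e_power_w * 1.40   # "nearly 40% more power (with AVX-512)"
skl_x_die_mm2 = bdw_e_die_mm2 * 1.10   # "10% increased die size"

bdw_density = bdw_e_power_w / bdw_e_die_mm2
skl_density = skl_x_power_w / skl_x_die_mm2

print(f"Broadwell-E: {bdw_density:.2f} W/mm^2")
print(f"Skylake-X  : {skl_density:.2f} W/mm^2 (+{(skl_density / bdw_density - 1) * 100:.0f}%)")
# The ratio is 1.40 / 1.10 ~= 1.27, i.e. roughly 27% more heat per unit of
# area, regardless of which absolute baseline you assume.
[/code]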
Solfaur:

Having to delid a $2000 CPU to get better thermals is absolutely insane. At that ultra-premium price tag Intel should have taken care of everything themselves... they are at such a level of overconfidence that they deserved the trashing they got (and hopefully AMD will keep it up with the next generation too).
You can delid a soldered CPU and get better thermals and with x299 the delid difference is far less than previous architectures.

Edit: I want to be clear - I don't think the x299 chips are good value; I don't even consider my 7820X to be good value. I think the 1700X is the better choice, and TR if you're going above 8 cores. But I also don't subscribe to the idea of "I feel like Intel is a corrupt company and because of that this is why I think they are doing 'x'". I don't know whether the use of TIM is good or bad over a large scale of processors. For me, I'd obviously prefer a soldered, working processor - but for large customers, or Intel's market as a whole, it may make way more sense for Intel to favor reliability over the concerns of a small portion of overclockers. Without knowing the numbers on failure rates, life spans of working CPUs, etc., I don't see how anyone can truly conclude anything other than their hatred for Intel - which is arguably fair because they are a garbage company, but that doesn't make a conclusion valid.
https://forums.guru3d.com/data/avatars/m/268/268759.jpg
airbud7:

Its^ Skylake-X
yeah, then "Skylake-X" is a modded Kabylake (e.g. 7600/7700K) CPU, and then Skylake and Kabylake are exactly the same arch lol
https://forums.guru3d.com/data/avatars/m/269/269912.jpg
KissSh0t:

I know this is probably a stupid question, but I'm honestly not sure... do Intel make CPUs for gamers that do not have an integrated GPU?
Short answer is no. And remember the only stupid question is the question not asked. 😉
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
Denial:

As I said half a dozen times over multiple threads, Intel's technical explanation for the change is heat density - Skylake-X can use nearly 40% more power (With AVX512) in a 10% increased die size over Broadwell-E and that technical brief came out over a decade ago, so it's not like they are retroactively covering their tracks. They explained that as transistors shrunk and power consumption went up but die size and thus area to cool decreased, it would become more of a problem. They explained that they had methods of improving the solder to fix the issue, which explains why they stuck with solder for some time after the paper was published.
They can easily accommodate the increased heat from AVX-512 by lowering the base clock speed (note that the base clock on the Core i9 is already far below the turbo speed). Note that they also increased the core count dramatically for the Core i9 so the die size shouldn't be an issue. Also, if increased heat is the problem then it would hardly make sense to go with a material which is less effective at transferring heat away from the CPU (that's some backwards thinking right there).
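On the "just lower the base clock" point, the usual first-order relation for dynamic power is P ≈ C·V²·f, so dropping clocks (especially with even a small voltage drop) buys back power linearly in f and quadratically in V. A minimal sketch, with made-up clock/voltage offsets purely for illustration:

[code]
# First-order dynamic power scaling: P ~ C * V^2 * f.
# The AVX offset figures below are illustrative assumptions, not Intel's.

def relative_dynamic_power(v_ratio, f_ratio):
    """Dynamic power relative to baseline for given voltage/frequency ratios."""
    return (v_ratio ** 2) * f_ratio

# Hypothetical AVX-512 offset: clocks dropped ~15%, voltage dropped ~5%.
print(f"{relative_dynamic_power(v_ratio=0.95, f_ratio=0.85):.2f}x baseline power")
# ~0.77x - lowering base/AVX clocks does claw back a sizeable chunk of the
# extra AVX-512 power, which is the argument being made above.
[/code]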
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
D3M1G0D:

Also, if increased heat is the problem then it would hardly make sense to go with a material which is less effective at transferring heat away from the CPU (that's some backwards thinking right there).
... what? The increased heat is a problem because thermal cycling of the indium solder itself creates microcracks - a problem that TIM doesn't have - which is the entire basis for Intel making the change. I don't understand how you can argue a side and not even know the foundation of the argument. Intel is saying that when you have a lot of heat in a very small area on a chip that's constantly cycling temperature, the solder forms voids and eventually cracks. When that occurs the chip can potentially fail. The hotter the chip and/or the smaller the area, the more prone it is to occur. Swapping the solder for TIM completely removes the issue because TIM doesn't crack like solder does.

The downside to the TIM is increased thermals - but like I said, we have absolutely zero information on how that affects the longevity of the chip. For all I know Intel ran a study, found that by using TIM they save 100K processors a quarter from microcracking, and found the downside to be that chips only last 12 years instead of 15 under normal use conditions. And in that case, what exactly is the issue? That a small portion of their users are uncomfortable knowing their processors are running slightly hotter? That an even smaller handful can't get to 4.9GHz but have to settle at 4.8? Those don't seem like significant tradeoffs for a massive reduction in failed chips.
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
Denial:

... what? The increased heat is a problem because thermal cycling of the indium solder itself creates microcracks - a problem that TIM doesn't have - which is the entire basis for Intel making the change. I don't understand how you can argue a side and not even know the foundation of the argument. Intel is saying that when you have a lot of heat in a very small area on a chip that's constantly cycling temperature, the solder forms voids and eventually cracks. When that occurs the chip can potentially fail. The hotter the chip and/or the smaller the area, the more prone it is to occur. Swapping the solder for TIM completely removes the issue because TIM doesn't crack like solder does.
As I said before, if reliability was an issue then Intel would be using TIM on their Xeon chips. To "fix" this issue only for their consumer chips and not their enterprise chips makes no sense whatsoever. Until Intel starts using paste on their Xeon chip, I refuse to believe this explanation.
Denial:

The downside to the TIM is increased thermals - but like I said, we have absolutely zero information on how that affects the longevity of the chip. For all I know Intel ran a study, found that by using TIM they save 100K processors a quarter from microcracking, and found the downside to be that chips only last 12 years instead of 15 under normal use conditions. And in that case, what exactly is the issue? That a small portion of their users are uncomfortable knowing their processors are running slightly hotter? That an even smaller handful can't get to 4.9GHz but have to settle at 4.8? Those don't seem like significant tradeoffs for a massive reduction in failed chips.
And for all we know, Intel ran a study that found that using TIM would save them $100K a year :P. I see no evidence whatsoever to suggest that chips are failing because of the solder (nor even any data to suggest higher temperatures from a failing solder). Where is the proof that a massive number of soldered chips are failing? On what basis are you making this assumption?
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
How ironic - Intel talks trash that AMD figuratively glues together dies, and yet here Intel is literally gluing together dies.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
D3M1G0D:

As I said before, if reliability was an issue then Intel would be using TIM on their Xeon chips. To "fix" this issue only for their consumer chips and not their enterprise chips makes no sense whatsoever. Until Intel starts using paste on their Xeon chip, I refuse to believe this explanation. And for all we know, Intel ran a study that found that using TIM would save them $100K a year 😛. I see no evidence whatsoever to suggest that chips are failing because of the solder (nor even any data to suggest higher temperatures from a failing solder). Where is the proof that a massive number of soldered chips are failing? On what basis are you making this assumption?
I already addressed a lot of these questions in my previous posts in this thread - some of which you already quoted and responded to. As I said before, the new generation of Xeon chips is not out yet. I haven't seen any information on whether they are soldered or not. I also don't know their power consumption, or whether they cross any internal threshold that Intel has determined for microcracking vs market sales. I don't know if Intel purposely rolled out the TIM in a tiered method to rule out any unseen issues prior to using it in their server-class chips. I don't know if the transistor densities of HEDT platforms, with their cache sizes/AVX registers, pushed portions of the chip over those internal thresholds in recent generations. I don't know if Intel is going to continue soldering their Xeon chips but requires an extra level of x-ray validation on the solder, a cost that's rolled into Xeon's higher margins, for the sake of increased longevity. I don't even know what that longevity is at specific temperatures or how the increased temperature from swapping to TIM changes it.

I don't know if a massive number of chips are failing; I never stated that they were. I provided a source that said .33% of Intel chips fail, which is ~300K chips a quarter. I don't know what portion of those failures are related to microcracking, which I also mentioned.

What I do know is that solder issues due to thermal cycling/negligent material choices are pretty well-known issues in the industry. They have been the cause of multiple high-profile failures in products previously, including the Xbox RROD and Nvidia's 65/55nm chip failures. I know that a long time ago Intel published a paper that described the issue of thermal cycling and the indium solder they were using on their processors. I know that Intel said they could mitigate this issue by changing the material the solder was made of - but that it would eventually be an issue again as chips shrunk. I know that Intel shrunk its chips, and I know that the power density of a modern Intel processor for a given area is higher than ever before - which was the cause of the initial issue as stated in the technical paper. I also know that having a team of MEs run a study on TIM, creating a pilot program to test it, retooling fabrication plants, and validating all of the above costs a lot more than 100K - probably closer to $10-15M overall.

So basically there are a ton of unknowns, a lot of evidence in Intel's favor, a lot of non-evidence in the favor of people just saying "THEY ARE DOING IT TO SAVE A BUCK!!!" and all I'm saying is that I'm on the fence because I don't draw conclusions on a whim based on my feelings for a tech company, which is clearly what 90% of the people posting in this thread are doing. I don't know whether going to TIM is a cost cutting measure, a measure to cut down on defective chip rates, or both, which is the most likely case. It just bothers me that people are able to form these definitive conclusions without knowing any of the background about what goes into decisions like this.
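As a quick sanity check on the ".33% of Intel chips fail, which is ~300K chips a quarter" arithmetic: that only works out under an assumed shipment volume, which is itself a rough guess (Intel does not publish per-quarter unit counts).

[code]
# Sanity check on the failure-count figure quoted above.
# The shipment volume is an assumption for illustration, not a published figure.
shipped_per_quarter = 90_000_000   # assumed CPUs shipped per quarter
failure_rate = 0.0033              # the 0.33% failure rate cited in the post

failed_units = shipped_per_quarter * failure_rate
print(f"{failed_units:,.0f} failed units per quarter")   # ~297,000

# Even if only a fraction of those failures were microcracking-related, the
# absolute counts are large enough that a reliability-motivated change is at
# least plausible - which is the (hedged) point being argued in this post.
[/code]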
https://forums.guru3d.com/data/avatars/m/270/270233.jpg
Denial:

So basically there are a ton of unknowns, a lot of evidence in Intel's favor, a lot of non-evidence in the favor of people just saying "THEY ARE DOING IT TO SAVE A BUCK!!!" and all I'm saying is that I'm on the fence because I don't draw conclusions on a whim based on my feelings for a tech company, which is clearly what 90% of the people posting in this thread are doing. I don't know whether going to TIM is a cost cutting measure, a measure to cut down on defective chip rates, or both, which is the most likely case. It just bothers me that people are able to form these definitive conclusions without knowing any of the background about what goes into decisions like this.
On the fence? Really? I would say that you are clearly and unequivocally taking the side of Intel here (more than anyone else, in fact). You are blindly accepting whatever they say and coming up with reasons to justify it. If soldering was as serious an issue as you claim then we should see articles mentioning how a large number of Xeon chips are failing, far out of proportion to Intel's consumer chips (all of Intel's consumer chips use paste). Frankly, I see no evidence that this is the case - if anything, I see the opposite, as shown by Puget Systems: https://www.pugetsystems.com/labs/articles/Most-Reliable-PC-Hardware-of-2016-872/

Total failure rate:
Intel Core i3/i5/i7: 1.0%
Intel Xeon E3/E5: 0.32%

We should also see evidence that a large number of AMD's consumer chips are failing compared to Intel's consumer chips, since all their chips are soldered. So far, I've heard nothing of the sort, and I'm not holding my breath waiting to find out.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
D3M1G0D:

On the fence? Really? I would say that you are clearly and unequivocally taking the side of Intel here (more than anyone else, in fact). You are blindly accepting whatever they say and coming up with reasons to justify it. If soldering was as serious an issue as you claim then we should see articles mentioning how a large number of Xeon chips are failing, far out of proportion to Intel's consumer chips (all of Intel's consumer chips use paste). Frankly, I see no evidence that this is the case - if anything, I see the opposite, as shown by Puget Systems: https://www.pugetsystems.com/labs/articles/Most-Reliable-PC-Hardware-of-2016-872/ Total Failure Rate: Intel Core i3/i5/i7 1.0% Intel Xeon E3/E5 0.32% We should also see evidence that a large number of AMD's consumer chips are failing compared to Intel's consumer chips, since all their chips are soldered. So far, I've heard nothing of the sort, and I'm not holding my breath waiting to find out.
Me playing devil's advocate and pointing out obvious flaws in an argument that is extremely one-sided is not "unequivocally taking the side of Intel" - that statement is ridiculous, especially given my post history on the subject and in general. I didn't say soldering was a serious issue, I said it's a potential issue that Intel has identified in the past. I used the words "potential", "maybe", "I don't know", "presumably", etc. many times in this thread - which is part of my point, there are like a billion unknowns.

I don't see why we'd see articles mentioning a large number of Xeon chips failing - you cite the .32% failure rate, a rate I quoted in this thread from the exact same source, and that doesn't seem like a large percentage; it's you that keeps adding "large" and "massive number". Consumer chips failing at a higher rate can be due to any number of causes - lesser validation requirements on consumer chips, operating environments, etc. I chose the Xeon failure rate as my basis for the initial argument for this reason - it's the best-case scenario.

I don't see why we'd see a large number of AMD chips failing, any more than Intel's chips are already. I also mentioned multiple times in this thread, and several other times in other threads, that AMD's chips do not support AVX-512, have half the number of registers for AVX2, and use significantly less power than Intel per given die space, especially with Ryzen (did Bulldozer ever even get AVX2?), and thus would have a lower rate of failure due to this issue and may not require the steps Intel is taking - nor do they ship the same number of processors.

And you say that I'm blindly accepting Intel and coming up with reasons - I'm not. I'm pointing out holes in the argument that "Intel does it to save a buck" or "Intel does it to limit overclocking". Where is the evidence that the TIM costs less than solder? Even if you do present that, where is the evidence that the savings offset the cost of changing the production lines? Where is your evidence against all the other gaping holes I pointed out? There is none - it's all just based on "I hate Intel", which is fine - I hate them too - just not enough to baselessly form conclusions based on nothing.

Regardless, I'm repeating myself over and over at this point - I could have composed the above from quotes of myself that already existed in the thread. So this is going to be my last one.
https://forums.guru3d.com/data/avatars/m/262/262208.jpg
Hi there. Personally I wouldn't delid my CPU, although there are reasons to do it, from lower temperatures to lower voltages as well. But after suffering a CPU memory controller (IMC) failure on my 5960X, I wouldn't do that to my chips - and I'm mostly running soldered CPUs anyway. Intel RMA has been pretty fast; I received a new 5960X after 2 days, which is awesome service from Intel. Hope this helps. Thanks, Jura
https://forums.guru3d.com/data/avatars/m/266/266726.jpg
Aura89:

Not sure that makes any sense. The larger portion underneath it would allow for a larger die, the 14nm wouldn't have anything to do with anything, and the top layer is completely and totally pointless in either case, because "allowing" for a larger die does not mean that a smaller die would require a second, smaller top layer.
I would disagree that it's totally pointless, especially if Intel is pasting a chip that is already in production with a different package onto X299. Kabylake-X is not multilayered, which only creates more questions as to why, since you would think it would make more sense for Kabylake to be pasted onto the 2066 socket. [spoiler] https://i.imgur.com/RBTBGiP.jpg [/spoiler]
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
Denial:

Again, I don't know if this is the case or not - but a lot of it makes sense given what Intel and others have said about solder cracking issues.
It's funny how "solder cracking issues" were never an issue until Intel went TIM. Not to mention that if it was such an "issue" - and a cost saving - you'd expect AMD to have done it, most likely before Intel for that matter, since they are the ones needing to penny-pinch. Sorry, but all these conspiracy theories you keep pushing in this thread are a bunch of complete and total nonsense, fed by Intel and gobbled up by people who want to believe Intel is their lord and savior.
https://forums.guru3d.com/data/avatars/m/231/231931.jpg
Denial:

You can delid a soldered CPU and get better thermals and with x299 the delid difference is far less than previous architectures.
No it's not. It's the same crap as before. [youtube=f3qdJQxXbUg] Here's a video of a guy getting a 21C reduction, from 100C to 79C. Hell, it could be even more than that if 100C throttling didn't occur. So no, it's a great idea to get rid of that junk if possible.