Ice Lake for Mainstream Delayed to 2020

austin865a:

True, but does it cost more to fab everything needed to make a smaller chip? I would think a tried-and-true, refined chip would cost less to make in the long run than always die-shrinking chips.
Good reasoning, but you forgot the marketplace. Right now Intel is doing as you suggest with the low-end chipsets. The smarter move is doing what Apple, AMD, and to a lesser extent Qualcomm have done: establish a partnership with the fab itself. Since all of the above design their own ICs and/or have been in the fab business themselves, their designs play to the strengths of the new process, and front-loading their contracts with capital investment results in a lower cost per unit, with the expense defrayed over several years. In other words, the money was well spent and is delivering a lower cost per generation than the fab business has historically seen, which is "freaking out" all the other fabs without the deep pockets, partnerships, and technology. Only Samsung is in a position that could be called favorable.
vbetts:

I'm pretty sure it's not 10% faster single core? That's a lot bigger of a difference than it sounds. Also let's keep it civil.
Sure, no ill intent in it. I just find it amusing when people exaggerate performance numbers and base their whole argument on that wrong assumption. About the single core performance, the aggregate difference is around 10% in any case, hwbench says 12%: http://hwbench.com/cpus/intel-core-i7-8700k-vs-amd-ryzen-7-2700x Just depends on the app you're using but they're really not far off from one another on average.
austin865a:

True, but does it cost more to fab everything needed to make a smaller chip? I would think a tried-and-true, refined chip would cost less to make in the long run than always die-shrinking chips.
Typically the cost comes down pretty fast, though these days the cost per transistor hasn't changed a whole lot (after the initial risk production). You still get other benefits from smaller chips: mainly lower power consumption, higher frequencies, and density increases that allow more transistors per chip. A great example of how this can become a problem is the GV100 (815 mm²). It is built on a 16nm variant that allows very large chips, but Nvidia would be lucky to get one fully functional chip per wafer, and you cannot feasibly make the chip any bigger, even on a 16nm variant, which at this point is a mature process node. With 7nm, Nvidia could easily double the number of transistors (not that it would be cheap).
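The die-size economics above can be sketched with the standard gross-dies-per-wafer estimate and a simple Poisson yield model. This is an illustrative sketch: the edge-loss correction is a textbook approximation, and the 0.1 defects/cm² figure is an assumed value, not a published Nvidia or TSMC number.

```python
import math

def gross_dies(wafer_diameter_mm, die_area_mm2):
    # Classic estimate: wafer area / die area, minus partial dies at the
    # wafer edge (the pi*d/sqrt(2*S) correction term).
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2, defects_per_cm2):
    # Simple Poisson yield model: Y = exp(-D * A), area converted to cm^2.
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)

die_area = 815.0                       # GV100 die area from the comment
dies = gross_dies(300, die_area)       # 300 mm wafer
y = poisson_yield(die_area, 0.1)       # 0.1 defects/cm^2 is an assumption
print(dies, round(y, 3), round(dies * y))
```

Even with these optimistic assumptions only a few dozen candidate dies fit on a wafer, and defect yield scales down exponentially with die area, which is why huge dies like GV100 ship with some units disabled rather than fully functional.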
xIcarus:

Sure, no ill intent in it. I just find it amusing when people exaggerate performance numbers and base their whole argument on that wrong assumption. About the single core performance, the aggregate difference is around 10% in any case, hwbench says 12%: http://hwbench.com/cpus/intel-core-i7-8700k-vs-amd-ryzen-7-2700x Just depends on the app you're using but they're really not far off from one another on average.
Then it means that for this test set there is only a tiny IPC difference. Intel has a 9% clock advantage and 12% higher performance in this particular test set, so the IPC difference here is under 3%. In the worst-case scenario, where AMD was sitting on their asses doing almost nothing and Zen 2 delivers only a 5% IPC improvement, they would gain a tiny IPC lead in this test set. Add a 10% clock gain from 7nm, and they match Intel in per-core performance.
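The arithmetic in the comment above can be checked directly, since per-core performance is roughly clock × IPC. The 12% and 9% figures are the ones quoted in the thread; the +5% IPC and +10% clock Zen 2 numbers are the commenter's hypothetical, not measured data.

```python
perf_ratio = 1.12   # Intel ~12% faster single-core in the hwbench set (quoted above)
clock_ratio = 1.09  # Intel ~9% clock advantage (quoted above)

# performance = clock * IPC, so the IPC ratio is the perf ratio over the clock ratio
ipc_ratio = perf_ratio / clock_ratio
print(f"Intel IPC lead: {(ipc_ratio - 1) * 100:.1f}%")  # ~2.8%, i.e. under 3%

# Commenter's hypothetical Zen 2: +5% IPC and +10% clock over current AMD
amd_now = 1 / perf_ratio            # AMD relative to Intel today
zen2 = amd_now * 1.05 * 1.10
print(f"Hypothetical Zen 2 vs Intel: {zen2:.3f}")  # ~1.031, a small lead
```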
austin865a:

Good point. I tend to never look at the bigger picture. In the past, many companies that dug their heels in and kept refining a fab came out on top in their niche markets. The NexGen CPU was refined all the way to the AMD K6-III+ and was a killer chip. VIA's C-series CPUs were well known for their use in ultra-low-power AES embedded systems in Asia. A lot of HDD controller chips still in use today go back to the '90s, same for other small chipsets for USB, Ethernet, and so on. But that model no longer works like it did in the past. Today it's all die shrinks, lower power draw, and more cores for each new gen. Everyone seems to want something new and far faster with lower power draw, despite slower software and less reliability.
It was always like that. That's why my first PC had a 2 MHz chip. People always want more, and old things get reduced to the level of toys. Today, the Arduino and Raspberry Pi are toys that blow things of the past out of the water. An Arduino, at a tiny price, does what a big and expensive PLC did 20 years ago. A new Raspberry Pi 3 costs a tenth of what a decent netbook does, and it greatly outperforms anything with a similar thermal envelope (power consumption) from just a few years ago. People simply want more; that's what makes us human. The amount of computational power in mankind's hands is a bit crazy. Your home PC can deliver the same results a supercomputer did in the past. Where did those supercomputers of the past get us? What are the limits of those we have today? It's already a singularity; people just have not realized it yet.