Intel Halts Xeon Phi Accelerator Knights Hill Development

Raja has spoken!!!
SweenJM:

That's pretty interesting. It was a pretty big project for a good while. I have always wondered why Intel has never gotten serious about graphics.
Maybe because the Xeon Phi is an even more "niche" product than what Nvidia (1st place) and AMD (2nd place) offer... it is very good for computation, but the products from the green and red teams are much easier to work with and much more versatile (ARM/x86/x64/custom configs). The experience they have is a real advantage over Intel, which is still pretty young at this game. As for Intel in graphics: each release of their IGP is better than the last, and you can no longer say "their IGPs are so weak you can do nothing with them"... at some point I figure they will do some great stuff.
Clawedge:

Raja has spoken!!!
Was about to say it's something to do with Raja for sure; they don't need that anymore. I think Raja will work some wonders in Intel's GPU world. Can't wait!
SweenJM:

I have always wondered why Intel has never gotten serious about graphics.
Oh, they were serious all right nearly a decade ago... what did they spend on LRB? $5 billion? Apparently hooking up a bunch of Pentiums on a single die wasn't competitive enough for graphics, even though they were so boastful of their accomplishments at first. Xeon Phi was the Larrabee salvage job.
I can't say I'm surprised by this. Considering Intel's approach against Epyc (or just Epyc in and of itself), Knights Hill just doesn't make sense. And yes, I'm aware these are server parts; even in the server market, these products are too niche.
SweenJM:

That's pretty interesting. It was a pretty big project for a good while. I have always wondered why Intel has never gotten serious about graphics.
To my understanding, this wasn't meant to be a GPU, but rather a large cluster of x86 Atom cores sold as an add-in board (AIB). But yes, it does raise the question of why Intel didn't just put their existing GPU architecture on a discrete card and improve upon it. Despite what people think, it is actually decent; we just happen to get really underwhelming and crippled variants of it. I wouldn't run out and buy an Intel GPU for gaming, but I'd be open to one for workstations, OpenCL, and transcoding.
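
As a hypothetical aside (not from the thread): because the Phi really is a pile of x86 cores, its native programming model is plain threaded CPU code rather than a GPU kernel language. A minimal C/OpenMP sketch of the kind of code it runs, assuming a standard toolchain:

/* Hypothetical sketch: ordinary C/OpenMP, the kind of x86 code a Xeon Phi
 * runs. A self-booting Knights Landing chip runs this natively; the PCIe
 * Knights Corner cards needed it cross-compiled (or offloaded) with
 * Intel's compiler.
 * Build: gcc -fopenmp saxpy.c -o saxpy */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* SAXPY spread across every hardware thread the OS exposes --
     * on a 60-core Knights Corner card that would be ~240 threads. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("max threads: %d, y[0] = %.1f\n", omp_get_max_threads(), y[0]);
    free(x);
    free(y);
    return 0;
}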
schmidtbag:

To my understanding, this wasn't meant to be a GPU, but rather a large cluster of x86 Atom cores sold as an add-in board (AIB).
It is... but made for computing.
rl66:

It is... but made for computing.
That's what Nvidia's Tesla or AMD's FirePro S series are: still GPUs, but intended only for parallelization. Those still have all the capabilities necessary to do 3D rendering if you really wanted them to, without having to install any 3rd-party drivers.

Xeon Phi is not a GPU; it's a cluster of CPUs. Xeon Phis are designed to run x86 code, and to my understanding, their drivers are not capable of graphics rendering (you'd need a 3rd-party software renderer).

With actual GPUs, you can use something like OpenCL and it works on any hardware that supports it, out of the box. Xeon Phi supports OpenCL too, but there are caveats; Intel seems to prefer that customers use its own proprietary compilers.

Keep in mind that the main difference between a many-core CPU and a GPU is how they're designed to handle calculations. GPUs are built to process massively parallel tasks and are awful at multitasking. CPUs, meanwhile, are best at multitasking rather than parallelization. This is why CPUs use things like Hyper-Threading while GPUs don't/shouldn't.

This page has good visual representations of why many-core CPUs are fundamentally different from GPUs: https://code.msdn.microsoft.com/windowsdesktop/NVIDIA-GPU-Architecture-45c11e6d Scroll down to the part that says "2. Hardware Architecture". You can see in the diagrams how each CPU core gets its own dedicated L1+L2 cache and registers, whereas GPUs group many simple cores into clusters that share those resources.
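
To make the "works out of the box" point concrete, here is a minimal, hypothetical C sketch (not from the thread) that enumerates whatever OpenCL devices the installed runtimes expose. The labels are the standard OpenCL device-type constants; under Intel's OpenCL runtime a Xeon Phi typically reports as CL_DEVICE_TYPE_ACCELERATOR, while GeForce/Radeon/Intel IGP parts report as CL_DEVICE_TYPE_GPU:

/* Minimal OpenCL device enumeration (hypothetical example).
 * Build (Linux): gcc devices.c -lOpenCL -o devices */
#include <stdio.h>
#include <CL/cl.h>

static const char *type_name(cl_device_type t) {
    if (t & CL_DEVICE_TYPE_GPU)         return "GPU";
    if (t & CL_DEVICE_TYPE_ACCELERATOR) return "Accelerator (e.g. Xeon Phi)";
    if (t & CL_DEVICE_TYPE_CPU)         return "CPU";
    return "Other";
}

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        char pname[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof pname, pname, NULL);
        printf("Platform: %s\n", pname);

        cl_device_id devs[8];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devs, &ndev) != CL_SUCCESS)
            continue;  /* platform with no usable devices */

        for (cl_uint d = 0; d < ndev; d++) {
            char dname[256];
            cl_device_type dtype;
            cl_uint cus;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof dname, dname, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof dtype, &dtype, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof cus, &cus, NULL);
            printf("  %s [%s], %u compute units\n",
                   dname, type_name(dtype), cus);
        }
    }
    return 0;
}

The same host code runs against whichever device answers, which is the portability being described above; the kernels themselves still need per-device tuning, which is where the Phi's "caveats" come in.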