Intel Halts Xeon Phi Accelerator Knights Hill Development
Clawedge
Raja has spoken!!!
rl66
-Tj-
Texter
schmidtbag
I can't say I'm surprised by this. Considering Intel's positioning against Epyc (or Epyc in and of itself), Knights Hill just doesn't make sense. And yes, I'm aware these are server parts - even in the server market, these products are too niche.
To my understanding, this wasn't meant to be a GPU, but rather a large cluster of x86 Atom-derived cores on an add-in board (AIB).
But yes, it does raise the question of why Intel didn't just build a discrete card on its existing GPU architecture and improve on it. Despite what people think, that architecture is actually decent; we just happen to get really underwhelming and crippled variants of it. I wouldn't run out and buy an Intel GPU for gaming, but I'd be open to one for workstations, OpenCL, and transcoding.
rl66
schmidtbag
https://code.msdn.microsoft.com/windowsdesktop/NVIDIA-GPU-Architecture-45c11e6d
Scroll down to the part that says "2. Hardware Architecture". You can see in the diagrams how each CPU core gets its own dedicated L1+L2 cache and registers. GPUs, meanwhile, group cores into clusters that share those resources.
That's what Nvidia's Tesla or AMD's FirePro S series are; they're still GPUs, just intended purely for parallelization. They still have all the capabilities necessary to do 3D rendering if you really wanted them to, without having to install any 3rd-party drivers. Xeon Phi is not a GPU; it's a cluster of CPUs. Xeon Phis are designed to run x86 code, and to my understanding, their drivers are not capable of graphics rendering (short of using a 3rd-party software renderer). With actual GPUs, you can use something like OpenCL and it works out of the box on any hardware that supports it. Xeon Phi does support OpenCL, but with caveats - Intel seems to prefer that customers use its own proprietary compilers.
Keep in mind that the main difference between a many-core CPU and a GPU is how they're designed to handle calculations. GPUs are built to process massively-parallel tasks, but they're awful at multitasking. CPUs, meanwhile, are best at multitasking rather than parallelization. This is why CPUs use things like Hyper-Threading while GPUs don't/shouldn't. The link above has good visual representations of why many-core CPUs are fundamentally different from GPUs.
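A rough sketch of the two workload shapes described above (plain Python, purely illustrative; the function names are made up for this example, not from any real API):

```python
# GPU-friendly shape: the SAME operation applied to every data element
# (data parallelism) -- each "lane" runs an identical instruction stream,
# so thousands of simple cores can execute it in lockstep.
def gpu_style(data):
    return [x * 2.0 + 1.0 for x in data]  # one kernel, many elements

# CPU-friendly shape: many UNRELATED tasks with branching and per-task state
# (task parallelism / multitasking) -- each task takes a different code path.
def cpu_style(tasks):
    results = []
    for name, arg in tasks:
        if name == "parse":
            results.append(len(arg))
        elif name == "sum":
            results.append(sum(arg))
        else:
            results.append(None)
    return results

print(gpu_style([1.0, 2.0, 3.0]))
print(cpu_style([("parse", "abc"), ("sum", [1, 2]), ("noop", None)]))
```

A GPU handles the first shape well because every element follows the same instructions; the second shape diverges per task, which is where a CPU's large caches, branch prediction, and Hyper-Threading pay off.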