Intel Talks About Discrete Graphics Processor at ISSCC

https://forums.guru3d.com/data/avatars/m/246/246171.jpg
It's weird - as of writing, I probably dislike Intel more than anyone else in this thread, and yet I feel like I'm the only one who doesn't think this is going to be a failure. I know that most of you only care about gaming, but I'm pretty sure Intel isn't planning on marketing this as a gaming chip. I see this being a competitor to the Quadro and FirePro series. Despite what most of you think, Intel's IGPs aren't that bad when you consider their energy efficiency and non-3D capabilities. It's been, I think, three years since Iris Pro graphics were released, and they still hold up very well. Up until Raven Ridge was released, pretty much the most compact, cheap, and energy-efficient way to build your own system capable of 4K video decode was with Intel's IGPs. To my recollection, their OpenCL performance is also relatively good. With proper dedicated memory, I think these GPUs will actually be worth getting, depending on your workload; a bad choice for gamers, though. As for the drivers, here's what I find a bit funny: in Windows, they're very neglectful of 3D and compute, but video decode is great. In Linux, they're very attentive to 3D and compute, but video decode is very limited.
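To put the OpenCL point above in concrete terms, here is a minimal sketch of how you can see what an Intel iGPU actually exposes to compute workloads by enumerating its OpenCL devices. It assumes the pyopencl package and a working Intel OpenCL runtime are installed; the reported names and numbers will of course vary by system.

```python
# Minimal sketch: list OpenCL platforms/devices and a few capability fields.
# Assumes pyopencl is installed and an Intel OpenCL runtime is present.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for dev in platform.get_devices():
        print(f"  Device: {dev.name}")
        print(f"    Compute units : {dev.max_compute_units}")
        print(f"    Clock (MHz)   : {dev.max_clock_frequency}")
        print(f"    Global memory : {dev.global_mem_size // (1024 * 1024)} MiB")
        print(f"    OpenCL version: {dev.version}")
```

On a machine with an Iris Pro class iGPU, this kind of query is a quick sanity check before deciding whether a compute workload is worth running on it.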
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
schmidtbag:

It's weird - as of writing, I probably dislike Intel more than anyone else in this thread, and yet I feel like I'm the only one who doesn't think this is going to be a failure. [...] As for the drivers, here's what I find a bit funny: in Windows, they're very neglectful of 3D and compute, but video decode is great. In Linux, they're very attentive to 3D and compute, but video decode is very limited.
I've built Intel's drivers on Linux multiple times. The number of additional components needed is almost unbelievable, so it seems wise to make a script for each part and chain them - but the next build needs a different set of dependencies, again and again. The worst thing is that they don't list those dependencies in full, so you find out about one missing dependency at a time as you compile. (The existence of Intel's tool that can deliver them automatically doesn't help much, since it's only available for some distros.)
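As a rough illustration of the chained build Fox2232 describes, the sketch below walks an ordered list of stack components and stops at the first failing step, which in practice is usually where a missing dependency surfaces. The component names, directory layout, and plain autotools-style commands are assumptions for illustration only; the real Intel Linux graphics stack mixes build systems and the exact order varies by release.

```python
# Minimal sketch: build a chain of driver-stack components in order,
# stopping at the first failure (typically a missing dependency).
# Component names, paths, and build commands are illustrative assumptions.
import subprocess
import sys

COMPONENTS = [
    # (source directory, build commands) - hypothetical checkout layout
    ("libdrm",             ["./autogen.sh", "make", "sudo make install"]),
    ("libva",              ["./autogen.sh", "make", "sudo make install"]),
    ("intel-vaapi-driver", ["./autogen.sh", "make", "sudo make install"]),
    ("mesa",               ["./autogen.sh", "make", "sudo make install"]),
]

def build(directory, commands):
    for cmd in commands:
        try:
            result = subprocess.run(cmd, shell=True, cwd=directory)
        except OSError as exc:
            print(f"Could not run '{cmd}' in {directory}: {exc}", file=sys.stderr)
            return False
        if result.returncode != 0:
            # This is the "one missing dependency at a time" moment.
            print(f"Build of {directory} failed at: {cmd}", file=sys.stderr)
            return False
    return True

for directory, commands in COMPONENTS:
    if not build(directory, commands):
        sys.exit(1)
print("All components built.")
```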
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Fox2232:

I've built Intel's drivers on Linux multiple times. The number of additional components needed is almost unbelievable [...] The worst thing is that they don't list those dependencies in full, so you find out about one missing dependency at a time as you compile. [...]
That's pretty typical of a lot of things you build yourself. I personally don't bother building anything that complex unless I'm given complete and coherent instructions. As an Arch Linux user, the AUR makes building from the latest source a lot less of an arduous process.
https://forums.guru3d.com/data/avatars/m/196/196284.jpg
schmidtbag:

[...] Despite what most of you think, Intel's IGPs aren't that bad when you consider their energy efficiency and non-3D capabilities. [...] To my recollection, their OpenCL performance is also relatively good. [...]
Look at the R5 2400G review and see where the i7-5775C sits in the iGPU charts. Iris Pro was pretty damn competitive with AMD's past iGPUs. Its OpenCL performance was actually amazing for an iGPU.
data/avatar/default/avatar18.webp
Can it run Crysis?
https://forums.guru3d.com/data/avatars/m/271/271131.jpg
eddieobscurant:

Can it run Crysis?
Yes, at 320x240 with 256 colors at 2fps ;-)
sykozis:

[...] Iris Pro was pretty damn competitive with AMD's past iGPUs. Its OpenCL performance was actually amazing for an iGPU.
I bought a laptop some years ago with Intel Iris Pro 5200 graphics (on an i7-4770HQ), and they gave me a Steam code for GRID 2 with it. I never actually used the code, but it seems the GPU could manage GRID 2; otherwise the giveaway would make no sense at all.
https://forums.guru3d.com/data/avatars/m/270/270169.jpg
sykozis:

Look at the R5 2400G review and see where the i7-5775C sits in the iGPU charts. Iris Pro was pretty damn competitive with AMD's past iGPUs. Its OpenCL performance was actually amazing for an iGPU.
The problem with the i7-5775C (and all the other Crystalwell-using parts) is the insane cost of the on-package eDRAM (in addition to the complexity it adds to construction and implementation by requiring an MCM package), and the fact that its use was literally the only reason Iris was even remotely competitive with AMD at all. Basically, their iGPU arch was total garbage with no easy way to fix it aside from a massive clean-slate revamp. But instead of starting that revamp back then, they chose to kick the can down the road for a couple more years (until they recently hired Raja Koduri away from AMD to finally deal with said can) by doing what AMD simply could not afford financially: throwing a bunch of expensive on-package memory at the problem so they could steal all the thunder from Carrizo's smoking iGPU performance - which was obviously never going to be a sustainable long-term solution.

After they beat AMD out of its tiny moment in the limelight by throwing a whole bunch of eDRAM (128MB!) at Broadwell's iGPU, they proceeded to walk this line of progression back entirely over the next few years, because the math for putting 128MB of eDRAM on every high-end CPU simply doesn't add up. Especially when that eDRAM is also used as an L4 cache by the CPU, as in Broadwell's case, which gave absolutely unheard-of performance boosts in a number of areas (i.e. making it incredibly desirable, and thus harder to walk back or get rid of, the longer people have access to it). So they removed the L4 caching capability from all later eDRAM-packing chips for no technical reason whatsoever, since those kinds of gen-to-gen performance boosts are absolutely unacceptable to "5% increase every year" 2010s Intel (and would have added pressure to keep eDRAM and spread it throughout the stack, rather than the opposite, which is what they actually did). Then they cut the die in half, from 128MB down to 64MB, for most of the Skylake models with Crystalwell to reduce costs, and by Kaby Lake that was the only eDRAM package size in use - found on a whopping ONE SKU.

So we went from Broadwell, where eDRAM use was nearly ubiquitous in the mid-to-high-end chips, in a big 128MB amount that the CPU could also use as L4 cache, to a single Kaby Lake model with half as much memory and no ability to use it as L4. And that finally brings us to today and Intel's current CPU lines, Kaby Lake-R and Coffee Lake: across both lineups there is NOT A SINGLE SKU WITH eDRAM. That should tell you everything you need to know about the long-term sustainability/competitiveness of Intel's eDRAM iGPU solutions: there isn't any. It was too expensive for mass adoption then, and it's still too expensive for mass adoption now.

The major difference today is that, unlike Carrizo's GCN-based iGPU, the Vega part in Raven Ridge is SO freaking far ahead of Intel's stuff that simply adding 64MB, 128MB, or even 256MB of on-package eDRAM wouldn't be enough for Intel's part to close the gap and come out on top. So the primary reason for doing it with Broadwell despite the significant cost outlay (making AMD's technically superior iGPU tech look bad and tanking the Carrizo launch, while simultaneously keeping their neediest client, Apple, happy) no longer exists, and I can't imagine we'll be seeing any more of Crystalwell in the future than we are right now (which is none).
They hired Raja away to clean-slate a new GPU arch for that very reason: so they don't have to throw massive stacks of money at stuff like Crystalwell just so their iGPU parts can compete with AMD's. I think the shift away from Crystalwell was also another major factor in Intel's decision to go in a completely different direction with Kaby Lake-G than they did with Broadwell, by licensing what's most likely a slightly cut-down version of Vega Mobile (i.e. 20- and 24-CU SKUs of a 28-CU chip) and 4GB of HBM that actually WILL perform at a level commensurate with its cost - unlike the horribly skewed cost/performance ratio (talking iGPU usage exclusively) of something like on-package eDRAM.
https://forums.guru3d.com/data/avatars/m/261/261885.jpg
Fox2232:

Intel can deliver whatever dGPU they want. People are not going to buy them for gaming till their drivers improve.
The drivers are good, no issues here.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
Havel:

The drivers are good, no issues here.
Sure they are. Except... take the feature set you can configure in AMD's or nVidia's driver and then compare it to Intel's. Intel has maybe 20% of the settings?
https://forums.guru3d.com/data/avatars/m/196/196284.jpg
Fox2232:

Sure they are. Except... take the feature set you can configure in AMD's or nVidia's driver and then compare it to Intel's. Intel has maybe 20% of the settings?
It's hard to ignore the fact that Intel's graphics driver "just works" though.... Of course, the iGPU lacks capability.... We won't really see how well Intel's graphics drivers really work until they release something that's actually capable of heavy 3D graphics rendering. Given the limited capability of Intel's iGPU, there's no reason to have any more settings than actually necessary.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
sykozis:

It's hard to ignore the fact that Intel's graphics driver "just works" though.... Of course, the iGPU lacks capability.... We won't really see how well Intel's graphics drivers really work until they release something that's actually capable of heavy 3D graphics rendering. Given the limited capability of Intel's iGPU, there's no reason to have any more settings than actually necessary.
True. Yet it remains to be seen whether Intel can deliver both dGPU performance and a driver worthy of that performance. 10+ years ago AMD/ATi struggled with drivers, and even today people throw that in AMD's face.
https://forums.guru3d.com/data/avatars/m/196/196284.jpg
Fox2232:

True. Yet it remains to be seen whether Intel can deliver both dGPU performance and a driver worthy of that performance. 10+ years ago AMD/ATi struggled with drivers, and even today people throw that in AMD's face.
Only time will tell. People have always been more critical of ATI/AMD than Intel or NVidia. AMD could release a new GPU arch tomorrow with a 10x performance improvement over the 1080Ti and people would still bash AMD over something.... Personally, I'm a fan of technology. I welcome Intel to enter the graphics market and I look forward to seeing what they deliver, if anything.... I'm just not holding my breath after Larrabee. Larrabee was supposed to be a graphics card. Ended up being an accelerator card instead.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
sykozis:

Only time will tell. People have always been more critical of ATI/AMD than Intel or NVidia. AMD could release a new GPU arch tomorrow with a 10x performance improvement over the 1080Ti and people would still bash AMD over something.... Personally, I'm a fan of technology. I welcome Intel to enter the graphics market and I look forward to seeing what they deliver, if anything.... I'm just not holding my breath after Larrabee. Larrabee was supposed to be a graphics card. Ended up being an accelerator card instead.
The best-case scenario would be an early release of a compute-capable GPU, mass-produced at the expense of CPU production. Miners would be satisfied, and we would have an easier time getting AMD's and nVidia's GPUs. And fewer Intel CPUs = higher AMD sales.