"We'll have Zen moments... and we will put pressure on NVIDIA," said Intel CEO


schmidtbag:

AMD will survive, but I predict they're going to take a proportionately greater loss than Nvidia. CUDA and the tensor cores are what really give Nvidia's hardware an edge in the market, along with the fact that Nvidia usually has the highest overall performance. Most people who buy AMD do so for the lower prices and for being less proprietary.

So here's the real challenge for Intel: they simply won't have a product that's technologically competitive with Nvidia, and AMD's market, ostensibly 12x smaller than Nvidia's, doesn't exactly sound like the most appealing target to investors, especially when you consider Intel won't be taking 100% of AMD's market share. Intel is further challenged by trying to shake impressions from 10 years ago. You could argue that Intel's strategy is to undercut AMD's prices so much that it even entices Nvidia users to buy, simply for the good value, but that's going to be really hard given today's market and chip shortages.

It doesn't end there, though. Even if they have a 4K-ready GPU in stock with good DXR performance and a competitive MSRP, you still have drivers to worry about as a Windows user. Intel is notorious for prematurely abandoning their Windows GPU drivers when a new product is released, and their current Xe drivers leave a lot to be desired. As far as I'm concerned, most of us might as well skip these GPUs. They might be enticing to Linux users, so long as they can be bought for a reasonable price, but that's about it.
JamesSneed:

In the near term I agree. As more can be shoved into an APU due to node shrinks, I do think mid- to lower-end APUs from both Intel and AMD will stick it to Nvidia.
Agreed, though APUs are a bit different in the sense that they're not affected by miners, so they're kind of in their own bubble anyway. Nvidia's APUs are all ARM-based, so they cater to a different market. Bear in mind though that APUs are still heavily restricted by memory bandwidth, so fitting more GPU cores on a die isn't (and hasn't been) the problem.
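To put a rough number on that bandwidth point (back-of-the-envelope figures for a typical configuration, not any specific product): dual-channel DDR5-6400 is a 128-bit bus at 6400 MT/s, or about 102 GB/s, and an iGPU has to share that with the CPU cores. Even a midrange discrete card with 16 Gbps GDDR6 on a 128-bit bus gets around 256 GB/s all to itself. That gap is why piling more GPU cores into an APU doesn't buy much on its own.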
I'm personally not getting into Intel's first generation. I'm a little older and as such don't want any headaches. I'd give the second or third gen of their GPUs a serious look once they prove themselves.
Yes, I would strongly recommend others follow in your footsteps. And I think when it comes to Intel, it really ought to only take a 2nd generation to get things right. They've been making GPUs for a while and, contrary to what many claim, their GPUs have been pretty good for a while; they just haven't really pushed for high-performance rasterization. But scaling things up is easier said than done, and their drivers aren't optimized, so that's why it's going to take a whole generation until we see a more polished result.
I do want all three to be competitive long term. Having three players would put an end to Nvidia's proprietary shenanigans. Devs will gravitate to more open solutions so they don't have to code stuff three times over.
Yup, though we're still a few years away from that. In the gaming world, Nvidia's proprietary shenanigans are subtle and don't really alienate users of other hardware. In the professional world, stuff like CUDA and OptiX is going to be a major challenge for Intel to overcome. Frankly, I blame AMD for this, since they didn't put enough development effort into OpenCL; I only blame AMD because they were the only ones who actually had an incentive to do so. Nvidia obviously wouldn't want to, and everyone else (Intel, Qualcomm, Broadcom, TI, Samsung, VIA, etc.) didn't have GPUs powerful enough to make OpenCL a worthwhile time investment. Now though, Intel realizes they have to pick up the slack, and I suppose the good thing is they've definitely got the money to do it.
JamesSneed:

The APU side of things, I have a feeling, is going to get interesting for AMD down at 2nm when they move to GAAFET. Moving to GAAFET greatly reduces SRAM voltage leakage, so SRAM scaling should jump up a ton, plus you're on 2nm. I say that because we might see 512MB to 1GB Infinity Caches becoming perfectly feasible. Yeah, I know that sounds crazy right now, but we are talking two full node shrinks over 7nm and a move from FinFET to GAAFET. It's also very probable we will see TSVs and separate SRAM dies, just like Zen3D, since you get about 2x higher density using TSMC's SRAM libraries on 7nm (and they should be able to optimize densities a lot more on GAAFET). Anyhow, I suspect a big APU push from AMD and Intel, and not just the "good enough for office work" push, but actual good FPS in gaming at 1080p and pretty good at 1440p. I'm guessing 4 years from now.
Really hard to say, honestly. I'm actually not really looking forward to it, because even at 2nm such a cache would be huge. Stacking the chips helps, but not on the production side, so I suspect these could be really expensive. Meanwhile, the bigger the cache, the slower it gets. For now, bigger caches are improving performance (at least in modern tasks), but there gets to be a point where, if an APU (or even just a CPU) really demands that much performance, you have to wonder whether DDR5 is already a total waste of time.
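For a rough sense of scale (taking the quoted post's optimistic assumptions at face value): AMD's current 64 MB V-Cache chiplet is reportedly somewhere around 36-41 mm² of 7 nm silicon. 512 MB is 8x the bits, so even if GAAFET plus two shrinks really delivered something like a 4x jump in SRAM density (a big if), you'd still be looking at roughly twice today's die area in pure SRAM, before counting the TSVs. Stackable, sure, but not cheap.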
As for OpenCL, my hope is that AMD and Intel work together on open standards, which would eventually force Nvidia to do so as well. Developers would love it, as it reduces costs and complexity, and we would all benefit from a compatibility standpoint.
OpenCL in and of itself doesn't need more work; it's fine. The reason Nvidia dominated with CUDA is their extremely good and comprehensive documentation and SDK. Nvidia made a lot of their own libraries, they have easy-to-read example code, and their documentation is written in a way that amateur programmers can understand. I learned how to write a CUDA program involving computer vision in a matter of 2 days; it took me weeks to figure out just how to begin with OpenCL. People in open source communities get in a hissy fit whenever they see programs like Blender, BOINC, and Meshroom use CUDA, but when you're a volunteer or part-time developer, you're going to learn what is most accessible, and CUDA is just the obvious choice when you're actually doing the development work. Currently, AMD is pushing a good effort with open compute standards, but really only at the enterprise level. I predict Intel is going to try hard to convince hobbyist programmers to use OpenCL.
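To illustrate the accessibility gap, here's a minimal toy sketch (my own example for illustration, not code from any of the projects mentioned): a complete CUDA program that thresholds a dummy grayscale image. The equivalent OpenCL program needs all of this plus host-side code to enumerate platforms and devices, create a context and a command queue, and compile the kernel source at runtime before it can launch anything.

// toy_threshold.cu -- illustrative only; build with: nvcc toy_threshold.cu
#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel: binarize an 8-bit grayscale image.
__global__ void threshold(const unsigned char *in, unsigned char *out,
                          int n, unsigned char cutoff)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (in[i] > cutoff) ? 255 : 0;
}

int main()
{
    const int n = 1920 * 1080;                      // pretend 1080p frame
    unsigned char *h_in  = new unsigned char[n];
    unsigned char *h_out = new unsigned char[n];
    for (int i = 0; i < n; ++i) h_in[i] = i % 256;  // dummy image data

    unsigned char *d_in, *d_out;
    cudaMalloc(&d_in,  n);                          // device buffers
    cudaMalloc(&d_out, n);
    cudaMemcpy(d_in, h_in, n, cudaMemcpyHostToDevice);

    threshold<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 128);  // launch

    cudaMemcpy(h_out, d_out, n, cudaMemcpyDeviceToHost);
    printf("first pixels: %d %d %d\n", h_out[0], h_out[1], h_out[2]);

    cudaFree(d_in);
    cudaFree(d_out);
    delete[] h_in;
    delete[] h_out;
    return 0;
}

That's the whole program, kernel included, and it reads like ordinary C++; the learning curve is basically one launch syntax and two memory-copy calls, which is exactly why part-time developers gravitate to it.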
Deep dive into Intel's Sapphire Rapids, the next-gen Xeon server chips (by Ian Cutress) [youtube=FWixK_dE9WA]