OpenCL and Vulkan benchmarks were run on Arc A770 and A750 GPUs.

I find these results more interesting than most because there's a lot less optimization to do for compute workloads, meaning these results more closely represent the true potential of these GPUs.
schmidtbag:

I find these results more interesting than most because there's a lot less optimization to do for compute workloads, meaning these results more closely represent the true potential of these GPUs.
Aren't AMD's drivers always considered the reason for AMD underperforming in these kinds of workloads? I haven't yet heard anyone praising the Intel graphics card drivers, so it ought to be unknown how optimised they are throughout the whole spectrum of stuff a GPU can be put through.
Since OpenCL is basically dead at this point - Intel seems to be the only one actively developing OpenCL drivers, while both AMD and NVIDIA have put theirs in maintenance mode - those results aren't really that interesting. The Vulkan results are much more interesting, since that's an API everybody is actively developing and updating their drivers for.
Funny, NVIDIA and AMD both have OpenCL 3.0 drivers.
Astyanax:

Funny, NVIDIA and AMD both have OpenCL 3.0 drivers.
Yes, but what you have to know is that when OpenCL 2.x was released, NVIDIA didn't support any of the new features in that version. Then the Khronos Group (the people who sort of manage the standard, and a lot of others too) released OpenCL 3.x, which is not much different from version 2.x, except that all the 2.x features are now optional. So NVIDIA basically still only supports the 1.x feature set, but can call its drivers 3.x because of this.

AMD, on the other hand, has dropped support for OpenCL on their CPUs completely, which was sort of the point of OpenCL (write compute code once and run it on a CPU, a GPU and any other OpenCL-compatible device). AMD GPUs are supported for OpenCL 2.x (and thus also 3.x), but internally AMD has switched to ROCm and HIP, which is sort of their version of NVIDIA's CUDA. HIP is structured almost identically to CUDA, and there is even a cross-compilation tool that allows HIP code to be compiled for NVIDIA cards (by converting it to a form that can be fed to NVCC, NVIDIA's CUDA compiler). So there is no interest at AMD in further developing OpenCL.

That is why I stated that these companies have put their OpenCL drivers in maintenance mode: no new development is done, but bugs get fixed and any changes required to make them work on current OSes are still made.
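To give an idea of what "structured almost identically" means, here's a minimal sketch (my own toy example, not anything from AMD's samples) of a HIP vector add. Rename the hip* calls to cuda* and swap hipLaunchKernelGGL for the vadd<<<grid, block>>>(...) launch syntax and you essentially have the CUDA version:

// Minimal HIP vector add - illustrative sketch only, error checking omitted.
#include <hip/hip_runtime.h>
#include <vector>
#include <cstdio>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // same indexing model as CUDA
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);

    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));               // cudaMalloc in CUDA
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha.data(), n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb.data(), n * sizeof(float), hipMemcpyHostToDevice);

    dim3 block(256), grid((n + 255) / 256);
    hipLaunchKernelGGL(vadd, grid, block, 0, 0, da, db, dc, n);  // vadd<<<grid, block>>>(da, db, dc, n) in CUDA

    hipMemcpy(hc.data(), dc, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);                    // expect 3.0
    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}

That near 1:1 mapping is why compiling HIP for NVIDIA cards mostly amounts to passing the code through to NVCC, as mentioned above.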
Kaarme:

Aren't AMD's drivers always considered the reason for AMD underperforming in these kinds of workloads? I haven't yet heard anyone praising the Intel graphics card drivers, so it ought to be unknown how optimised they are throughout the whole spectrum of stuff a GPU can be put through.
Not necessarily. AMD has actually been very competitive in OpenCL pretty much since TeraScale 2, and they'd done practically nothing to optimize until maybe a year ago. The reason they're never chosen is that they lack CUDA, which not only limits what they can do, but CUDA is also just so much better. Nvidia really optimized CUDA for their platform. They wrote most of the libraries themselves. Their documentation is excellent. CUDA is also significantly easier to implement. CUDA is so much better that many open-source developers use it despite the fact it requires closed-source binaries to work.

In the past 3 years or so, AMD has realized how much money they've been losing out on in the server GPU market. As a result, they've been working on ROCm and HIP. They're basically playing catch-up with Nvidia at this point. The good news is, they actually have a chance to catch up - AMD's hardware is fine, they already have their toes in the GPU server market, and compute is a lot easier to optimize for than games. If AMD plays their cards right, I believe they can compete with Nvidia faster than Intel can compete in the gaming market. The key word is "can", though - if they truly want to succeed, they will need to adopt CUDA. That's the only way they can convince people to switch. Even if their performance is worse, being compatible is more important.

Intel never really cared that much about compute because their GPUs were too weak to be worthwhile. Their OpenCL performance was fine for what the chips were, but still not noteworthy. They're now in the same camp as AMD, and it seems like Intel is making an effort to improve non-CUDA compute.
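To illustrate the "easier to implement" part, here's a rough sketch (error checking and cleanup omitted, so not production code) of the host-side boilerplate plain OpenCL needs for the same vector add that CUDA or HIP handle with a few cudaMalloc/cudaMemcpy calls and a single kernel launch:

// Minimal OpenCL 1.2-style vector add host code - illustrative sketch only.
#define CL_TARGET_OPENCL_VERSION 120
#include <CL/cl.h>
#include <vector>
#include <cstdio>

static const char* kSrc =
    "__kernel void vadd(__global const float* a, __global const float* b,"
    "                   __global float* c) {"
    "    int i = get_global_id(0);"
    "    c[i] = a[i] + b[i];"
    "}";

int main() {
    const size_t n = 1 << 20;
    std::vector<float> ha(n, 1.0f), hb(n, 2.0f), hc(n);
    cl_int err;

    // Pick a platform and a GPU device, then build a context and queue.
    cl_platform_id platform;   clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;       clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    // Compile the kernel source at runtime.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);

    // Buffers, arguments, launch, readback.
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), ha.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), hb.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, n * sizeof(float), hc.data(), 0, nullptr, nullptr);

    printf("c[0] = %f\n", hc[0]);   // expect 3.0
    // clRelease* calls omitted for brevity.
    return 0;
}

None of this is hard, but multiply it across a large codebase and the appeal of CUDA's terser model plus its ready-made libraries is obvious.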
@schmidtbag AMD server cards are found in some of the most powerful supercomputers in the world, so obviously they can't suffer from poor optimisation. I was just referring to the consumer cards and what was going on in this particular article.
Kaarme:

@schmidtbag AMD server cards are found in some of the most powerful supercomputers in the world, so obviously they can't suffer from poor optimisation. I was just referring to the consumer cards and what was going on in this particular article.
Right, but my point was that with compute, there isn't much to optimize regardless of which market you're talking about. In a lot of the consumer space where Nvidia pulls ahead, it's either because they're using CUDA or some other thing (like OptiX), but when it comes to an apples-to-apples comparison (where both are using the same API), there is no clear winner. Sometimes even for the same program but a different workload, one brand will do better than the other. As far as I understand, the differences come down to the hardware rather than the drivers.
schmidtbag:

Right, but my point was that with compute, there isn't much to optimize regardless of which market you're talking about. In a lot of the consumer space where Nvidia pulls ahead, it's either because they're using CUDA or some other thing (like OptiX), but when it comes to an apples-to-apples comparison (where both are using the same API), there is no clear winner. Sometimes even for the same program but a different workload, one brand will do better than the other. As far as I understand, the differences come down to the hardware rather than the drivers.
Can a 3060 Ti beat a 6800 in raw power in Vulkan and barely lose in OpenCL? That seems unlikely. That's why I was reminded of indications I've seen in the past that AMD cards don't pull their weight in these applications due to poor driver support.
Kaarme:

Can a 3060 Ti beat a 6800 in raw power in Vulkan and barely lose in OpenCL? That seems unlikely. That's why I was reminded of indications I've seen in the past that AMD cards don't pull their weight in these applications due to poor driver support.
AMD has some catching up to do with Vulkan, but like I said, performance varies drastically with OpenCL. A single benchmark, including a real-world use case, is inconclusive. Again, even the same program with different workloads will yield different winners, but also, performance can vary substantially between performance tiers or pro-grade models. I'm sure you're thinking "yeah, obviously a lower-end GPU will perform worse", but perhaps not in the way you're thinking. Half-precision and double-precision float rates can vary across different performance tiers, or in some cases a GPU could be entirely incompatible with them (even for workstation models). In some cases, a mid-tier workstation GPU could run laps around a flagship desktop GPU in workloads heavy with FP64, while in gaming it would be the complete opposite. That's actually where Titan cards became a problem, because they had much better FP64 than the other GeForce cards at a small fraction of the cost of Quadros.

Here's where things get a bit interesting today: if you go with an RX 6000 GPU, you get a 2:1 ratio for FP16 and 1:16 for FP64. With CDNA2, it peaks at 8:1 for FP16 and 1:1 for FP64, though some of the lower-end models only offer 2:1 for FP16 and 1:2 for FP64. If you go with an RTX 3000, you get 1:1 for FP16 and a miserable 1:64 for FP64. If you go with the A100 40GB, you get 4:1 for FP16 and 1:2 for FP64.

Here's what you can take away from that:
1. Nvidia really doesn't want you using desktop GPUs for workstation tasks.
2. Since AMD and Nvidia trade blows in various OpenCL workloads, the GeForce cards' crap FP16 and FP64 performance implies their FP32 design is better.
3. If you're on a budget and don't need any of Nvidia's technologies, AMD is the obvious choice.
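To put those ratios into perspective, here's some purely illustrative arithmetic. The 20 TFLOPS FP32 baseline below is made up and identical for every row, so it only shows how the quoted ratios scale, not how the actual cards compare:

// Purely illustrative: what the FP16/FP64 rate ratios quoted above would mean
// for theoretical throughput, starting from a made-up 20 TFLOPS FP32 baseline.
#include <cstdio>

int main() {
    const double fp32_tflops = 20.0;   // hypothetical baseline, not a real card

    struct { const char* name; double fp16_ratio; double fp64_ratio; } gpus[] = {
        {"RX 6000 (RDNA2)",  2.0, 1.0 / 16.0},
        {"RTX 3000",         1.0, 1.0 / 64.0},
        {"A100 40GB",        4.0, 1.0 / 2.0},
        {"CDNA2 (top SKU)",  8.0, 1.0},
    };

    for (const auto& g : gpus)
        printf("%-18s FP16 ~%6.1f TFLOPS, FP64 ~%6.2f TFLOPS\n",
               g.name, fp32_tflops * g.fp16_ratio, fp32_tflops * g.fp64_ratio);
    return 0;
}

Real cards obviously don't share an FP32 baseline, so the takeaway is just how brutal the ratio differences are, especially that 1:64 FP64 on GeForce.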
I'd like to see Intel Arc OC'd... I think the Intel Arc A770 will perform better than the 3070 card with an OC.
I'm not so certain these can be trusted. If this is to be perceived as their "true potential", then the 3060 should trade blows with the 6700 XT at the highest level of optimization, and it doesn't on any level.
Still no DX numbers?
Imglidinhere:

I'm not so certain these can be trusted. If this is to be perceived as their "true potential", then the 3060 should trade blows with the 6700 XT at the highest level of optimization, and it doesn't on any level.
Hehe, those cards are a bit of a wildcard with how new their drivers are. They might end up famous for Intel "fine wine"... oooor Intel abandons them early or never improves the drivers enough and they get the "fine milk" aging rep... We will see eventually!
Catching up with Intel news regarding the A770, it seems like a really interesting product! Already competing with the RTX 3060 at $100 less - that's fantastic.