GPU Compute render perf review with 20 GPUs


People do need to understand that in the world of compute rendering, nothing is as simple as it seems.
I partially disagree... - one thing is very simple: leaving aside price, ray tracing support in the industry, etc., the RTX 2080 Ti is still the king of the hill right now - will Big Navi challenge that? - we'll see.
Administrator
barbacot:

I partially disagree... - one thing is very simple: leaving aside price, support, etc., the RTX 2080 Ti is the king of the hill now - will Big Navi challenge that? - we'll see.
I did mean that from a broad perspective. V-Ray only supports CUDA, not OpenCL. So if you planned to run 3ds Max with V-Ray and you have a Radeon graphics card, you simply can't use it. Blender, in turn, offers CUDA for GeForce GTX and OptiX for RTX, but not OpenCL for either... however, OpenCL is the path to take in Blender for AMD Radeon cards. Regardless of it all, anyone is going to select the fastest API available to them.
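(Side note for anyone scripting renders: that backend choice can also be automated through Blender's Python API. A minimal sketch, assuming a Blender 2.8x build with the Cycles add-on enabled; treat it as illustrative rather than production code.)

```python
# Sketch: pick the fastest Cycles backend available -- OptiX on RTX,
# CUDA on older GeForce, OpenCL on Radeon -- then enable all its devices.
import bpy

prefs = bpy.context.preferences.addons['cycles'].preferences

for backend in ('OPTIX', 'CUDA', 'OPENCL'):
    try:
        prefs.compute_device_type = backend
        break
    except TypeError:
        # Backend not offered by this build/driver combination; try the next one.
        continue

prefs.get_devices()            # refresh the device list for the chosen backend
for device in prefs.devices:
    device.use = True          # tick every GPU (and CPU entry, if listed)

bpy.context.scene.cycles.device = 'GPU'
print("Cycles backend:", prefs.compute_device_type)
```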
From my point of view it is really a shame that AMD does not adopt CUDA. Yes, Nvidia “owns” and controls the future of CUDA, so it is not open in the “open source” sense, but it is certainly free. AMD could develop CUDA-enabled drivers whenever they wanted, and given the widespread adoption of this technology in high-performance computing it would be a gain for everybody. We at work use only Nvidia cards because we use CUDA-optimized software in our research, so the choice (if any) is simple. AMD should realise there is a profit to be made here as well, even if it is not their proprietary technology.
Great article and a very important topic, thank you! GPU rendering is a game changer. Instead of investing in the best CPU, you can buy a GPU and add more of them later, rather than building a whole new workstation. I miss the RTX 2070 (non-Super) in the review... would you add it, so the list will be complete? A small comment on the best value for money being the RTX 2060 Super: the RTX 2070 Super has NVLink enabled (unlike the non-Super and the 2060), which allows you to share/double memory. This is very important for rendering more complex scenes...
Administrator
Pepehl:

The RTX 2070 Super has NVLink enabled (unlike the non-Super and the 2060), which allows you to share/double memory. This is very important for rendering more complex scenes...
A very valid point yes, thanks for bringing that to my attention. If I can find some time I'll add a regular RTX 2070 as well.
Hi @Hilbert Hagedoorn, great article again mate, and thanks for doing it. I would like to see other render engines included as well, such as LuxRender, AMD Radeon ProRender, Arnold, Redshift, and the new kid on the block, FStorm. Indigo I'm not sure about - I have never used it and probably never will, because from what I remember its GPU path is slow.

Is GPU rendering a game changer? Hard to say; for many people it is, and it offers faster render times - OptiX can literally halve render times - but again this depends on the GPUs used and how well the scene is optimized. In my case I do high-poly scenes, and VRAM usage can be an issue there. It doesn't matter which renderer I use - Blender Cycles or E-Cycles, Octane, Iray, Poser SuperFly (which is based on Blender Cycles) or AMD ProRender - I still run into VRAM limits, which you don't see as much with regular CPU-based renderers like Corona. Maybe that is part of why so many movies and VFX companies still use Arnold, which remains the industry standard. In many cases I would love to have GPUs with at least 32 GB of VRAM; that would help a lot.

Choosing the right renderer also depends on more factors. For archviz, Corona and V-Ray are the gold standards, although Blender Cycles is close - but as with everything, it depends on textures, modelling skills, etc. I have tried Octane several times for archviz and never liked the renders; they never looked as good as Corona, V-Ray or FStorm. Memory sharing is also only available with some renderers, like Octane, Redshift and V-Ray - I have not tried it with Blender and other renderers. Hope this helps. Thanks, Jura
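(On the VRAM point: before kicking off a render it can help to see how much memory each card actually has free. A rough sketch using the third-party pynvml bindings for NVIDIA's NVML, so NVIDIA cards only; it only reports memory, it cannot predict what a given scene will need.)

```python
# Rough pre-flight check: list total/used VRAM per NVIDIA GPU (pip install pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):          # older pynvml versions return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"GPU {i} ({name}): {mem.used / 2**30:.1f} / {mem.total / 2**30:.1f} GiB used")
finally:
    pynvml.nvmlShutdown()
```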
Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
Finally #RTXOn 😀
Pepehl:

Great article and a very important topic, thank you! GPU rendering is a game changer. Instead of investing in the best CPU, you can buy a GPU and add more of them later, rather than building a whole new workstation. I miss the RTX 2070 (non-Super) in the review... would you add it, so the list will be complete? A small comment on the best value for money being the RTX 2060 Super: the RTX 2070 Super has NVLink enabled (unlike the non-Super and the 2060), which allows you to share/double memory. This is very important for rendering more complex scenes...
Are you sure NVLink works like it does on Quadro, sharing CUDA cores and memory, and not just as normal SLI (with higher bandwidth) on the 2070 Super, 2080 and 2080 Ti?
nizzen:

Are you sure NVLink works like it does on Quadro, sharing CUDA cores and memory, and not just as normal SLI (with higher bandwidth) on the 2070 Super, 2080 and 2080 Ti?
As far as I know, NVLink on the "gaming" RTX cards has less bandwidth than on Quadro, but it does share memory between two cards (Quadro can share across up to three cards, if I remember correctly). Taken from a Chaos Group article: "...new RTX cards also support NVLink, which gives V-Ray GPU the ability to share the memory between two GPUs..." According to their tests it had some impact on rendering speed (a bit slower with NVLink than without), but it enabled them to render scenes that needed more memory. More info is here: https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards
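(If you want to verify that the bridge is actually active, NVML can report the link state. A sketch with the pynvml bindings, assuming your driver exposes the NVLink queries; cards without NVLink simply report nothing.)

```python
# Sketch: report which NVLink links are active on each NVIDIA GPU (pip install pynvml).
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        active = []
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active.append(link)
            except pynvml.NVMLError:
                break  # link index not present, or NVLink unsupported on this card
        print(f"GPU {i}: active NVLink links: {active if active else 'none'}")
finally:
    pynvml.nvmlShutdown()
```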
barbacot:

From my point of view it is really a shame that AMD does not adopt CUDA. Yes, Nvidia “owns” and controls the future of CUDA, so it is not open in the “open source” sense, but it is certainly free. AMD could develop CUDA-enabled drivers whenever they wanted, and given the widespread adoption of this technology in high-performance computing it would be a gain for everybody. We at work use only Nvidia cards because we use CUDA-optimized software in our research, so the choice (if any) is simple. AMD should realise there is a profit to be made here as well, even if it is not their proprietary technology.
AMD wouldn't really benefit a whole lot from supporting CUDA. For one thing, Nvidia designed CUDA for their architecture. It's fine-tuned to a point AMD doesn't have a chance to compete with (AMD is struggling enough with DX, OpenGL, Vulkan, and OpenCL drivers as it is). Except for the few cases where people at home have an AMD GPU and want to run a CUDA-based application, I'm sure AMD will always be a worse choice when it comes to CUDA, simply because it will never be as refined or purpose-built. AMD would just be making themselves look worse by supporting it. Also, any research teams or corporations who opted for CUDA for in-house software deserve to be trapped in Nvidia's ecosystem. You aren't forced to use CUDA; OpenCL and Vulkan/SPIR-V are options on Nvidia too. CUDA isn't inherently better, it's just a more attractive choice because it's easier to develop in, thanks to Nvidia's abundant and actually helpful resources. Also, there are translation layers to run CUDA code on non-CUDA hardware. There is some additional overhead, but like I said before, you're not going to outperform Nvidia on CUDA code anyway.

EDIT: Believe me, the newest Nvidia GPU I have is Kepler-based and there is software I've wanted to use that depends on CUDA. But I don't think AMD should be responsible for making CUDA drivers. If developers really want their software to be widely adopted, they shouldn't use CUDA. If developers want flexibility, they shouldn't use CUDA.
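(To make the "you aren't forced to use CUDA" point concrete: the same trivial kernel runs unchanged on GeForce and Radeon through OpenCL. A minimal sketch with the pyopencl bindings; the kernel and names are purely illustrative.)

```python
# Vendor-neutral vector add in OpenCL via pyopencl (pip install pyopencl numpy).
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()      # picks whatever OpenCL device is available
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
b = np.random.rand(1 << 20).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
assert np.allclose(out, a + b)
```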
Pepehl:

As far as I know, NVLink on the "gaming" RTX cards has less bandwidth than on Quadro, but it does share memory between two cards (Quadro can share across up to three cards, if I remember correctly). Taken from a Chaos Group article: "...new RTX cards also support NVLink, which gives V-Ray GPU the ability to share the memory between two GPUs..." According to their tests it had some impact on rendering speed (a bit slower with NVLink than without), but it enabled them to render scenes that needed more memory. More info is here: https://www.chaosgroup.com/blog/profiling-the-nvidia-rtx-cards
Nice! This is great news! Thanks for the link 🙂 Too bad there is no such "sharing" for NVLink SLI in games, i.e. two cards working as one unit: double the CUDA cores and double the VRAM.
@Hilbert Hagedoorn A typo (I guess)
CUDA, however, is a closed API only to be used with GeForce graphics cards as NVIDIA believes they can get 'closer' to the hardware with their own APU, and thus squeeze out more performance.
The abbreviation marked with bold font should be "API".
Kaarme:

Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
I guess that, like in (some) games, GPUs need faster clock speeds for the I/O between GPU and CPU, or something...
mbk1969:

@Hilbert Hagedoorn A typo (I guess) The abbreviation marked with bold font should be "API".
I also get 😡 when someone writes internet without a capital "I", or hardware/software with an "s" in the plural form. The problem, I think 😉, is with me, as I learnt a lot during my Oracle database module (~20 years ago)... how to use syllables when creating a 'table'... and so on 🙂
barbacot:

From my point of view it is really a shame that AMD does not adopt CUDA. Yes, Nvidia “owns” and controls the future of CUDA, so it is not open in the “open source” sense, but it is certainly free. AMD could develop CUDA-enabled drivers whenever they wanted, and given the widespread adoption of this technology in high-performance computing it would be a gain for everybody. We at work use only Nvidia cards because we use CUDA-optimized software in our research, so the choice (if any) is simple. AMD should realise there is a profit to be made here as well, even if it is not their proprietary technology.
The reason AMD has not adopted CUDA is because they can't. As you wrote, Nvidia owns it, and no one else is allowed to adopt it because, as of right now, APIs can be copyrighted, and CUDA's API is. There is a court case in progress between Oracle and Google that will decide the future of this situation. If Google wins and APIs are not allowed to be copyrighted, then nothing can stop AMD from adopting CUDA other than a willingness to do so. For all I know, AMD could have their driver team working on CUDA support right now in the hope that Google will win. FWIW, the vast majority of industry opinions submitted to the court so far support Google. One more thing - I can understand why Nvidia would not want to license CUDA. They have a dominant position in data center GPUs and those are very high margin - the V100 costs around $9K. Opening CUDA for use by others could cut into that position.
Hi Hilbert. Just thought I'd ask: does it make a difference if COMPUTE is selected in the AMD drivers?
Gomez Addams:

One more thing - I can understand why Nvidia would not want to license CUDA. They have a dominant position in data center GPUs and those are very high margin - the V100 costs around $9K. Opening CUDA for use by others could cut into that position.
Strangely enough, Nvidia has no problem using HBM in those expensive V100 cards, despite HBM being a project AMD launched years ago. Now Nvidia even allows GeForce gamers to tap into the vast pool of FreeSync screens, which only exist because of AMD in the first place, though generic adaptive sync screens are appearing as well. Conversely, it would make sense for Nvidia to allow AMD to put the CUDA API to use. It's like Jensen can find one sleeve of his Leather Jacket® but not the other.
Administrator
xg-ei8ht:

Does it make a difference if COMPUTE is selected in the AMD drivers?
Unless I am completely overlooking it (and please do correct me if I am wrong), that setting is no longer present in the 2020 drivers.
Kaarme:

Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
People were talking about this on the Indigo Renderer forums, and a developer said: "Raw flops as a measurement is almost irrelevant. Data has to be fed to the arithmetic units. This has been a major problem for many years and is the limiting factor in most cases." https://www.indigorenderer.com/forum/viewtopic.php?f=1&t=14138&sid=eb3e0a25eab979b5504130a841ed1032&start=45 It's a vague explanation of the issue, but it could be related to why Radeons are sometimes slower despite having higher raw compute power...
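(A back-of-the-envelope way to see what "data has to be fed to the arithmetic units" means: compare a kernel's arithmetic intensity with the card's FLOPS-per-byte balance. The sketch below uses rough, illustrative spec figures, not measurements from the review.)

```python
# Simple roofline estimate: a kernel is memory-bound when bandwidth times its
# arithmetic intensity (FLOPs per byte moved) is lower than peak compute.
def attainable_gflops(peak_gflops, peak_bw_gbs, flops_per_byte):
    return min(peak_gflops, peak_bw_gbs * flops_per_byte)

# Streaming kernel like a*x + y: roughly 2 FLOPs for every 12 bytes read/written.
intensity = 2 / 12

# Rough, illustrative peak numbers (GFLOPS, GB/s) -- not benchmark data.
cards = [("GPU A", 13000, 616), ("GPU B", 10000, 448)]

for name, peak, bw in cards:
    ceiling = attainable_gflops(peak, bw, intensity)
    print(f"{name}: ceiling ~{ceiling:.0f} GFLOPS ({100 * ceiling / peak:.0f}% of peak)")
```

In other words, on a streaming workload both cards end up far below their paper FLOPS, and the one whose arithmetic units are fed better wins, which fits the developer's point.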