GPU Compute render perf review with 20 GPUs


nizzen:

Are you sure NVLink is working like it does on Quadro, with shared CUDA cores and memory, and not just as normal SLI (higher bandwidth) on the 2070S, 2080 and 2080 Ti?
In theory you can pool or share VRAM in OpenCL 2.0 renderers, and some renderers can use out-of-core memory, which is helpful. The more GPUs you have, the faster the render times will be, which is why I'm running 4 GPUs, and if my Zotac RTX 2080 Ti AMP doesn't sell I will cannibalise that card into my loop, hahaha, I hate having something unused. Have a look at rendering: adding an extra GPU will speed up the renders for sure. Hope this helps. Thanks, Jura
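For anyone who wants to try the multi-GPU route, enabling every installed GPU for Cycles can be scripted. This is only a minimal sketch assuming the Blender 2.8x-era Python API; whether you pick CUDA or OptiX depends on the cards in the machine.

```python
# Minimal sketch (Blender 2.8x-era Python API): enable all GPUs for Cycles.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"   # or "OPTIX" on RTX cards
prefs.get_devices()                  # refresh the device list

for device in prefs.devices:
    # turn on every GPU; leave the CPU entry off for a pure GPU render
    device.use = (device.type != "CPU")

bpy.context.scene.cycles.device = "GPU"
```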
Kaarme:

Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
This again depends: in some renderers AMD GPUs are comparable with their similar Nvidia gaming counterparts, as in LuxRender or AMD ProRender. Hope this helps. Thanks, Jura
Thanks HH! It's a pleasant change seeing a review devoted to professional apps and the GPU benchmark results. I would expect some improvement coming with the Ampere node change.
Wow, you actually managed to find an OpenCL application where NVIDIA is competitive, that's surprising.
Kaarme:

Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
Of course not all GPU compute is the same, but not all GPU-compute-accelerated rendering is the same either. Check, for example, LuxMark (based on LuxRender), where AMD is doing just fine.
#RTXOn 😀
Gomez Addams:

As you wrote, Nvidia owns it and no one else is allowed to adopt
This is incorrect. Nvidia has on multiple occasions offered to work with AMD to run CUDA on AMD hardware, and AMD has an in-house tool for converting CUDA applications.
There is no link to this forum thread from the article, fyi.
Hilbert Hagedoorn:

Unless I am completely overlooking it (and please do correct me if I am wrong), that setting is no longer present in the 2020 drivers.
Hilbert, if you click the cog on the top right and choose Graphics under the first list, you'll see an Advanced drop-down; click that and it's the second choice from the bottom.
Kaarme:

Interesting that AMD cards are really good for mining crypto, but not for boosting rendering speed. Not all GPU compute is the same, clearly.
That's not true. Nvidia cards are considerably faster and more efficient in more than 90 percent of mining algorithms. AMD cards are usually used for mining Ethash, but Ethash performance is not limited by GPU compute performance; it's mostly dependent on memory bandwidth, which AMD cards are usually good at.
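A rough back-of-the-envelope check of why Ethash scales with bandwidth rather than compute; the ~8 KiB of DAG traffic per hash is the commonly cited figure, not something taken from the review.

```python
# Sketch: Ethash is memory-bound, so hashrate is capped by bandwidth / bytes per hash.
# Assumed figure: ~64 reads of 128 B = 8 KiB of DAG traffic per hash.
BYTES_PER_HASH = 64 * 128  # 8192 bytes

def ethash_ceiling_mhs(bandwidth_gb_s: float) -> float:
    """Theoretical hashrate ceiling in MH/s for a given memory bandwidth in GB/s."""
    return bandwidth_gb_s * 1e9 / BYTES_PER_HASH / 1e6

# ~448 GB/s (RX 5700 XT / RTX 2070 Super class) -> roughly a 55 MH/s ceiling,
# which is why cards with similar bandwidth mine Ethash at similar rates
# regardless of how much raw compute they have.
print(round(ethash_ceiling_mhs(448.0), 1))
```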
Administrator
Athlonite:

Hilbert, if you click the cog on the top right and choose Graphics under the first list, you'll see an Advanced drop-down; click that and it's the second choice from the bottom.
No sir, it isn't... it might have become an architecture-dependent setting though, so I'll look some more with another architecture; this is Navi.
[attached screenshot: 7685.png]
Hilbert Hagedoorn:

Unless I am completely overlooking it (and please do correct me if I am wrong), that setting is no longer present in the 2020 drivers.
The setting is probably specific to Vega.
Would've been nice to see some CPU thrown in so we can compare how much better GPUs are vs CPUs.
I know it's different, but I once transcoded a 3-hour movie from H.264 4K HDR into H.265, first with only the CPU and then with NVENC on a 1080 Ti, and the fps pretty much doubled. It would be great to have one "gaming" and one "HEDT" CPU in there just to see the giant trench between GPU and CPU processing.
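For reference, that kind of CPU-vs-NVENC comparison is easy to reproduce with ffmpeg; a minimal sketch, assuming ffmpeg is installed, with placeholder filenames and settings (and it says nothing about quality parity between libx265 and NVENC).

```python
# Sketch of a CPU (libx265) vs GPU (NVENC) HEVC transcode, driven from Python.
# Filenames and encoder settings are placeholders.
import subprocess

SRC = "input.mkv"  # hypothetical 4K HDR H.264 source

# CPU-only encode with libx265
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "libx265", "-preset", "medium", "-crf", "20",
    "-c:a", "copy", "out_cpu.mkv",
], check=True)

# GPU encode with Nvidia's NVENC (hevc_nvenc)
subprocess.run([
    "ffmpeg", "-i", SRC,
    "-c:v", "hevc_nvenc", "-preset", "slow",
    "-c:a", "copy", "out_nvenc.mkv",
], check=True)
```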
cpy2:

Would've been nice to see some CPU thrown in so we can compare how much better GPUs are vs CPUs.
Hi there. Agree, it would be worthwhile to test CPU vs GPU in rendering, as most of these renderers do offer a CPU-only mode. A 3990X would be very close to an RTX 2080 Ti in some renderers, I suspect, and in Blender I think the 3990X would be slightly faster than the RTX 2080 Ti; then add a GPU to the mix and you have one hell of a render workstation. Hope this helps. Thanks, Jura
kakiharaFRS:

I know it's different, but I once transcoded a 3-hour movie from H.264 4K HDR into H.265, first with only the CPU and then with NVENC on a 1080 Ti, and the fps pretty much doubled. It would be great to have one "gaming" and one "HEDT" CPU in there just to see the giant trench between GPU and CPU processing.
Hi there. Not sure I would compare CPU vs GPU in video editing or video rendering; I think Linus and a few others did such tests a while back. Hope this helps. Thanks, Jura
From those results it appears to me the 2070 Super is the clear winner if you want to render using Blender. It is only 20% slower than the faster 2080 Ti using the OptiX API, and you can get it for 45% of the price! The cheapest 2080 Ti is about £950, the cheapest 2070 Super is £450... the most expensive, fastest 2070 Super is still only £582, FFS! Even a closer 2080 Super or 2080 is still only in the £650 area. The 2080 Ti should definitely be faster using OptiX: it has 140% of the cores of the 2080 Super but only 112% of the performance. Where did the other 28% go?? Against the 2070 Super it has 170% of the cores and only 120% of the speed.
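Putting rough numbers on that price/performance argument: the core counts below are the published Turing specs and the relative speed is the ~20% figure from the post above, so treat this as a sketch rather than data from the review.

```python
# Rough perf-per-pound arithmetic using the prices quoted above.
# Core counts are the published Turing specs (assumption, not from the review).
cards = {
    #                  cores, price (GBP), relative OptiX speed (2080 Ti = 1.00)
    "RTX 2080 Ti":    (4352, 950, 1.00),
    "RTX 2070 Super": (2560, 450, 1.00 / 1.20),  # ~20% slower per the post
}

for name, (cores, price, speed) in cards.items():
    print(f"{name}: {cores} cores, £{price}, relative speed {speed:.2f}, "
          f"speed per £1000: {speed / price * 1000:.2f}")

# 4352 / 2560 = 1.70x the cores, but per the post only ~1.20x the speed and
# ~2.1x the price, hence the perf-per-pound case for the 2070 Super.
```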
geogan:

From those results it appears to me the 2070 Super is the clear winner if you want to render using Blender. It is only 120% slower than faster 2080 Ti using the Optix API - and you can get it for 45% of price!
100% slower would be zero performance. Not sure what to make of 120% less performance, does it undo work?
Mufflore:

100% slower would be zero performance. Not sure what to make of 120% less performance, does it undo work?
The 2070 Super is at 120% while the 2080 Ti is at 100%, relative render time. It's obvious what I was talking about. So maybe it's 20% slower then.
Why no Quadros?
Kaarme:

Strangely enough Nvidia has no problems using HBM in those expensive V100 cards, despite HBM being a project AMD launched years ago. Now Nvidia even allows GeForce gamers to tap into the vast pool of Freesync screens, which only used to exist because of AMD, though now there would be new generic adaptive sync screens as well. Conversely it would make sense Nvidia would allow AMD to put the Cuda API to use. It's like Jensen can find one sleeve of his Leather Jacket® but not the other.
HBM isn't owned by AMD. It was developed with SK Hynix, and AMD has no control over who uses it. FreeSync is just the name AMD gave to their implementation of Variable Refresh Rate; VESA calls it Adaptive-Sync, which was adopted and added to the DP 1.2a standard. If you want to get technical, Nvidia doesn't support AMD's FreeSync... nor any FreeSync monitor. Nvidia is supporting VESA's DP 1.2a interface standard. AMD doesn't benefit directly from Adaptive-Sync. Conversely, Nvidia does benefit directly from CUDA.