Gigabyte GeForce RTX 2080 GAMING OC 8G review

If it's of any use to anyone, my EVGA 1080 Ti SC2 at stock settings (and running in an x8 slot) does the V-Ray benchmark in 66 s.
Considering the following, I'm pretty impressed that the 2080 can trade blows with the Titan Xp...

Titan Xp vs RTX 2080:
- Transistor count: 12 billion vs 13.6 billion
- CUDA cores: 3840 vs 2944
- ROPs: 96 vs 64
- Memory size: 12 GB vs 8 GB
- Memory bus: 384-bit vs 256-bit
- Memory bandwidth: 547 GB/s vs 448 GB/s
- FP32 performance: 12.0 TFLOPS vs 10.0 TFLOPS
- TDP: 250 W vs 215 W
- Launch MSRP: $1,200 vs $799
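As a rough sanity check, the two headline numbers follow directly from the other specs in that list; here is a minimal sketch (the boost clocks are the public reference figures, which are not quoted in this thread):

```python
# Derive peak FP32 throughput and memory bandwidth from the listed specs.
# Boost clocks (1582 MHz Titan Xp, 1710 MHz RTX 2080) are reference values.
def fp32_tflops(cuda_cores, boost_ghz):
    return 2 * cuda_cores * boost_ghz / 1000.0  # FMA = 2 FLOPs/core/clock

def mem_bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps  # bus bits -> bytes per transfer

print(f"Titan Xp: {fp32_tflops(3840, 1.582):.1f} TFLOPS, "
      f"{mem_bandwidth_gbs(384, 11.4):.0f} GB/s")  # ~12.1 TFLOPS, 547 GB/s
print(f"RTX 2080: {fp32_tflops(2944, 1.710):.1f} TFLOPS, "
      f"{mem_bandwidth_gbs(256, 14.0):.0f} GB/s")  # ~10.1 TFLOPS, 448 GB/s
```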
SniperX:

Considering the following, I'm pretty impressed that the 2080 can trade blows with the Titan Xp...

Titan Xp vs RTX 2080:
- Transistor count: 12 billion vs 13.6 billion
- CUDA cores: 3840 vs 2944
- ROPs: 96 vs 64
- Memory size: 12 GB vs 8 GB
- Memory bus: 384-bit vs 256-bit
- Memory bandwidth: 547 GB/s vs 448 GB/s
- FP32 performance: 12.0 TFLOPS vs 10.0 TFLOPS
- TDP: 250 W vs 215 W
- Launch MSRP: $1,200 vs $799
Except that nobody in their right mind compares this card to the Titan Xp, which was hilariously bad value at that price, but to the 1080 Ti.
Caesar:

The real question is: why buy it at all? (Please do not answer: ray tracing.) o_O
I think it's been pretty clear so far: most people agree there is no reason to buy one if you're already on a 10-series card, unless you want ray tracing.
buhehe:

Except that nobody in their right mind compares this card to the Titan Xp, which was hilariously bad value at that price, but to the 1080 Ti.
You're missing the point.
@Hilbert Hagedoorn Could you run/record the Star Wars and Futuremark ray-tracing demos with an RTX 2080 for comparison, please? I'd like some idea of how the lower RT core count fares and how viable cards other than the Ti are. Glad to see the Gigabyte triple-fan cooling performing well at full load; I bought the Windforce 2080 Ti. Nvidia is really trading blows with its AIB partners... that won't end well for them at this pace.
dmity84:

If it's of any use to anyone, my EVGA 1080 Ti SC2 at stock settings (and running in an x8 slot) does the V-Ray benchmark in 66 s.
Hi there,

My EVGA GTX 1080 Ti FE at 2113 MHz does this V-Ray benchmark in 62 seconds. The difference between OC and stock clocks is 4-6 seconds: at the stock 1911 MHz the time is 66-68 s, and with the 2113 MHz OC it is 62 s.

The difference between x8 and x16 is not that big in rendering. I have run the same three-GPU setup on a 5820K and now on a 5960X, and the difference in GPU-based renderers is minimal, in some cases zero.

Your V-Ray result is pretty good. What clocks are you running at stock? My 1080 Ti at stock runs 1911 MHz.

Hope this helps. Thanks, Jura
jura11:

Hi there,

My EVGA GTX 1080 Ti FE at 2113 MHz does this V-Ray benchmark in 62 seconds. The difference between OC and stock clocks is 4-6 seconds: at the stock 1911 MHz the time is 66-68 s, and with the 2113 MHz OC it is 62 s.

The difference between x8 and x16 is not that big in rendering. I have run the same three-GPU setup on a 5820K and now on a 5960X, and the difference in GPU-based renderers is minimal, in some cases zero.

Your V-Ray result is pretty good. What clocks are you running at stock? My 1080 Ti at stock runs 1911 MHz.

Hope this helps. Thanks, Jura
Hi, I see. I thought x8 would have some effect here; it does a little in 3DMark and gaming, but it's not that bad. The stock core boosts to 1898, or maybe 1911 MHz max at the beginning, and memory is at 5005 MHz, I think, according to Precision XOC. With +55 on the core and +530 on the memory, it was 62 seconds. I probably lost the silicon lottery, as it won't go over a +50 MHz offset in 3DMark; the max boost I get is 1974 MHz.
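For what it's worth, the measured gain is a bit larger than the core-clock bump alone would predict, which fits the +530 memory offset also contributing; a quick sketch using the figures quoted in these two posts:

```python
# Compare the measured V-Ray speedup against the core-clock increase,
# using the times and observed boost clocks from the posts above.
stock_time, oc_time = 66.0, 62.0        # seconds
stock_clock, oc_clock = 1911.0, 1974.0  # MHz, observed boost

time_gain = stock_time / oc_time - 1.0     # ~6.5% faster render
clock_gain = oc_clock / stock_clock - 1.0  # ~3.3% higher core clock
print(f"render speedup: {time_gain:.1%}, core-clock increase: {clock_gain:.1%}")
```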
Thanks for the review. Can you please explain your choice of the V-Ray benchmark over, say, the Blender GPU benchmark? The V-Ray benchmark is known not to work on AMD GPUs: if you go through their benchmark results list, you will not find any dedicated Radeon or Vega graphics cards, only Nvidia cards plus AMD CPUs/APUs and Intel CPUs. That sounds very fishy to me. Blender is open source, and both the Nvidia and AMD teams work on it to optimize for their products as best they can, so it is fair ground for comparison. You can see their latest benchmark results at http://download.blender.org/institute/benchmark/latest_snapshot.html.
Hilbert Hagedoorn (Administrator)
DSC2037:

Can you please explain your choice of the V-Ray benchmark over, say, the Blender GPU benchmark? The V-Ray benchmark is known not to work on AMD GPUs: if you go through their benchmark results list, you will not find any dedicated Radeon or Vega graphics cards, only Nvidia cards plus AMD CPUs/APUs and Intel CPUs. That sounds very fishy to me.
Hi DSC2037, it was added based on user requests here in the forums. One person wants this, the other that; I cannot do it all, so I simply picked V-Ray based on an audience request. Also, GPGPU is of limited interest to this reader base in general, ergo I do not want to spend heaps of time testing it. I had a discussion yesterday about the software, as I was uncertain which card uses which render path in V-Ray. It seems NV cards use CUDA whenever it is available and otherwise fall back to OpenCL, and OpenCL is the standard code path for AMD Radeon cards (whose results are still missing, simply for lack of time). Including measurements for them (even bad ones) would actually be a good thing, so that AMD puts more work into that; trust me, when we post it, they will notice and look into it. I've been made aware of Blender, but still need to look into it.
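For anyone curious what an OpenCL fallback would have to work with on their own machine, here is a minimal sketch that just enumerates OpenCL platforms and devices; it assumes the pyopencl package and a vendor OpenCL runtime are installed, and it does not inspect V-Ray itself:

```python
# List every OpenCL platform/device the drivers expose; an application
# falling back from CUDA to OpenCL can only pick from these.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices():
        kind = cl.device_type.to_string(device.type)
        print(f"  {kind}: {device.name}, "
              f"{device.global_mem_size // 2**20} MiB global memory")
```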
DSC2037:

Thanks for the review. Can you please explain your choice of the V-Ray benchmark over, say, the Blender GPU benchmark? The V-Ray benchmark is known not to work on AMD GPUs: if you go through their benchmark results list, you will not find any dedicated Radeon or Vega graphics cards, only Nvidia cards plus AMD CPUs/APUs and Intel CPUs. That sounds very fishy to me. Blender is open source, and both the Nvidia and AMD teams work on it to optimize for their products as best they can, so it is fair ground for comparison. You can see their latest benchmark results at http://download.blender.org/institute/benchmark/latest_snapshot.html.
Hi there,

I'm not sure the V-Ray benchmark uses OpenCL. I know their V-Ray RT render engine does, so it can run on AMD or any GPU that supports OpenCL, but I don't have an AMD GPU at home to test in V-Ray.

The V-Ray benchmark is great for testing many-core CPUs like Threadripper or its Intel counterparts, as it can use all available cores/threads. With CPU rendering you are also not limited by VRAM and CUDA cores, only by your RAM and CPU. V-Ray is still the most popular renderer for archviz when it comes to CPU rendering, though Corona and AMD ProRender are catching up quite quickly in render speed and quality, and they are cheaper or free.

Regarding Blender: OpenCL is usually slower than Nvidia's CUDA, and there are a few limitations to using OpenCL, so whether Blender is fair ground for comparing AMD vs Nvidia is hard to say. I use Blender, mostly with Cycles and CUDA; in the past I used AMD with OpenCL, and Nvidia with OpenCL renderers as well. LuxMark has mostly been used for testing or benchmarking GPU rendering performance, but CUDA-based renderers like Octane, Maxwell, Iray and SuperFly are now used more and more. Plus, Nvidia's OpenCL performance is not the best if you compare a GTX 1080 vs a Vega 64 in LuxMark or other OpenCL renderers.

A few years back I used only AMD GPUs for OpenCL rendering and work, as their OpenCL performance was, and still is, the best. The newer Turing generation has somewhat better OpenCL performance, but AMD hasn't released any new GPU this year, and the Vega 64 is a bit old for comparison with the Nvidia RTX range.

Hope this helps. Thanks, Jura
Hilbert Hagedoorn:

Hi DSC2037, it was added based on user requests here in the forums. One person wants this, the other that; I cannot do it all, so I simply picked V-Ray based on an audience request. Also, GPGPU is of limited interest to this reader base in general, ergo I do not want to spend heaps of time testing it. I had a discussion yesterday about the software, as I was uncertain which card uses which render path in V-Ray. It seems NV cards use CUDA whenever it is available and otherwise fall back to OpenCL, and OpenCL is the standard code path for AMD Radeon cards (whose results are still missing, simply for lack of time). Including measurements for them (even bad ones) would actually be a good thing, so that AMD puts more work into that; trust me, when we post it, they will notice and look into it. I've been made aware of Blender, but still need to look into it.
Hi @Hilbert Hagedoorn,

Corona and V-Ray are standards in archviz rendering, and they are used more and more in movies and other work. I agree that Guru3D readers have limited "desire" or interest in GPGPU benchmarks, but they are still good to have included, as both renderers can also run on the CPU and will utilise all cores/threads of a Threadripper or its Intel counterparts.

In theory you could try disabling CUDA in the Nvidia Control Panel and test whether the V-Ray benchmark starts with the OpenCL fallback. AMD has to use OpenCL, as there is no CUDA cross-compiler that would let AMD cards run CUDA apps; Otoy promised to bring such a cross-compiler, but it looks like it won't arrive anytime soon.

Regarding OpenCL performance: try LuxMark for fun with a GTX 1080 vs a Vega 64 and you will find the Vega 64 is a bit faster. I'm not sure how fast RTX is there, but I would suspect it is a lot faster than the older Pascal or Maxwell generations.

I would include Blender, as V-Ray or Corona will use all available cores/threads in rendering or benchmarking, and in Blender you can test CUDA or OpenCL performance, or use AMD ProRender, which is an OpenCL renderer, without having to switch anything. Blender is used more and more in game development, archviz and general rendering, simply because it is free and offers great features that otherwise cost crazy money in 3ds Max or Cinema 4D.

Hope this helps. Thanks, Jura
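To make the "CUDA when available, otherwise OpenCL" behaviour concrete, here is a toy illustration of that selection logic; it is not V-Ray's actual code, and it assumes the pycuda and pyopencl packages (falling through gracefully if either is missing):

```python
# Toy backend selection: prefer CUDA if a CUDA device is present,
# otherwise fall back to OpenCL, otherwise CPU. Not V-Ray's real logic.
def pick_compute_backend():
    try:
        import pycuda.driver as cuda  # assumes pycuda is installed
        cuda.init()
        if cuda.Device.count() > 0:
            return "CUDA"
    except Exception:
        pass  # no pycuda or no CUDA driver/device
    try:
        import pyopencl as cl         # assumes pyopencl is installed
        for platform in cl.get_platforms():
            try:
                if platform.get_devices():
                    return "OpenCL"
            except Exception:
                continue  # platform with no usable devices
    except Exception:
        pass  # no pyopencl or no OpenCL runtime
    return "CPU"

print(pick_compute_backend())
```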
That's funny: RTX = Rushed To eXpensive 🙂