GeForce RTX 3090 Benchmarks Surface Online

It's funny that people don't believe it's only 10-20% faster, given the benchmarks and the hardware facts about shader/ROP counts. I don't understand how or why anyone thinks it could be any faster. And there's no way an extra 14GB of VRAM costs $850 ($1500 - $650), so don't anyone say the price is justified by the extra VRAM.
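For the curious, here's that arithmetic as a quick Python sanity check. The $1500/$650 prices are the commenter's rounded figures, and the actual per-gigabyte cost of GDDR6X isn't public, so this only shows what the price gap would imply per extra gigabyte:

    # What does the 3090/3080 price gap imply per extra gigabyte of VRAM?
    # Prices are the commenter's rounded figures, not exact MSRPs.
    price_3090 = 1500        # USD
    price_3080 = 650         # USD
    extra_vram_gb = 14       # 24 GB on the 3090 minus 10 GB on the 3080

    implied_cost_per_gb = (price_3090 - price_3080) / extra_vram_gb
    print(f"Implied cost per extra GB: ${implied_cost_per_gb:.2f}")  # ~$60.71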
angelgraves13:

Very few DLSS-supported games thus far. We'll see what happens.
DLSS looks like shit anyway, nothing like the native image - it's very clearly upscaled, with this oversharpened look to try and compensate for the blurry image you get from upscaling... 1440p upscaled to 8K would look like... 1440p with a sharpening filter.
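As a toy illustration of what this commenter is describing (and emphatically not how DLSS works internally; DLSS is a learned reconstruction, not a plain resize), here's a minimal Pillow sketch that naively upscales a hypothetical 1440p capture to 8K and applies an unsharp mask; frame_1440p.png is a placeholder filename:

    # Naive "upscale + sharpen" look the comment describes. NOT DLSS:
    # DLSS reconstructs detail with a neural network; this just resizes.
    from PIL import Image, ImageFilter

    frame = Image.open("frame_1440p.png")                 # hypothetical 2560x1440 capture
    upscaled = frame.resize((7680, 4320), Image.BICUBIC)  # brute-force 8K upscale
    sharpened = upscaled.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
    sharpened.save("frame_8k_naive.png")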
ITGuru:

This gives me more reason not to prostitute my 2080 Ti but to calmly wait for the 3080 Ti variant to come out, and then carefully consider an upgrade. Forget the hype... use logic, facts, and critical thinking. I advise existing owners of a 2080 Ti to do the same: remember how much you paid for your cards before you make a decision to sell them and upgrade. Might even snag another 2080 Ti and SLI them, considering that you can't do that with 3080s.
There won't be a 3080 Ti if the gap from 3080 to 3090 is less than 20%, nor a 3080 Super. The die, as we already know, is the full-fat one, cut down for the 3080 because of yield issues (it's physically too big). 2021 brings the new MCM chiplet architectures from both companies.
fry178:

@David3k Just because the 3090 is using the full chip doesn't automatically mean it eliminates a Ti, especially since we don't know how many chips are "defective" (as in not full chips). @Loobyluggs You don't need the connector for SLI, as it's now going to be done in-game, so not sure why you state people have to buy a 3090 to do "SLI".
If there is not a big performance gap, like 40%, between the 3080 and the 3090, there won't be a 3080 Ti, while we already know there are two 3080 models with different VRAM. As for the physical die size, Nvidia cannot chop it as they see fit; it has to be done in a certain way to work. Regarding DX12/Vulkan async compute for mGPU, weren't some in here saying it was pointless when Nvidia couldn't scale over 50% while AMD could do 100% scaling? At least until people bothered to run such benchmarks two years ago with a Vega 64 and a 1080 Ti. How many games have we seen with async compute in all those years? Barely a handful, and none in the last two years (the last one was the Resident Evil remaster). Nvidia just drops support, like it did with the 3D Vision kit. And what happened to PhysX/GameWorks? When was the last time we saw a big game supporting them, and not some small studio making a C-rated game just to collect the Nvidia grant?

How is it 19.8% faster on average when the highest percentage is 11.5%? Did you mean 9.8%?

How did this leaker's 3080 numbers compare to the actual tested numbers? We can probably take a truckload of salt with this until Hilbert or other validated sites post the real numbers. Maybe it's real, or maybe Nvidia released a gimped driver to keep their publishing dates and will release the real public driver a few days before the NDA lifts. I would wait before confirming the sky is falling. No matter how expensive Nvidia is, they do deliver on performance.

If you count MHz and shader count, you get about 17% extra power between the 3080 and the 3090; if you take into account that at 4K we see up to an 11.5% perf improvement, I will say we are on point. Everyone knew from the start the 3080 was the big deal, and even a 3080 Ti can offer you this extra 10% perf if you have the extra money and if you're missing those 5 fps to get to 60 fps. You should be happy, not disappointed, that the reference card for gaming this year is either $500 or $700, instead of complaining that the $1300 one is not fast enough.
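For reference, that back-of-the-envelope calculation with Nvidia's published spec-sheet numbers looks like this; the exact percentage depends on which clocks you assume (sustained clocks under power limits are lower than boost), which is presumably where estimates between ~17% and ~20% come from:

    # Theoretical throughput ratio from shader count x boost clock.
    # Spec-sheet values; sustained clocks under power limits will differ.
    shaders   = {"RTX 3080": 8704,  "RTX 3090": 10496}
    boost_mhz = {"RTX 3080": 1710,  "RTX 3090": 1695}

    throughput = {gpu: shaders[gpu] * boost_mhz[gpu] for gpu in shaders}
    gain = throughput["RTX 3090"] / throughput["RTX 3080"] - 1
    print(f"Theoretical 3090 advantage: {gain:.1%}")  # ~19.5% on paper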
I'll wait to see Hilbert's benchmarks before I even consider making my mind up. These benches could be pure fantasy as far as we know. I'll stick with a bencher I trust.
David3k:

Why the hostility? I'm just stating what is apparent.
No hostility, just reality. You're saying a company that gets to label their products and claim what they are is wrong about their own... products...

With these prices and this performance, how badly would AMD have to fuck up not to score a win here?
David3k:

Actually, the 3090 IS the Ti card, not a Titan replacement as Nvidia claims. Not sure if they're still planning on releasing a Titan card, but if they do, I suspect it would be based on GA100, and that it would be priced at 2K USD. Just an educated guess.
It's not a Ti.
Loobyluggs:

Hey everyone! Remember when Nvidia decided to get rid of SLI? This will not end well.
They didn't get rid of it. They just left it up to developers to implement it in their games from the beginning of development.
All I can say is I cannot understand why people bought a 10GB card today!?! It is already obsolete even at 1080p in some games. I use more than the 8GB of VRAM on my 5700 XT in a few games today, and I'd never think that's going to go the way of less usage in the near future. People are going to be pissed when the 3080s/3070s drop with those 20GB and 16GB variants, because it just makes sense. And the people on here bantering about how sad it is that they release their cards the way they do, or how they do, need to get something extra going on in life... Most of these comments about RAM size on here are most likely made by people who don't even have that much system RAM!! Comment on something that is tangible to you, and not some negativity about your pipe dreams.
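If you'd rather measure VRAM usage than argue about it, here's a minimal sketch using the pynvml bindings (pip install nvidia-ml-py); this assumes an Nvidia GPU at index 0, so an AMD card like the 5700 XT would need vendor tooling such as radeontop instead:

    # Read current VRAM usage on an Nvidia GPU via NVML.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)   # bytes: .total / .used / .free
    print(f"VRAM used: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
    pynvml.nvmlShutdown()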

Was not expecting those numbers, though I still think professional reviews will tell the truth. Even more reason to wait and see if Big Navi is going to come close to all the hype.
geogan:

It's funny that people don't believe it's only 10-20% faster, given the benchmarks and the hardware facts about shader/ROP counts. I don't understand how or why anyone thinks it could be any faster. And there's no way an extra 14GB of VRAM costs $850 ($1500 - $650), so don't anyone say the price is justified by the extra VRAM.
Simple answer is: "People ignore TBP."
Fediuld:

If there is not a big performance gap, like 40%, between the 3080 and the 3090, there won't be a 3080 Ti, while we already know there are two 3080 models with different VRAM. As for the physical die size, Nvidia cannot chop it as they see fit; it has to be done in a certain way to work. Regarding DX12/Vulkan async compute for mGPU, weren't some in here saying it was pointless when Nvidia couldn't scale over 50% while AMD could do 100% scaling? At least until people bothered to run such benchmarks two years ago with a Vega 64 and a 1080 Ti. How many games have we seen with async compute in all those years? Barely a handful, and none in the last two years (the last one was the Resident Evil remaster). Nvidia just drops support, like it did with the 3D Vision kit. And what happened to PhysX/GameWorks? When was the last time we saw a big game supporting them, and not some small studio making a C-rated game just to collect the Nvidia grant?
Almost every DX12/Vulkan game uses async compute; there just aren't options to turn it on/off. PhysX is the default physics engine in most game engines and just recently got an update (https://news.developer.nvidia.com/announcing-nvidia-physx-sdk-5-0/). Most, if not all, RTX games are using GameWorks libraries. Your posts continue to be rife with misinformation.
geogan:

It's funny that people don't believe it's only 10-20% faster, given the benchmarks and the hardware facts about shader/ROP counts. I don't understand how or why anyone thinks it could be any faster. And there's no way an extra 14GB of VRAM costs $850 ($1500 - $650), so don't anyone say the price is justified by the extra VRAM.
14GB doesn't cost that much, but it lets the card punch up next to expensive Quadro units in certain workloads. To avoid cannibalizing those sales, they increase the price.
It'll probably be a lot more once it's overclocked; I reckon it's conservative due to temps and power. The Kingpin 2080 Ti was 30% faster than a standard 2080 Ti because they removed the power limits and ramped up the overclocks on it. A Kingpin 3090 will be a lot faster than this.
Huggi:

So I guess that means little chance of a 3080 Ti/Super variant since there's not enough performance difference to slot in another SKU between the two. My guess would be that the 20GB version of the 3080 is what will occupy the price gap between the standard 3080 and the 3090.
There still might be one, but only with more VRAM (and not necessarily with more memory bandwidth, but maybe) and maybe higher clock speeds. I've mentioned before that there's a large market out there demanding more VRAM (regardless of whether it is actually needed), so if Nvidia is smart, they'll tap into it.
lukas_1987_dion:

More VRAM is not about having more performance, just fewer stutters and smoother gameplay, as the game can use much more VRAM for caching, which is much faster than system RAM.
Yes and no - the stuttering happens when the game needs to unload a buffer/cache and swap in new assets. But if a game actually demands more VRAM than your GPU can supply (in other words, there's not even room for a buffer), then it will need to mooch off your system memory via the PCIe bus. This can cause a dramatic loss in performance, depending on how much more it needs.
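To put rough numbers on why that spillover hurts, compare the theoretical peak bandwidths involved (spec-sheet figures; real-world throughput is lower on both sides):

    # Local GDDR6X vs. fetching assets across PCIe, theoretical peaks.
    gddr6x_gb_s    = 760   # RTX 3080 memory bandwidth, spec sheet
    pcie4_x16_gb_s = 32    # PCIe 4.0 x16, one direction, nominal

    slowdown = gddr6x_gb_s / pcie4_x16_gb_s
    print(f"Local VRAM is ~{slowdown:.0f}x faster than going over PCIe")  # ~24x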
geogan:

It's funny that people don't believe it's only 10-20% faster, given the benchmarks and the hardware facts about shader/ROP counts. I don't understand how or why anyone thinks it could be any faster.
I expected it to be a little faster (up to 5% faster), but it's obvious that this GPU was not built with 1080p in mind. It seems to fare much better in 4K due to the extra cores and memory bandwidth.
Basically it would mean that if you're not overclocking the 3090, you could get exactly the same performance buying a custom factory-overclocked 3080 costing $600 less. Lower-tier 3090, the worst deal ever? Really wondering when the $1000-1200 3080 Ti (naming has been confirmed by a Gigabyte leak, after all) is going to be released/announced. When is the next Nvidia event/conference?
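Putting numbers on that: if you take the leaked ~10% gap and the $699/$1499 launch MSRPs at face value, each extra percent of performance from the 3090 runs roughly $80, and a factory-overclocked 3080 would narrow the gap further:

    # Rough cost per percent of extra performance, using leaked numbers.
    price         = {"RTX 3080": 699,  "RTX 3090": 1499}   # launch MSRPs, USD
    relative_perf = {"RTX 3080": 1.00, "RTX 3090": 1.10}   # leaked ~10% gap

    extra_cost = price["RTX 3090"] - price["RTX 3080"]      # $800
    extra_perf = (relative_perf["RTX 3090"] - 1.0) * 100    # 10 points
    print(f"${extra_cost / extra_perf:.0f} per extra percent of performance")  # $80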
XenthorX:

Basically it would mean that if you're not overclocking the 3090, you could get exactly the same performance buying a custom factory-overclocked 3080 costing $600 less. Lower-tier 3090, the worst deal ever?
Kinda no different than the Titans, honestly. It's more or less a branding problem. Idk, I'm convinced Nvidia pulled a last-minute switcheroo with these cards. I think the 3080 was originally supposed to be a 3080 Ti, the 3070 was supposed to be the 3080, and the 3090 was supposed to be a Titan priced at $2000-3000. The entire series makes more sense when you think about it this way. Why is the 3080 a GA102 and not a GA104? Why does the '3090' even exist? I think Nvidia had these things in production, realized AMD was going to be more competitive than they first imagined, and then shifted everything around.