3DMark Benchmarks for GeForce RTX 3070 surface

Robbo9999:

So the 3080 is about 30% faster than the 3070 whilst having 50% more cores, hmm! Doesn't seem quite right to me, although I did hear somewhere that something about this architecture means it doesn't scale particularly well with increased cores. I bought a 3080 (still in the preorder queue) as I was sure it was gonna be better value: 50% more cores for less than 50% more money... if these benchmarks hold up, then this might not have been such a great deal for me.
I don't doubt this is legit. Look at the 3090 compared to the 3080. As the cores scale up, the performance is not scaling linearly. Not sure why, but it looks like a 3070 Ti with a core count between the 3070 and 3080 may be the sweet spot.
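
As a rough sanity check on that scaling argument, here's a minimal Python sketch. The core counts are Nvidia's official CUDA core specs; the performance deltas are only the ballpark figures being discussed in this thread (the 3090 number is an assumption for illustration, not a measurement):

# Back-of-envelope core-count scaling check.
CORES = {"3070": 5888, "3080": 8704, "3090": 10496}  # official CUDA core counts
PERF = {"3070": 1.00, "3080": 1.30, "3090": 1.45}    # assumed, relative to the 3070

for gpu in ("3080", "3090"):
    core_ratio = CORES[gpu] / CORES["3070"]
    perf_ratio = PERF[gpu] / PERF["3070"]
    efficiency = perf_ratio / core_ratio  # 1.00 would be perfectly linear scaling
    print(f"{gpu}: {core_ratio - 1:+.0%} cores, {perf_ratio - 1:+.0%} perf, "
          f"scaling efficiency {efficiency:.2f}")

With those assumptions, the 3080 turns roughly 48% more cores into 30% more performance (efficiency about 0.88), and the 3090 fares worse still, which is the sub-linear scaling being described here.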
Denial:

We've known about AMD's hardware RT implementation for a while. https://media.discordapp.net/attachments/660560143162802196/744951794400100362/fqvK7bgMNGxQdNKNnHKZHQ-1366-80.png?width=804&height=452 https://www.freepatentsonline.com/20190197761.pdf They utilize the texture units/shaders for RT. They don't have dedicated hardware for it, but it's also not "software". Igor's results are realistic in the sense that it will probably have a bigger hit to performance in RT workloads, but whether those graphs are real or fake is up for debate.
It will be interesting to see. AMD's rumored 80 CU card should have enough extra cores for most workloads. It might be like Nvidia's first generation, where the lower-end cards don't fare that well at RT but the highest-end SKU is pretty decent.
Faster than nVidia and at lower power, from what I'm hearing...
First time in a long time we've seen some possibly decent competition. I was really looking forward to the 3070 (it releases on my birthday, even), but I'm in no rush; I really want to see how AMD can compete, even if it performs below the top of last gen but is still priced competitively.
Fox2232:

The Ashes test shows the 3080 as 13/16/20% faster than the 3070.
In Time Spy, mean graphics (fps) results for the 3080 are 26/28% higher than for the 3070.
In Time Spy Extreme, mean graphics (fps) results for the 3080 are 23/23% higher than for the 3070.
In Fire Strike Extreme, mean graphics (fps) results for the 3080 are 28/18% higher than for the 3070.
In Fire Strike Ultra, mean graphics (fps) results for the 3080 are 29/18% higher than for the 3070.
From the mean results I picked only those that used Intel's top CPUs. (3DMark is a synthetic test with a rather small memory footprint.)
Yeah, I went with the best case when I said 30%, as I calculated a few of them and realised Time Spy was near the top of the pile at 28%, so I rounded up to 30%! πŸ˜€ Given that this is a rumour/leak, and given the 3080 has 50% more cores, I also felt 30% might be more realistic for the performance difference than the lower figures. We'll see, maybe it will only be 20-30% faster.
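
For reference, the "X% faster" figures being traded here reduce to a simple percent delta between mean graphics scores; a minimal sketch, with made-up placeholder scores rather than any leaked data:

def pct_faster(a: float, b: float) -> float:
    """Return how much faster score a is than score b, in percent."""
    return (a / b - 1.0) * 100.0

# Hypothetical Time Spy mean graphics scores, purely illustrative:
score_3080 = 17500
score_3070 = 13800
print(f"3080 vs 3070: {pct_faster(score_3080, score_3070):.0f}% faster")  # -> 27%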
Silva:

No great deal comes from pre-ordering something; people haven't learned that yet.
I really don't have a problem with my pre-order. I got in before the final price rises, as I knew it was only gonna go up in price, and I knew it was the card I wanted, since AMD is not an option with my G-Sync monitor... it's right at the highest limit of what I would spend on a graphics card though, so I'm not so pleased about that. I also chose the Asus TUF version (non-OC), which turned out to be a wise choice as it's one of the best ones, certainly for performance/build/value/noise/temperature; I was influenced by the Guru3D review (yep, I know it was a review of the OC version).
JamesSneed:

I don't doubt this is legit. Look at the 3090 compared to the 3080. As the cores scale up, the performance is not scaling linearly. Not sure why, but it looks like a 3070 Ti with a core count between the 3070 and 3080 may be the sweet spot.
With previous generations, like Kepler for instance, I think I remember doing some performance calculations on core count, and it scaled almost linearly with increased core count, as long as there were no power limitations... I figured it would be the same with this Ampere generation as long as the 3080 wasn't power limited. At launch we saw the 3080 is power limited in a lot of scenarios, but a vBIOS with "unlimited power" didn't net much of a performance increase for the 3080, so I don't think the 3080 is significantly power limited. Is the 3090 more power limited than the 3080? Maybe that's why 3090 performance didn't scale linearly with increased cores vs the 3080.

Although thinking about it, it was a bit of a red herring that the 3080 has 50% more cores than the 3070 yet didn't cost 50% more than the 3070... it could have been too good to be true for the 3080 to be better performance value than the 3070. That was one of the main reasons I chose the 3080 over the 3070, but hell, even if it only turns out to be 20-30% faster than the 3070 I'll still stand by my decision, as I won't be upgrading this card for a very long time. And I'm certainly very happy with the performance level of the 3080 in absolute terms; that doesn't change.
Supertribble:

Is this real?
The image is real. Whether the results are real, who knows, but I doubt Igor is going to upload stuff without it being at least semi-accurate, and it seems to hold true based on other leakers on Twitter.
ACEB:

The image is real. Whether the results are real, who knows, but I doubt Igor is going to upload stuff without it being at least semi-accurate, and it seems to hold true based on other leakers on Twitter.
The Time Spy numbers are suspect. The 3080 is significantly faster than a 2080 Ti in Time Spy, but his graph makes it look like 10%.
Denial:

We've known about AMD's hardware RT implementation for a while. https://www.freepatentsonline.com/20190197761.pdf They utilize the texture units/shaders for RT. They don't have dedicated hardware for it, but it's also not "software". Igor's results are realistic in the sense that it will probably have a bigger hit to performance in RT workloads, but whether those graphs are real or fake is up for debate.
The performance leaks seem to be in line with the architecture, I suppose; by really going for that rasterized performance crown, they'll take a larger RT performance hit.
Hilbert Hagedoorn:


[image: Untitled-2.png]
Ah, graphics scores. Thanks, I missed that (clearly).
Robbo9999:

Yeah, I went with the best case when I said 30% [...] Is the 3090 more power limited than the 3080? Maybe that's why 3090 performance didn't scale linearly with increased cores vs the 3080. [...] even if it only turns out to be 20-30% faster than the 3070 I'll still stand by my decision, as I won't be upgrading this card for a very long time.
I hear you, I wasn't trying to say the 3080 or 3090 are bad cards, but there is certainly a bottleneck, as the performance isn't scaling up with cores. This does leave a little more wiggle room for AMD to be competitive at the high end, that is, if AMD's cards scale up with cores and frequency. I'm happy Nvidia didn't completely knock it out of the park, because we all really need AMD competitive again. I very much want to see them leapfrogging each other every generation.
JamesSneed:

I hear you, I wasn't trying to say the 3080 or 3090 are bad cards, but there is certainly a bottleneck, as the performance isn't scaling up with cores. This does leave a little more wiggle room for AMD to be competitive at the high end, that is, if AMD's cards scale up with cores and frequency. I'm happy Nvidia didn't completely knock it out of the park, because we all really need AMD competitive again. I very much want to see them leapfrogging each other every generation.
We're not gonna know for certain whether Nvidia knocked it out of the park until AMD launches, but it's unlikely they have. But, but, but, Nvidia did kinda knock it out of the park in terms of performance improvement over Turing for the same money, due to the large percentage increase in performance over the prior cards, e.g. comparing the 3080 vs the 2080, so that's still a knockout, which is why people went mad for this generation of Nvidia cards... but yeah, it seems like AMD are gonna be on par or pretty close to what Nvidia has presented with this 3000 series generation.
The 3070 is the marker of Nvidia's success or failure with Ampere. The uArch is elegant, and if I may add, the availability of the 3070 is tied to the yield rate of the full-fat Ampere. I suspect there are already plenty of (poorly) binned Ampere GPUs waiting for a 3070 designation, given the supply issues with the 3090/80. In any case, this was always going to be expensive, and Nvidia thought it could deal with the demand (though maybe it went too far in creating it?).
Interesting... my 2080 Ti is doing 42 fps in the Port Royal test and 44.5 fps after OC. Nvidia said the 3070 is slightly faster than the 2080 Ti, but we'll see.
tunejunky:

and if I may add, the availability of the 3070 is tied to the yield rate of the full-fat Ampere
There is no relation whatsoever between GA100 and GA104.
Neat. So is more than one shipment of the 3070s planned or are they also going to be available exclusively via scalpers on eBay like the 3080s and 3090s? Inb4 "iT's nOt HArD tO gEt ONe yOu JusT hAvE tO kEeP hiTtInG f5 24/7 tiLl u SeE iT"
Robbo9999:

We're not gonna know for certain whether Nvidia knocked it out of the park until AMD launches, but it's unlikely they have. But, but, but, Nvidia did kinda knock it out of the park in terms of performance improvement over Turing for the same money, due to the large percentage increase in performance over the prior cards, e.g. comparing the 3080 vs the 2080, so that's still a knockout, which is why people went mad for this generation of Nvidia cards... but yeah, it seems like AMD are gonna be on par or pretty close to what Nvidia has presented with this 3000 series generation.
Yeah, I'm not picking a side, I just don't hold either near and dear to my heart. I'm not extremely impressed with Nvidia this round, but it is considerably better than the 2xxx generation, which was terrible if you factored in cost. The GTX 980 Ti was 75% more performance per dollar over the GTX 780, the GTX 1080 Ti was 77% more performance per dollar over the GTX 980 Ti, the 2080 Ti was -32% performance per dollar compared to the GTX 1080 Ti, and finally the 3090 is 21% more performance per dollar over the 2080 Ti. So it's better; it's just that we are all paying a very large premium, compared to the GTX 980 Ti and GTX 1080 Ti days, due to AMD's lack of competition.
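
Those performance-per-dollar comparisons reduce to a simple ratio: value = performance / price, and the generational change is new_value / old_value - 1. A minimal sketch with placeholder numbers, not the actual data behind the figures quoted above:

def value_change(old_perf, old_price, new_perf, new_price):
    # Percent change in performance per dollar between two cards.
    return ((new_perf / new_price) / (old_perf / old_price) - 1.0) * 100.0

# Hypothetical: a card 50% faster for 30% more money
print(f"{value_change(100, 500, 150, 650):+.0f}% perf per dollar")  # -> +15%

This is why a card can be much faster yet a worse (or only slightly better) value, as happened with the 2080 Ti.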
Denial:

We've known about AMD's hardware RT implementation for a while. https://media.discordapp.net/attachments/660560143162802196/744951794400100362/fqvK7bgMNGxQdNKNnHKZHQ-1366-80.png?width=804&height=452 https://www.freepatentsonline.com/20190197761.pdf They utilize the texture units/shaders for RT. They don't have dedicated hardware for it, but it's also not "software". Igor's results are realistic in the sense that it will probably have a bigger hit to performance in RT workloads, but whether those graphs are real or fake is up for debate.
So the entire GPU can work as RT hardware? That might be a better approach in the long run, with lots of flexibility in how to dynamically allocate resources, unless Nvidia's dedicated RT core is faster than AMD's entire chip for RT tasks.
EspHack:

So the entire GPU can work as RT hardware? That might be a better approach in the long run, with lots of flexibility in how to dynamically allocate resources, unless Nvidia's dedicated RT core is faster than AMD's entire chip for RT tasks.
Benchmarks of Big Navi posted on the forum show it trading blows with the 3080, but with RT enabled it trades with the 2080 Ti, beating it by a small margin.
Undying:

Available on October 29th, 2021. πŸ˜€
If ever... :D It is clear to me from all the reviews, benchmarks, etc. that these Ampere cards are so damn fast that you can't even catch them in stores.
Considering my non-OC RTX 2080 Ti could manage 9574 points in Port Royal using an i7-4770K (lol), or 10,250 with my i7-10700K, and all the other benchmark scores are behind by a decent margin, it's a bit odd if those 3070 scores are correct. Even the mean scores of the RTX 3080 are only 5% off the RTX 2080 Ti's... https://www.userbenchmark.com/UserRun/34598063

As for pricing, I paid $1200 AUD for mine new back in Sept 2018, and the 3080s are going for a fair bit more over here in Aus. The RRP sure didn't translate. I ain't no old-card fanboy, as I am keen for an upgrade since I game at 4K with HDR and all the goodies on... but the comparisons Nvidia provided, using a 250W-limited 2080 Ti (mine is 338W), make it seem almost fraudulent to announce in his kitchen that even the 3070 is faster. Also, that Doom Eternal comparison: I ran the same intro part where you meet the boss at 4K maxed, and even with HDR on (they didn't have it on), I was on average 20-30 fps faster than whatever 2080 Ti they used and within 1-15 fps of the 3080.

Leather Jacket Man, you have hurt some feels and some bros. I have friends who ordered the cards pre-release and still haven't got a 3080. 😏 Poor bastards.