3DMark Time Spy Result shows Radeon RX 5700 XT close to GeForce RTX 2070

How is a GPU score of 8719 surpassing a score of 8901? Am I missing something?
Administrator
Crazy Joe:

How is a GPU score of 8719 surpassing a score of 8901? Am I missing something?
Whoops, I must have had another test result in mind πŸ˜‰
Grumpymangrumbling2019:

It'll be different in games, I think; these kinds of benchmarks are not indicative of gaming performance.
Yeah, because actual game benchmarks only tell you about that particular game's performance, or at most that particular engine's. When reading the long Guru3D reviews, you see AMD/Nvidia cards that are close to each other outperform or lose to one another purely depending on the game. Only true monsters like the 2080 Ti leave all other cards eating dust unequivocally. In other words, you only get a very rough general feeling from any one performance measurement (unless you are buying a card to play a specific title and find that title included in a review). They are still highly useful nonetheless, especially when you look at all of the tests together.
Hilbert Hagedoorn:

Whoops, I must have had another test result in mind πŸ˜‰
SPOILER ALERTTTTT
Hilbert Hagedoorn:

Whoops, I must have had another test result in mind πŸ˜‰
Hmmm, foreshadowing of another particular soon-to-be-released AMD product, perhaps?
Grumpymangrumbling2019:

So 1% faster than my Vega 64 LC @ +50% power, 1100 MHz HBM2 clock. Graphics Test 1: 57.85 fps. Graphics Test 2: 45.67 fps. It'll be different in games, I think; these kinds of benchmarks are not indicative of gaming performance.
And it costs $250 more and has much higher power consumption, I assume. 🙂
Goiur:

SPOILER ALERTTTTT
Possibly a Super spoiler?!? However, these numbers look more in line with the non-XT, though Nvidia has been doing well in Time Spy compared to AMD.
Grumpymangrumbling2019:

I set my power limit to -32%; the card then uses 180 watts, and I lose no more than single-digit performance. AMD sucks with their BIOS settings; they are willing to throw efficiency out the window for the sake of a few % gains. We'll soon see if they have listened to people over the last few years. People care about efficiency.
People don’t care about efficiency. They care about heat and noise. But those usually go hand in hand.
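As a rough sanity check on the figures quoted above, the trade-off works out to roughly a third more performance per watt. This is only a sketch built on back-calculated assumptions, not measured values: a -32% power limit landing at 180 W implies a stock draw of about 265 W, and "single digit" performance loss is taken as 9% worst case.

```python
# Rough perf-per-watt arithmetic for the figures quoted above.
# Assumptions (back-calculated, not from a spec sheet):
#  - 180 W at a -32% power limit implies stock draw of 180 / 0.68 W
#  - "single digit" performance loss is taken as 9% worst case

stock_power = 180 / (1 - 0.32)   # implied stock board power, ~265 W
perf_retained = 1 - 0.09         # 91% of stock performance kept

# Efficiency relative to stock: performance ratio / power ratio
efficiency_gain = perf_retained / (180 / stock_power)
print(f"Implied stock power: {stock_power:.0f} W")
print(f"Perf-per-watt vs stock: {efficiency_gain:.2f}x")
```

Even in the worst case of the quoted range, that is about a 1.34x efficiency improvement for a single power-slider change, which is the poster's point about how far past the efficiency knee the stock tuning sits.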
Grumpymangrumbling2019:

AMD sucks with their BIOS settings; they are willing to throw efficiency out the window for the sake of a few % gains.
AMD may have gone a bit overboard with stock voltage, but seeing as AMD is (or should be) aware that people are under-volting for the sake of "a few % gains", that seems to be going in the exact opposite direction of what you claim. What AMD cares about is guaranteed stability, because a GPU that thermal-throttles is better than a GPU that starts artifacting or suffers data corruption.
I own the paid-for version of 3DMark, and to tell the truth I often wonder why I bought it--probably latent guilt for using the software since it was called FutureMark, if not before, without buying it--using the demo version, etc.

It's just nuts to see people obsessing over what appear to me to be tiny differences in the numbers it spits out, as if there was some profound meaning there--if only I in my limited mentality were able to see it, of course. I confess I still can't see it. 200-point differences mean nothing to me--indeed, sometimes 2000-point differences can be very misleading--it depends on the benchmark, test conditions, etc.

There's just something about synthetic benchmarks that I find insulting at a basic level, not picking on 3DMark specifically--because I will still accuse the person who swears to me on a stack of Bibles that he can tell the difference between a game running at 90 fps and a game running at 100 fps of lying to me (and maybe himself)....;) Every time. I feel as though benchmarks are usually also lying to me the same way--attempting to manipulate my perceptions in some fashion towards a predetermined end. So even though I bought 3DMark, I don't really listen to 3DMark all that closely, if you know what I mean....!

I think what is significant about this 3DMark result is that it reassures us that Navi from AMD is not vaporware (as if maybe we weren't really sure about that). But apart from that, I don't see much from the numbers themselves that I would term "definitive."
I haven't even bothered looking at 3DMark results for the last few years. Only game fps matters.
Hmm, that might indicate the AMD-only slide showing the 5700 XT against Vega 56 is accurate, putting the 5700 XT closer to the Radeon VII at 2560x1440, and that at the "gaming" clock of around 1570 MHz, assuming the slide doesn't have some small print. God knows what it could do while maintaining the 1905 MHz clock, which would push the card well into RTX 2080 and Titan Xp performance territory. Assuming the AMD slide is valid and doesn't have "small print". One week to find out, I guess.
schmidtbag:

AMD may have gone a bit overboard with stock voltage, but seeing as AMD is (or should be) aware that people are under-volting for the sake of "a few % gains", that seems to be going in the exact opposite direction of what you claim. What AMD cares about is guaranteed stability, because a GPU that thermal-throttles is better than a GPU that starts artifacting or suffers data corruption.
I think we are about to see the same clock control method as in Zen. That may complicate things a bit, or more than a bit. But as long as voltage offset works, we will be fine. (Or as long as we have access to the voltage table.)
Grumpymangrumbling2019:

So 1% faster than my Vega 64 LC @ +50% power, 1100 MHz HBM2 clock. Graphics Test 1: 57.85 fps. Graphics Test 2: 45.67 fps. It'll be different in games, I think; these kinds of benchmarks are not indicative of gaming performance.
GCN cards have a huge issue with the way the Time Spy benchmark is written, since it uses a single execution path designed for Pascal (similar to Fire Strike) instead of the proper async compute, multi-threaded, close-to-the-metal execution that DX12 calls for. That is also why DX12 games that are written properly close to the metal, using different execution paths for Nvidia and AMD, show such a discrepancy between a card's Time Spy performance and its game benchmarks, with Nvidia's Pascal (and earlier) suffering compared to GCN-based cards in DX12 and especially Vulkan. If the blower 5700 XT with its 1755 MHz clock shows such performance gains in Time Spy, then with a bit of tweaking pushing it to a constant 1900 MHz boost clock, it would be faster than the Radeon VII and in the zone of the Titan Xp and 2080. The 2070 FE already operates at around 1905 MHz and cannot go much higher regardless of the chip.
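For what the clock claim above would imply, here is a naive linear-scaling estimate. It is a sketch under strong assumptions: GPU scores rarely scale linearly with core clock (memory bandwidth and power limits intervene), and taking 8719, the lower of the two GPU scores mentioned in this thread, as the 5700 XT's result is itself a guess.

```python
# Naive linear clock-scaling estimate for the claim above.
# Assumptions: 8719 is the 5700 XT's Time Spy GPU score (a guess),
# and the score scales 1:1 with core clock (a best-case upper bound).

base_clock = 1755     # MHz, blower 5700 XT clock quoted above
target_clock = 1900   # MHz, hypothetical sustained boost clock
base_score = 8719     # Time Spy GPU score mentioned in the thread

scaling = target_clock / base_clock
print(f"Clock uplift: {(scaling - 1) * 100:.1f}%")
print(f"Best-case scaled score: {base_score * scaling:.0f}")
```

An ~8% clock uplift would at best move the score by the same ~8%; whether that is enough to reach the cards named above depends entirely on how far real-world scaling falls short of linear.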
Stormyandcold:

I haven't even bothered looking at 3DMark results for the last few years. Only game fps matters.
Same here!
Grumpymangrumbling2019:

So 1% faster than my Vega 64 LC @ +50% power, 1100 MHz HBM2 clock. Graphics Test 1: 57.85 fps. Graphics Test 2: 45.67 fps. It'll be different in games, I think; these kinds of benchmarks are not indicative of gaming performance.
Fediuld:

GCN cards have a huge issue with the way the Time Spy benchmark is written, since it uses a single execution path designed for Pascal (similar to Fire Strike) instead of the proper async compute, multi-threaded, close-to-the-metal execution that DX12 calls for. That is also why DX12 games that are written properly close to the metal, using different execution paths for Nvidia and AMD, show such a discrepancy between a card's Time Spy performance and its game benchmarks, with Nvidia's Pascal (and earlier) suffering compared to GCN-based cards in DX12 and especially Vulkan. If the blower 5700 XT with its 1755 MHz clock shows such performance gains in Time Spy, then with a bit of tweaking pushing it to a constant 1900 MHz boost clock, it would be faster than the Radeon VII and in the zone of the Titan Xp and 2080. The 2070 FE already operates at around 1905 MHz and cannot go much higher regardless of the chip.
If I recall correctly, I think it boils down to Time Spy not sending enough data to the GCN architecture due to its use of context switching. But here is the original link: https://steamcommunity.com/app/223850/discussions/0/366298942110944664/
Eastcoasthandle:

If I recall correctly, I think it boils down to Time Spy not sending enough data to the GCN architecture due to its use of context switching. But here is the original link: https://steamcommunity.com/app/223850/discussions/0/366298942110944664/
https://www.overclock.net/forum/21-benchmarking-software-discussion/1606224-various-futuremark-s-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also.html#post_25358335 It's rather technical, but the bottom line is that DX12 requires games/software using the API to have close-to-the-metal optimization per architecture. Futuremark skipped this and tailored the test for Pascal, so it would also run on earlier Nvidia cards (Maxwell), which did not have proper DX12 support back in 2016. And of course Nvidia has no async compute with Pascal, let alone earlier architectures. So Time Spy's execution is no different from Fire Strike's, which is for DX11. Hence even Hawaii GPUs (290/290X, with proper async compute) are, six years later, giving a huge beating in DX12 games to all their Nvidia contemporaries (780 Ti, Titan Black) and to Maxwell-based cards.
In Fire Strike the 5700 XT beat even the 2070 Super, but in Time Spy it lost. Can't wait to see custom overclocked 5700s. All eyes should be on the non-XT version; I have a feeling it's gonna be the best bang for the buck.
Undying:

In Fire Strike the 5700 XT beat even the 2070 Super, but in Time Spy it lost. Can't wait to see custom overclocked 5700s. All eyes should be on the non-XT version; I have a feeling it's gonna be the best bang for the buck.
At $299 it already is without AIB coolers.
Fediuld:

https://www.overclock.net/forum/21-benchmarking-software-discussion/1606224-various-futuremark-s-time-spy-directx-12-benchmark-compromised-less-compute-parallelism-than-doom-aots-also.html#post_25358335 It's rather technical, but the bottom line is that DX12 requires games/software using the API to have close-to-the-metal optimization per architecture. Futuremark skipped this and tailored the test for Pascal, so it would also run on earlier Nvidia cards (Maxwell), which did not have proper DX12 support back in 2016. And of course Nvidia has no async compute with Pascal, let alone earlier architectures. So Time Spy's execution is no different from Fire Strike's, which is for DX11. Hence even Hawaii GPUs (290/290X, with proper async compute) are, six years later, giving a huge beating in DX12 games to all their Nvidia contemporaries (780 Ti, Titan Black) and to Maxwell-based cards.
IMHO, Time Spy deserves to be widely rebuked for this.