Review: Hitman 2016: PC graphics performance benchmarks
We take a look at the all-new Hitman (2016) in our usual way, examining performance with the newest graphics cards and technologies. We test the game on the PC platform, focusing on graphics card performance with the latest AMD and NVIDIA drivers. Multiple graphics cards are tested and benchmarked.
Read the article right here.
Ieldra
Senior Member
Posts: 3490
Joined: 2007-01-27
#5266640 Posted on: 05/04/2016 02:13 AM
So a whopping 2.28 fps boost from a 1 GHz VRAM OC.
Imo, half of that is enough to have a slight edge over reference while still keeping the power limit and heat down.
Btw, that initial slowdown could be the VRAM buffering the scene; if the benchmark takes that into account, then it's a ****ty benchmark to begin with. Most benchmarks have a 4-5 second pre-buffer wait before they start measuring (sketched below).
I don't have that in DX11, though.
DX11 is almost 50% faster in the first scene... ridiculous. It probably owes me an extra 2 avg fps over the whole run.
Anyway, now that I've put the async thing to rest in two different games, can we all agree to just drop it for a while?
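For illustration, a minimal Python sketch of the pre-buffer idea described above: a harness that discards a warm-up window before it starts averaging, so initial VRAM/asset streaming doesn't drag the result down. This is a generic sketch, not how Hitman's built-in benchmark works; the `render_frame` callback and the 4-second window are assumptions.

```python
import time

WARMUP_SECONDS = 4.0  # assumed pre-buffer window, per the 4-5 s the post mentions

def run_benchmark(render_frame, duration=30.0):
    """Average fps over `duration` seconds, after a warm-up window.

    Frames rendered during the first WARMUP_SECONDS are discarded, so a
    slow first scene (VRAM still buffering) doesn't drag the average down.
    """
    start = time.perf_counter()
    measure_start = None
    frames = 0
    while True:
        render_frame()                     # draw one frame (stand-in callback)
        now = time.perf_counter()
        if now - start < WARMUP_SECONDS:
            continue                       # still pre-buffering: don't count
        if measure_start is None:
            measure_start = now            # warm-up over: start the clock
            continue
        frames += 1
        elapsed = now - measure_start
        if elapsed >= duration:
            return frames / elapsed

if __name__ == "__main__":
    # Fake 60 fps renderer as a quick self-test.
    fps = run_benchmark(lambda: time.sleep(1 / 60), duration=5.0)
    print(f"{fps:.1f} fps")
```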
PrMinisterGR
Senior Member
Posts: 7975
Joined: 2014-09-27
#5266645 Posted on: 05/04/2016 02:44 AM
hahahaha of course you would say that
It runs relatively badly here, doesn't scale well with overclocking, it looks like ass, plays like ass, has always-online DRM, the benchmark doesn't even ****ing show you a results screen, and DX12 takes 15 minutes to launch after the latest patch.
It doesn't run badly at 1440p, but I've seen far better-looking games running far better.
I was gifted the damn game to benchmark it, and now I think I'm gonna tell the guy to refund it.
On the other hand, you don't refute that it might be CPU bound. Even on the consoles things slow down in Sapienza, due to the amount of interaction you can have with the environment. Start killing people and creating piles and you'll see what I mean.
I don't have that in DX11, though.
DX11 is almost 50% faster in the first scene... ridiculous. It probably owes me an extra 2 avg fps over the whole run.
Anyway, now that I've put the async thing to rest in two different games, can we all agree to just drop it for a while?
The async for what? How is it put to rest? What? There's a combination of things that might be making the game run badly for you. The initial load you describe under DX12 sounds like shader compilation to me. Since that doesn't happen with Radeon cards, it's probably the NVIDIA driver. If DX11 is that much faster than DX12, that again points to the hardware/the NVIDIA driver. It wouldn't be the first time with a DX12 game.
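For what it's worth, the shader-compilation theory is easy to illustrate: DX12 titles compile their pipelines up front, and drivers typically key the resulting binaries in an on-disk cache, so the first launch after a patch recompiles everything while later launches hit the cache. Below is a minimal Python sketch of that caching pattern; the cache layout, `compile_shader` cost, and file format are all assumptions for illustration, not NVIDIA's actual implementation.

```python
import hashlib
import os
import time

CACHE_DIR = "shader_cache"  # hypothetical on-disk cache location

def compile_shader(source: str) -> bytes:
    """Stand-in for the driver's expensive shader -> GPU binary compile."""
    time.sleep(0.25)                      # simulate a slow per-shader compile
    return source.encode("utf-8")[::-1]   # dummy "binary"

def get_pipeline(source: str) -> bytes:
    """Return a compiled pipeline, using the on-disk cache when possible."""
    key = hashlib.sha256(source.encode("utf-8")).hexdigest()
    path = os.path.join(CACHE_DIR, key)
    if os.path.exists(path):              # cache hit: cheap disk read
        with open(path, "rb") as f:
            return f.read()
    binary = compile_shader(source)       # cache miss: expensive compile
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(path, "wb") as f:
        f.write(binary)
    return binary

if __name__ == "__main__":
    shaders = [f"shader_{i}" for i in range(8)]
    for run in (1, 2):
        start = time.perf_counter()
        for s in shaders:
            get_pipeline(s)
        print(f"run {run}: {time.perf_counter() - start:.2f}s")
    # Run 1 pays for every compile; run 2 is near-instant. A patch that
    # changes the shader sources changes every key, so the next launch
    # recompiles everything -- consistent with a slow launch after a patch.
```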
Ieldra
Senior Member
Posts: 3490
Joined: 2007-01-27
#5266650 Posted on: 05/04/2016 03:06 AM
The async for what? How is it put to rest? What? There's a combination of things that might be making the game run badly for you. The initial load you describe under DX12 sounds like shader compilation to me. Since that doesn't happen with Radeon cards, it's probably the NVIDIA driver. If DX11 is that much faster than DX12, that again points to the hardware/the NVIDIA driver. It wouldn't be the first time with a DX12 game.
You misunderstand me: I've put to rest the notion that async shaders give AMD an inherent advantage. That's 2/2 games so far that use them in which the Fury X still performs worse, or the same (AotS), despite async shaders and their supposed +10%.
Notice also that 1490 MHz × 2 × 2816 = 8.4 TFLOPS, so the Fury X actually has a ~2.3% advantage in shader throughput (worked out in the sketch below).
Anyway, I just tested 4K (1080p SSAA) and I get 43 fps at 1490/7000.
For reference
Screenshot or it didn't happen
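To make the throughput comparison above concrete, here is the arithmetic as a small Python sketch. The 1490 MHz / 2816-shader figures are the overclock quoted in the post (presumably a GTX 980 Ti, given the shader count); the Fury X figures (1050 MHz, 4096 shaders) are its stock specs.

```python
def fp32_tflops(clock_mhz: float, shaders: int) -> float:
    """Theoretical FP32 throughput: clock x 2 FLOPs (one FMA) per shader per cycle."""
    return clock_mhz * 1e6 * 2 * shaders / 1e12

card_oc = fp32_tflops(1490, 2816)  # the overclock from the post: ~8.39 TFLOPS
fury_x = fp32_tflops(1050, 4096)   # stock Fury X: ~8.60 TFLOPS

print(f"OC'd card: {card_oc:.2f} TFLOPS")
print(f"Fury X:    {fury_x:.2f} TFLOPS")
print(f"Fury X advantage: {(fury_x / card_oc - 1) * 100:.1f}%")
# Prints ~2.5%; the post's ~2.3% comes from rounding both figures first.
```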

Fox2232
Senior Member
Posts: 11809
Joined: 2012-07-20
#5266713 Posted on: 05/04/2016 09:09 AM
You misunderstand me: I've put to rest the notion that async shaders give AMD an inherent advantage. That's 2/2 games so far that use them in which the Fury X still performs worse, or the same (AotS), despite async shaders and their supposed +10%.
Notice also that 1490 MHz × 2 × 2816 = 8.4 TFLOPS, so the Fury X actually has a ~2.3% advantage in shader throughput.
Anyway, I just tested 4K (1080p SSAA) and I get 43 fps at 1490/7000.
For reference
Screenshot or it didn't happen
Why didn't you use DSR from 4K instead of those "mumbo jumbo, nobody knows what they do" SSAA settings?
Anyway, if you want to compare shader performance... do you remember our lonely effort on driver evaluation?
The Catzilla fur test results are probably the closest comparison for: "Notice also that 1490 MHz × 2 × 2816 = 8.4 TFLOPS, so the Fury X actually has a ~2.3% advantage in shader throughput."