AMD Announces RX 5000 Series Graphics Processors at Computex - Demos RX 5700

It actually was slightly faster, ~10%, than the 2070. That being said, Strange Brigade is heavily AMD-favored; this being a new architecture might change that, but until I see another game I'm going to assume the card they showed will be around 2070 performance in other titles. Still pretty cool that it's a new architecture, though.
Denial:

It actually was slightly faster, ~10%, than the 2070. That being said, Strange Brigade is heavily AMD-favored; this being a new architecture might change that, but until I see another game I'm going to assume the card they showed will be around 2070 performance in other titles. Still pretty cool that it's a new architecture, though.
What is interesting, if true, is the 1.5x better power consumption. Performance-wise, Vega was not that bad; the Vega 56 was just as good as the 1070. The main problems were power consumption and temperatures (and selling at the same price as Nvidia while releasing the card at a later date). If they can address the power consumption and release something in the neighbourhood of the 2080 and 2070 while undercutting Nvidia (and forcing them to adjust prices), then I'll be more than happy.
Vega 56 was better than the 1070 - that's entirely why Nvidia released the 1070 Ti. But yes, power consumption was pretty damn bad for Vega, even undervolted. I'm half irritated that I have to wait until E3 for actual architecture and model-specific details, but it's not that far away.
Didn't you guys see that PCIe 4.0 3DMark benchmark? 69% better performance over a 2080 Ti? That can't be right just from a bandwidth upgrade. Going from PCIe 2.0 to PCIe 3.0 wasn't much of a difference, definitely not 69%!
CPC_RedDawn:

Didn't you guys see that PCIe 4.0 3DMark benchmark? 69% better performance over a 2080 Ti? That can't be right just from a bandwidth upgrade. Going from PCIe 2.0 to PCIe 3.0 wasn't much of a difference, definitely not 69%!
It's a benchmark designed to saturate PCIe bandwidth... there aren't going to be any games in the next 10 years that come close to doing that. There have been synthetic examples of this with prior PCIe specs.
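For context, here is a rough sketch of theoretical x16 bandwidth per PCIe generation, computed from the published per-lane transfer rates and encoding overheads (illustrative spec numbers; real transfers achieve less, and games use only a small fraction of even PCIe 2.0's budget):

```python
# Theoretical per-direction bandwidth of an x16 link by PCIe generation,
# after line-encoding overhead. Spec-sheet numbers only.
generations = [
    ("PCIe 2.0", 5.0, 8 / 10),     # 5 GT/s per lane, 8b/10b encoding
    ("PCIe 3.0", 8.0, 128 / 130),  # 8 GT/s per lane, 128b/130b encoding
    ("PCIe 4.0", 16.0, 128 / 130), # 16 GT/s per lane, 128b/130b encoding
]
lanes = 16

for name, gt_s, efficiency in generations:
    # GT/s * efficiency = usable Gbit/s per lane; * lanes / 8 = GB/s total.
    gb_s = gt_s * efficiency * lanes / 8
    print(f"{name} x{lanes}: ~{gb_s:.1f} GB/s per direction")
# PCIe 2.0 x16: ~8.0 GB/s, PCIe 3.0 x16: ~15.8 GB/s, PCIe 4.0 x16: ~31.5 GB/s
```

The generational doubling only shows up in workloads that actually fill that pipe, which is exactly what a saturation benchmark is built to do and what a game's per-frame traffic does not.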
Denial:

It's a benchmark designed to saturate PCIe bandwidth... there aren't going to be any games in the next 10 years that come close to doing that. There have been synthetic examples of this with prior PCIe specs.
Yeah, I guessed as much; I wish companies would just stop running benchmarks that suit them. As soon as I saw Strange Brigade I was like "come on!!", it being massively AMD-favoured. However, is this yet another bit of AMD secret sauce? Maybe these chips will age like that good old AMD fine wine! :P
CPC_RedDawn:

Yeah, I guessed as much; I wish companies would just stop running benchmarks that suit them. As soon as I saw Strange Brigade I was like "come on!!", it being massively AMD-favoured. However, is this yet another bit of AMD secret sauce? Maybe these chips will age like that good old AMD fine wine! 😛
This being their first GPU on the new architecture, I expect some improvements with future drivers, even more so for ports of next-gen games.
CPC_RedDawn:

Didn't you guys see that PCIe 4.0 3DMark benchmark? 69% better performance over a 2080 Ti? That can't be right just from a bandwidth upgrade. Going from PCIe 2.0 to PCIe 3.0 wasn't much of a difference, definitely not 69%!
They could be testing the 2080 Ti's GDDR6 vs Navi's HBM for bandwidth.
UZ7:

They could be testing the 2080 Ti's GDDR6 vs Navi's HBM for bandwidth.
As far as I remember HBM was never mentioned for Navi, only GDDR6. Also the chip Lisa is holding doesn't appear to have HBM packages. Maybe we'll get Navi with HBM down the line, but it seems like the first run will be GDDR6.
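For a sense of scale behind the GDDR6 vs HBM question, peak memory bandwidth is just bus width times per-pin data rate. A minimal estimate using figures from shipped cards (the 2080 Ti and Radeon VII; Navi's memory configuration was unannounced at the time, so these are stand-ins, not Navi specs):

```python
# Peak memory bandwidth = bus width (bits) * per-pin data rate (Gbps) / 8.
# Figures are for shipped cards, quoted for scale; not Navi specs.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits * data_rate_gbps / 8

# RTX 2080 Ti: 352-bit GDDR6 at 14 Gbps per pin.
print(f"RTX 2080 Ti GDDR6: {bandwidth_gb_s(352, 14):.0f} GB/s")   # ~616 GB/s
# Radeon VII: 4096-bit HBM2 at 2 Gbps per pin.
print(f"Radeon VII HBM2:  {bandwidth_gb_s(4096, 2):.0f} GB/s")    # ~1024 GB/s
```

Note this is on-card memory bandwidth, separate from the PCIe link bandwidth the 3DMark test exercises, so a "bandwidth" headline can mean either.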
The new architecture will be interesting to learn about, and the efficiency sounds promising if it can deliver on watts and temps. The big question is always bang per buck. I'm guessing bargains are a thing of the past now, as it will all be positioned against Nvidia. Is this going to be GDDR6 and HBM depending on the model hierarchy?
Comparing the card in its best-case scenario, Strange Brigade... so in general it probably means it will fall somewhere between the RTX 2060 and RTX 2070...
kings:

Comparing the card in its best-case scenario, Strange Brigade... so in general it probably means it will fall somewhere between the RTX 2060 and RTX 2070...
They did run it at 1440p. Hilbert measured that the Radeon VII has just a 7.5% advantage over the RTX 2080 there. At that resolution, the R-VII's advantage is: 10% in BF5; -2.4% in SoTR; 5.4% in DX:MD; -3.8% in SW:BF2; -4.8% in FC5; -8.5% in F1 (2018); -17.7% in Destiny 2; -16.2% in TW3; -14% in KC:Deliverance; -17.7% in GTA5; -14% in SoW.

If you look at it, you can see a pattern of roughly equal performance in shader-heavy, technologically modern games. Why do you think the Radeon VII, which is actually much stronger in pure shader brute force, loses to the RTX 2080 so badly in some titles? It is due to geometry processing and rasterization limitations. While people have had trouble until today understanding what Navi is, it would be a good idea to think about AMD improving those two biggest weak points. That means one could expect the shown GPU to outperform the RTX 2070 at 1080p by a higher margin, as geometry performance is less important at 4K, and shading power is less important at lower resolutions.

From simple math I did in another thread, this shown GPU may well be a 2560-SP chip. In that case, consider this: the RX 580 is a total underdog among modern GPUs in terms of rasterization, though it has somewhat decent geometry processing. But it seems AMD may have doubled the front end and back end, which would put such a GPU at around 2.4 times the RX 580's performance in those two categories if we include the move from around 1400MHz to the 1700MHz range. That would put rasterization at around 2060 level and geometry 20% above the 2080, which could explain the 2070-level performance shown. And it means that at lower resolutions, Nvidia would no longer keep its advantage. (Apparently shading would not be the best, as AMD claims a 25% improvement and 2560 is 11% above 2304, plus 20% clock... 1.66x shading power at best above the RX 580.)

While they did show a 1440p comparison, I think they held back 1080p performance on purpose, as that may bring some wows at E3. But in the end, price point and power draw will be the decisive factors, and judging by Zen 2 pricing, I think we are not going to be disappointed.
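As a quick sanity check, the 1.66x shading figure in the post above does follow from its own stated inputs (a 2560-SP part, the RX 580's 2304 SPs, a ~20% clock bump, and AMD's claimed 25% per-clock improvement; only the RX 580's SP count is a confirmed spec):

```python
# Reproduces the shader-throughput estimate from the post above.
# All inputs except rx580_sp are the poster's assumptions, not Navi specs.
rx580_sp = 2304        # RX 580 stream processors (known)
navi_sp = 2560         # assumed Navi SP count
clock_scale = 1.20     # assumed ~20% clock bump (~1400 MHz -> ~1700 MHz)
perf_per_clock = 1.25  # AMD's claimed 25% per-clock improvement

shading_gain = (navi_sp / rx580_sp) * clock_scale * perf_per_clock
print(f"Estimated shading power vs RX 580: ~{shading_gain:.2f}x")  # ~1.67x
```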
Denial:

It's a benchmark designed to saturate PCIe bandwidth... there aren't going to be any games in the next 10 years that come close to doing that. There have been synthetic examples of this with prior PCIe specs.
With single GPUs*. PCIe 4.0 would very much be welcomed by SLI users.
Well, I think near-RTX 2070 performance for about 400€ would be pretty good for 1080p/1440p on a 144Hz FreeSync monitor. If that's the case, I can see myself replacing my RX 480. The RTX 2070 does have RT cores though, so I think anything above 420€ isn't justified if the Navi card can't deliver that 10% performance bump.
I wonder if Hilbert will need to make decisions about his general video card testing rig now that PCIe 4.0 is appearing in video cards. Of course, he will immediately see whether it's needed by comparing the same card in the old rig and a new Ryzen one, once he gets his hands on all the new tech.
Isn't the V64 already faster than the 2070 in that game?
The naming is an obvious reference to the successful HD 5000 series, which means Navi is going to excel.
Kaarme:

I wonder if Hilbert will need to make decisions about his general video card testing rig now that PCIe 4.0 is appearing in video cards. Of course, he will immediately see whether it's needed by comparing the same card in the old rig and a new Ryzen one, once he gets his hands on all the new tech.
I don't think PCIe 4.0 will give any advantage (it shouldn't, anyway). But yeah, it will need to be tested first to be 100% sure.
cowie:

Isn't the V64 already faster than the 2070 in that game?
To be fair, the 1080 is just a few fps away from a 2070 in that title at 1440p - close enough that it's almost within the error margin (6%). The 2070 is not an impressive card at all. I usually upgrade to a new x70 card every generation and sell the old one before it loses all its value; this is the first generation I've skipped since the 6xx series. I'm waiting for a price drop on the 2080 because the 2070 is just not worth it.
CPC_RedDawn:

Didn't you guys see that PCIe 4.0 3DMark benchmark? 69% better performance over a 2080 Ti? That can't be right just from a bandwidth upgrade. Going from PCIe 2.0 to PCIe 3.0 wasn't much of a difference, definitely not 69%!
PS5 says hi! Maybe it has everything to do with streaming capability between the NVMe storage and the GPU... We'll have to wait and see.