Priced at $479 USD, AMD released its mainstream-to-high-end Radeon RX 6700 XT, a product meant to battle the RTX 3060 Ti and RTX 3070 from team green. Armed with 12 GB of graphics memory, will it offer enough performance for a graphics card with such a hefty price tag?
Fox2232
Senior Member
Posts: 11809
Posted on: 03/29/2021 11:57 AM
This is a different kind of fish. How good or bad 6c/6t is, is another matter. The question is how much of a performance hit there is across a stack of CPUs, and that's a question HUB has not answered. Maybe it gave other tech journalists a clue to check that out, though.
6c/6t was never an option for me, and frankly neither was 8c/8t. I only got 6c/12t because it was cheap at the time, but the plan was to get 8c/16t from the very beginning once they dropped in price. Going with Intel turned out better than AMD: I paid less for a 10500 than for a 3600 and got a faster CPU, and the same goes for the 10700F vs. the 3700X.
I'd like to see an example of a current midrange CPU tested, e.g. a 10400.
And let's not kid ourselves: a review that found no difference would earn HUB very few clicks, and one that put Nvidia in any way over AMD would straight up set their comment section on fire and do tremendous damage. YT channels are not like tech sites. If you want to make money, you have to find the right people to subscribe, with content suited to them. That's why when I watch a video on human evolution, the next thing they suggest is not Genesis, to show me all kinds of different perspectives.
The possibility of not finding a difference was meant for other sites, because the article/video that did find a difference is already out there. So the moment someone disputes it on one or more HW configurations, there is controversy. (Worth a lot of clicks for both sides.)
And as for AMD vs. Intel CPUs, that is not really relevant here. If you can get a better CPU from Intel for your use case, great for you.
But this AMD vs. Nvidia effect has been shown on both Intel and AMD CPUs.
And in places it likely shows the half-rate AVX2 performance of Zen 1: the Ryzen 5 1400 manages to tank badly.
For testing, I would go a different route: an 8C/16T CPU from both AMD and Intel. I would set a baseline at a stable 4 GHz for both, then exaggerate the problem by clocking them at 3/3.5 GHz, "fix" it by going to 4.5 GHz on both, and finally run 5 GHz on all cores, which is the benchmarking standard for tech sites.
That way the same setup would demonstrate the difference between tech-site testing and the average Joe's PC.
And then it is no problem to disable cores or SMT to show the effect of each. Maybe Nvidia's problem is not entirely in core count, but in SMT optimizations that lose their effect once SMT is not available.
(Can't really say until there is proper side-by-side testing. They pointed a shotgun in the general direction of the problem and did hit something, but for now it is not clearly defined, and the effect of many variables remains unclear.)
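As a rough illustration of the disable-SMT part of that test plan, here is a minimal Python sketch (using the third-party psutil package, and assuming the common layout where the two SMT siblings of a core get adjacent logical-CPU indices, which is typical but not guaranteed) that pins a running game or benchmark to one logical CPU per physical core:

```python
# Minimal sketch: approximate "SMT off" for one process by pinning it to a
# single logical CPU per physical core. Assumes SMT siblings are enumerated
# as adjacent logical CPUs (0/1, 2/3, ...), which is typical but not guaranteed.
import sys
import psutil

def pin_to_physical_cores(pid: int) -> list[int]:
    physical = psutil.cpu_count(logical=False)   # e.g. 8 cores
    logical = psutil.cpu_count(logical=True)     # e.g. 16 threads with SMT
    step = max(1, logical // physical)           # 2 with SMT, 1 without
    mask = list(range(0, logical, step))         # one logical CPU per core
    psutil.Process(pid).cpu_affinity(mask)       # restrict scheduling to those CPUs
    return mask

if __name__ == "__main__":
    pid = int(sys.argv[1])                       # PID of the game/benchmark process
    print("pinned to logical CPUs:", pin_to_physical_cores(pid))
```

This only approximates SMT off for that one process; toggling SMT in the BIOS is still the cleaner test.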
- - - -
Years ago, when I had a 2500K and a Fury X, there was talk about bottlenecks, and about some games exhibiting them a lot while others didn't. AvP 2010 was one that did not really care about the CPU. I clocked the 2500K from 4.5 GHz down to 2 GHz in 500 MHz steps, and the last test I did at 1.6 GHz.
But with many games it was not good anymore; 4C/4T became only just good enough for 60 fps gaming, and today in some cases not even that. I moved to a 2700X because Vermintide 2 was stuttering like hell and required the fps limiter to be set quite low.
But that was a CPU bottleneck due to the game's requirements. And the thing is, that issue was caused by game threads using all available CPU resources and choking everything (GPU driver included) in the background.
I do wonder if Vermintide 2's situation would be worse with an Nvidia GPU. But the game's benchmark has some randomness due to AI. (The user can set "worker threads" anywhere from 1 to the number of logical cores minus 2.)
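For reference, that upper bound is just the logical-core count minus two; a hypothetical settings clamp (illustrative names, not Vermintide 2's actual code) could look like this:

```python
# Illustrative only: clamp a user-supplied worker-thread count to the
# 1 .. logical_cores - 2 range described above (names are hypothetical,
# not taken from Vermintide 2's actual configuration code).
import os

def clamp_worker_threads(requested: int) -> int:
    logical_cores = os.cpu_count() or 2      # fall back if undetectable
    upper = max(1, logical_cores - 2)        # leave headroom for the game's other threads
    return min(max(1, requested), upper)

print(clamp_worker_threads(12))  # 12 on a 16-thread CPU, 6 on an 8-thread CPU
```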
PrMinisterGR
Senior Member
Posts: 7975
Posted on: 03/29/2021 12:08 PM
Inb4 Nvidia disables it, performance sucks, and then everyone is b*tching even worse.
And still NOBODY has tested Threaded Optimization.
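For anyone who does want to test it: on Linux the NVIDIA driver exposes the toggle as the __GL_THREADED_OPTIMIZATIONS environment variable (on Windows it is the "Threaded optimization" switch in the control panel), so a quick A/B run of the same benchmark could be scripted roughly like the sketch below, where ./game_benchmark is a placeholder for whatever emits an fps number or score:

```python
# Rough A/B sketch: run the same benchmark with NVIDIA's threaded optimization
# forced off and then on via __GL_THREADED_OPTIMIZATIONS (Linux OpenGL driver).
# "./game_benchmark --bench" is a placeholder command, not a real tool.
import os
import subprocess

def run_benchmark(threaded_opt: str) -> None:
    env = os.environ.copy()
    env["__GL_THREADED_OPTIMIZATIONS"] = threaded_opt
    print(f"--- __GL_THREADED_OPTIMIZATIONS={threaded_opt} ---")
    subprocess.run(["./game_benchmark", "--bench"], env=env, check=True)

for setting in ("0", "1"):  # 0 = off, 1 = on
    run_benchmark(setting)
```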
cucaulay malkin
Senior Member
Posts: 4568
Posted on: 03/29/2021 12:34 PM
The possibility of not finding a difference was meant for other sites, because the article/video that did find a difference is already out there. So the moment someone disputes it on one or more HW configurations, there is controversy. (Worth a lot of clicks for both sides.)
And as for AMD vs. Intel CPUs, that is not really relevant here. If you can get a better CPU from Intel for your use case, great for you.
But this AMD vs. Nvidia effect has been shown on both Intel and AMD CPUs.
And in places it likely shows the half-rate AVX2 performance of Zen 1: the Ryzen 5 1400 manages to tank badly.
For testing, I would go a different route: an 8C/16T CPU from both AMD and Intel. I would set a baseline at a stable 4 GHz for both, then exaggerate the problem by clocking them at 3/3.5 GHz, "fix" it by going to 4.5 GHz on both, and finally run 5 GHz on all cores, which is the benchmarking standard for tech sites.
That way the same setup would demonstrate the difference between tech-site testing and the average Joe's PC.
And then it is no problem to disable cores or SMT to show the effect of each. Maybe Nvidia's problem is not entirely in core count, but in SMT optimizations that lose their effect once SMT is not available.
(Can't really say until there is proper side-by-side testing. They pointed a shotgun in the general direction of the problem and did hit something, but for now it is not clearly defined, and the effect of many variables remains unclear.)
- - - -
Years ago, when I had a 2500K and a Fury X, there was talk about bottlenecks, and about some games exhibiting them a lot while others didn't. AvP 2010 was one that did not really care about the CPU. I clocked the 2500K from 4.5 GHz down to 2 GHz in 500 MHz steps, and the last test I did at 1.6 GHz.
But with many games it was not good anymore; 4C/4T became only just good enough for 60 fps gaming, and today in some cases not even that. I moved to a 2700X because Vermintide 2 was stuttering like hell and required the fps limiter to be set quite low.
But that was a CPU bottleneck due to the game's requirements. And the thing is, that issue was caused by game threads using all available CPU resources and choking everything (GPU driver included) in the background.
I do wonder if Vermintide 2's situation would be worse with an Nvidia GPU. But the game's benchmark has some randomness due to AI. (The user can set "worker threads" anywhere from 1 to the number of logical cores minus 2.)
Same for me: a 3570K OC'd to 4.7 GHz, 8 GB of dual-channel DDR3-2200 CL11 RAM, and an R9 290 Tri-X running at 1150 MHz.
Far Cry 4 was unplayable as soon as you came near any outpost or settlement; out in the open it was smooth sailing. There wasn't an option to enable dynamic vsync either, and hacks didn't work.
And mind you, the 3570K was one of the fastest gaming CPUs back in 2014; it wasn't an entry-level one like a stock 9400 or a Ryzen 1400.
I got so pissed that I splurged on a GTX 980 and never looked back. It looks like it may come full circle now with Ampere, but first I gotta get that 3080 I'm waiting for.
Fox2232
Senior Member
Posts: 11809
Posted on: 03/29/2021 12:39 PM
Same for me: a 3570K OC'd to 4.7 GHz, 8 GB of dual-channel DDR3-2200 CL11 RAM, and an R9 290 Tri-X running at 1150 MHz.
Far Cry 4 was unplayable as soon as you came near any outpost or settlement; out in the open it was smooth sailing.
And mind you, the 3570K was one of the fastest gaming CPUs back in 2014; it wasn't an entry-level one like a stock 9400 or a Ryzen 1400.
Yeah, Ivy had quite a few advantages over Sandy. Faster memory helped a lot, and PCIe 3.0 made some difference too.
IPC-wise, with the same memory configuration they would be the same, but the improved IMC made a big difference.
Before Sandy, I had an i7-720QM. It was not a good time for HT; even people with desktop i7s disabled it to get a higher OC. But threads became important for gaming over time.
Fox2232
Senior Member
Posts: 11809
As a quick aside (and related), I'm in a queue for a 3080 GPU, and I'm planning to use it with my 6700K, which runs at 4.69 GHz with 16 GB (2 sticks) of DDR3 RAM at 3233 MHz (14-15-15-32-240-1T, dual rank). I was/am concerned that I'll get lower fps in CPU-limited games with my prospective 3080 than with my existing GTX 1070; what do you reckon? (I included my RAM details because dual-rank RAM combined with the quite tight timings I have has proven to boost CPU performance in games.) I haven't looked into it in enough detail to know the answer, but I can see you have investigated this topic, so I figured I'd get your viewpoint. I've got a 180 Hz G-Sync monitor, so in my case I'm aiming for a stable 171 fps in games; for example, I can keep that pretty much constant in BF1 except on the Amiens map.
(As it stands I probably won't ever get a 3080, as I believe my vendor will issue a refund rather than live with the loss they'd make, given I bought the GPU slightly above MSRP.)
It's not possible to get lower performance from a stronger GPU when comparing two Nvidia GPUs. At worst you'll see no improvement in some games.
The whole point is that when people have weaker CPUs, they are better off with an AMD GPU. (At least RDNA ones.)