ASRock Radeon RX 6750 XT Phantom review


That's right, the card starts to be very interesting at this price level: good performance, quiet, and looking pretty solid all around for 1440p gaming. It might be on my shopping list soon, as I don't think the next gen will arrive at these prices any time this year.
Lukart:

That's right, the card starts to be very interesting at this price level: good performance, quiet, and looking pretty solid all around for 1440p gaming. It might be on my shopping list soon, as I don't think the next gen will arrive at these prices any time this year.
I would argue it's completely uninteresting at this price level. It's basically just a finely-tuned 6700 XT (which was already too expensive for what it was) for a hefty $75 increase. I'd rather get a PS5. If a GPU by itself costs as much or more than a console, I expect it to offer a significantly better experience. The 6750 XT is better but not enough to justify its price IMO.
Honestly, at this price you can get a 6800 (non-XT) and be better off all the way around. In some markets the 6800 XT costs less than the 6800, so if that's the case, even better, as now you have 144 Hz @ 1440p.
In 2019 I bought the 5700 XT AE, chucked a waterblock on it, and keep it undervolted; only the RAM is mildly overclocked. In 2022, a $500-ish 6750 XT (heavily tuned) is barely 20% faster than my 5700 XT from three years ago at 2560x1440. Meanwhile, the 5700 XT idles at 8-10 W and the 5900X at 4.8-5 W, while the 6750 XT needs 17 W at idle! At full blast, one card needs 189 W and the other 268 W: 42% more power for 20% more FPS. Why do I think something is wrong here?
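For what it's worth, here is a quick sketch of the efficiency arithmetic implied by those figures (the wattages and the ~20% uplift are the numbers quoted above, taken at face value, not independent measurements):

```python
# Rough perf-per-watt comparison using the figures quoted above
# (tuned 5700 XT vs. tuned 6750 XT at 2560x1440).
old_power_w, new_power_w = 189, 268   # full-load board power, as quoted
relative_fps = 1.20                   # 6750 XT assumed ~20% faster

power_ratio = new_power_w / old_power_w           # ~1.42 -> 42% more power
perf_per_watt_ratio = relative_fps / power_ratio  # ~0.85 -> ~15% worse perf/W

print(f"Power increase: {power_ratio - 1:.0%}")
print(f"Relative performance per watt: {perf_per_watt_ratio:.2f}")
```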
schmidtbag:

I would argue it's completely uninteresting at this price level. It's basically just a finely-tuned 6700 XT (which was already too expensive for what it was) for a hefty $75 increase. I'd rather get a PS5. If a GPU by itself costs as much or more than a console, I expect it to offer a significantly better experience. The 6750 XT is better but not enough to justify its price IMO.
Then buy a PS5, because you were not going to buy this card anyway. This card is for anyone who is still rocking a 5600 XT or lower and didn't buy a 6600 XT or 6700 XT already. Even if I had a 5700 XT, I would buy it.
Fediuld:

In 2019 I bought the 5700 XT AE, chucked a waterblock on it, and keep it undervolted; only the RAM is mildly overclocked. In 2022, a $500-ish 6750 XT (heavily tuned) is barely 20% faster than my 5700 XT from three years ago at 2560x1440. Meanwhile, the 5700 XT idles at 8-10 W and the 5900X at 4.8-5 W, while the 6750 XT needs 17 W at idle! At full blast, one card needs 189 W and the other 268 W: 42% more power for 20% more FPS. Why do I think something is wrong here?
Coming from a 5700 XT to a 6800 XT myself, I'd say you are kind of whining. The 6700 XT was worth the jump from a 5700 XT at MSRP. I can tell you right now, no way in hell would I be undervolting with a waterblock; I ran my 5700 XT full tilt pulling 250 W. Either you live in a shitty place with insanely high electricity costs, or you are obsessed with power consumption numbers. My 5600X at 4.5 GHz with 1.192 V pulls 43 W idle, and my 6800 XT at stock settings pulls 35 W running a 3840x1080 144 Hz panel and a 1920x480 60 Hz panel. My monitor pulls 90 W, just to keep things in perspective.
Why does the memory bus width keep shrinking with each new generation of video cards?
Agonist:

Either you live in a shitty place with insanely high electricity costs, or you are obsessed with power consumption numbers.
The whole of Europe, except Russia and Belarus, is a shitty place now? 😀 Nah, the card performs better somehow.
KissSh0t:

Why does the memory bus width keep shrinking with each new generation of video cards?
To save costs: the holy grail of all business. GDDR capacity per chip keeps growing, as does the speed. So you need fewer chips for the same capacity as before, but thanks to the increased speed you can still get close to the same total bandwidth (and of course, if you use the same number of chips, you get more bandwidth). However, Nvidia and AMD have also worked on their compression and other algorithms, allowing the GPU to cope with less bandwidth for the same result. AMD recently started using an extra-large cache (Infinity Cache) to the same effect.

One physical thing to keep in mind is that the bus width per GDDR chip is 32 bits. So, when you look at a graphics card's total memory bus width (such as 128, 192, 256 or 384 bits), you can divide it by 32 to get the number of chips it has. If you have eight chips of 8 Gb (1 GB) each, you get a bus width of 256 bits and a total memory of 8 GB. Back in the day, when a single chip was only 4 Gb, you'd have needed 16 such chips for 8 GB of memory, which would have required a 512-bit memory bus on the card. That's expensive and complicated. Eight chips is much more comfortable for the manufacturers, though it only leaves a memory bus of 256 bits.
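To make that arithmetic concrete, here is a minimal sketch assuming the 32-bits-per-chip rule described above (the 12 GB / 192-bit case matches the 6750 XT's actual configuration; the other densities are just illustrative):

```python
# Minimal sketch of the bus-width arithmetic described above:
# each GDDR chip contributes 32 bits to the memory bus.
BITS_PER_CHIP = 32

def memory_config(total_gb: float, chip_density_gb: float):
    """Return (number of chips, total bus width in bits) for a given capacity."""
    chips = round(total_gb / chip_density_gb)
    return chips, chips * BITS_PER_CHIP

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bus width (bits) x data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

print(memory_config(12, 2))     # 12 GB from 16 Gb (2 GB) chips -> (6, 192), as on the 6750 XT
print(memory_config(8, 1))      # 8 GB from 8 Gb (1 GB) chips   -> (8, 256)
print(memory_config(8, 0.5))    # 8 GB from old 4 Gb chips      -> (16, 512)
print(bandwidth_gb_s(192, 18))  # 192-bit bus at 18 Gbps        -> 432.0 GB/s
```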
Agonist:

Then buy a PS5, because you were not going to buy this card anyway. This card is for anyone who is still rocking a 5600 XT or lower and didn't buy a 6600 XT or 6700 XT already. Even if I had a 5700 XT, I would buy it.
You're missing the point: the GPU is a bad value. If someone is rocking a 5600 XT, chances are they couldn't afford or justify something better. Either that, or they got it for power/thermal reasons. In any case, such a person will not be buying a GPU like this.
Fediuld:

In 2019 I bought the 5700 XT AE, chucked a waterblock on it, and keep it undervolted; only the RAM is mildly overclocked. In 2022, a $500-ish 6750 XT (heavily tuned) is barely 20% faster than my 5700 XT from three years ago at 2560x1440. Meanwhile, the 5700 XT idles at 8-10 W and the 5900X at 4.8-5 W, while the 6750 XT needs 17 W at idle! At full blast, one card needs 189 W and the other 268 W: 42% more power for 20% more FPS. Why do I think something is wrong here?
Stagnation. The 6x50 refresh has looked pointless to me. The performance increase isn't worth the price and power consumption that come with it.
Guru3D, why don't you use FSR on AMD cards, but only DLSS on Nvidia cards? Nvidia can use FSR 1.0 and 2.0 as well; it's open.
Bender82:

Guru3D, why don't you use FSR on AMD cards, but only DLSS on Nvidia cards? Nvidia can use FSR 1.0 and 2.0 as well; it's open.
I use these things on my 2400G and 5600G; with GPU prices still being stupidly high, APUs interest me more, and my GTX 1080 is still kicking it in my main system.
I got looking at the 6750 XTs and they're actually reasonably priced. I just ordered mine today (MSI's model) and got it for 512 USD delivered, with taxes (13%) and shipping included. Not a bad price if you ask me, and the cheapest 6700 XT was only $20 or $30 cheaper. I don't know why the 6700 XT even exists anymore.
schmidtbag:

It's basically just a finely-tuned 6700 XT (which was already too expensive for what it was).
Yes and no; in fact, it depends on the GPU board maker. Most 6700 XTs will never get close to the 6750 XT; from what I have seen, only the Sapphire and PowerColor ones do (and those have updated cooling systems). RDNA2 loves fast overclocked RAM more than a fast overclocked GPU... so this is that solution out of the box. ASRock is a bit pricey here, but most makers (MSI, Sapphire, PowerColor...) have not followed the suggested price for the new version and sell them at the same price (thank the RAM price drop 🙂).
Bender82:

Guru3D, why don't you use FSR on AMD cards, but only DLSS on Nvidia cards? Nvidia can use FSR 1.0 and 2.0 as well; it's open.
Maybe because Nvidia uses DLSS by default (a bit like using FreeSync when they have G-Sync)... Although I agree that it is fairer to compare on the same basis (even more so when it's open source), it will never happen, as GPU makers push their own things first...