ASUS Republic of Gamers Announces Strix RX 480

To the user worried about memory: no, the 1060 is fine at what it is trying to do. Basically, you are getting a fancy 970 with a full set of RAM. If you go 480, you have to remember to sacrifice all those little things like GameWorks, SGSSAA, and the rest of it. At the level you'd be gaming at with a 1060, you will rarely ever be limited unless you really plan on keeping that card for a LONG time. In that case I wouldn't get either of these cards; I would shell out for a 1070. The simple fact is neither of these cards is all that great.
For a lot of people who don't have a big income, it can take up to two years to save up that kind of cash, and they're pushing cards to over $400 US, not counting shipping, so it's not a good option. Right now most people have three options. I'll just go for the RX 480 4GB, which I have enough saved up for right now to replace my HD 7870, or I could wait until around December, when I should have enough for the 8GB model or maybe even a GTX 1060. To me, GameWorks and SGSSAA aren't a big deal anyway.
The elephant in the room is that we can talk all day about both cards, but you pretty much cannot buy either. The buzzkill is the total lack of availability.
That's true; the GTX 1060 sold out really fast.
I'm thinking along the same lines. Both cards offer good value.
For me, FreeSync is an additional $100 of value for AMD cards. I can't imagine buying my next graphics card without adaptive sync technology.
For me, FreeSync is an additional $100 of value for AMD cards. I can't imagine buying my next graphics card without adaptive sync technology.
That's true. When you factor in the cost of a new adaptive-sync monitor, you start seeing a really big difference in cost: G-SYNC models run $600+, while a 27" 144Hz FreeSync monitor can be found for $400, and even far less than that at 1080p and 60Hz.
This is purely nVidia's fault. People did not care much about power efficiency. Then Maxwell arrived as a power-efficient architecture, and nearly everyone started to think that power efficiency means better fps or huge financial savings.
You actually do get better FPS with a more efficient architecture. Say the 1060 gets 60fps at 70°C while the 480 gets 60fps at 80°C: the 1060 can be overclocked until it reaches the 80°C threshold and produce a higher framerate than the original 60fps. Alternatively, you can maintain the same fps while producing less heat. This is far more important for my next mini-ITX/HTPC build. I am planning to buy a mini 1060 card so that I'm exhausting less heat and have greater airflow than if I were using either a Polaris chip or a full-size card. Lower TDP = less heat to push out of my case, simple as that. While the electricity savings are minimal and that benefit is overblown (unless you game 24/7), there is no doubt that a more efficient chip provides real benefits to the consumer.
No need to disable boost. Just heat it up until the reference 480 hits 1120MHz, then bench. Voila: up to +19%. That's the only way to get +19% over the reference 480: to have at least +19% higher GPU clocks. They are already throwing AMD under the bus by mentioning the PCIe power issue. As they should; why shouldn't they show their product in the best light? Instead of being so naive as to put the competing product in a great case so that it can maintain 1266MHz: "Here is our awesome new product, and it's almost 5% faster than reference." :banana:
https://www.computerbase.de/2016-06/radeon-rx-480-test/5/ That's a Fractal Design Define R5 with an RX 480 after 20 minutes; clock speeds are down to 1160MHz in the games they tested at 1440p. Now, I'm not going to get into whether you think that's a great case or not, but at the LANs I go to, where people would buy $240 cards, it would probably be mid-to-upper end for them. So could Asus engineer that result? Absolutely. Would they have to try hard to pick the case to do it? Looking at those numbers, I don't think it would be hard. What's telling is that for Asus's same card for the 1060 (Strix) they report about a 6-7% gain, so would they put themselves in a worse light there than with an RX 480 Strix? I doubt it; they would use the same methods, I would think. The point is that Asus is telling us their Strix model will gain 10%+ on the 480, versus 6-7% for their 1060 Strix.
Yes. The RX 480 has a 256-bit bus to the 192-bit bus of the 1060. That, plus the extra memory, pulls an extra ~30 watts, so the power usage of these two cards is much closer than it seems right now. Nvidia appears to be more efficient, and it likely is, but not by as much as it seems, because of the different power requirements of the memory buses. It would be nice to know what that difference actually is. The Vulkan Doom benchmark results tell me all I need to hear: AMD's new architecture is only going to get better over time as game devs use more and more of its abilities, so it's only going to get faster. Right now, 8GB on this RX 480 is way too much; it will never use it without the game being unplayable. I do think, however, that AMD knows something we don't, hence the 8GB. Like the PlayStation 4 has.
You actually do get better FPS with a more efficient architecture. Say the 1060 gets 60fps at 70°C while the 480 gets 60fps at 80°C: the 1060 can be overclocked until it reaches the 80°C threshold and produce a higher framerate than the original 60fps. Alternatively, you can maintain the same fps while producing less heat. This is far more important for my next mini-ITX/HTPC build. I am planning to buy a mini 1060 card so that I'm exhausting less heat and have greater airflow than if I were using either a Polaris chip or a full-size card. Lower TDP = less heat to push out of my case, simple as that. While the electricity savings are minimal and that benefit is overblown (unless you game 24/7), there is no doubt that a more efficient chip provides real benefits to the consumer.
I like the way you introduce your system configuration, but to your statement: I can make my Fury X eat a good 370W and it will still be kept under 50°C while being quieter than your blower-style FE 1060. And the fun part: a Fury X with some voltage tuning eats around 250W max.

But to give you some credit, you have it nearly right; you're just thinking in the wrong properties. Temperature is a function of generated heat versus cooling capacity. What matters for power efficiency is the maximum performance extracted at the standard 300W point that top desktop cards aim for. At the end of the day, many people will run their cards out of specification and not one of us will care. Even in the past, with nVidia having much more power-efficient GPUs, once things hit the top of that 300W limit there was not so big a difference.

And a side note: you do not overclock until you reach a certain temperature, you overclock until you reach the limits of stability. GPUs are not a Vishera clocked to 8GHz under LN2; they have other electrical limitations.
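To put the heat-versus-cooling-capacity point in concrete terms, here is a minimal sketch of that reasoning under a crude fixed-cooler model; the function and every number in it are illustrative assumptions, not measurements of any card mentioned above.

[code]
# A minimal sketch of the "temperature is generated heat vs. cooling capacity"
# point, using a crude fixed-cooler thermal-resistance model (Python).
# All numbers are illustrative assumptions, not measured values.

def thermal_headroom_watts(p_stock_w, t_stock_c, t_target_c, t_ambient_c=25.0):
    """Extra watts a fixed cooler could dissipate before hitting t_target_c.

    Model: temperature above ambient scales linearly with dissipated power,
    T = T_ambient + R_theta * P, for a constant thermal resistance R_theta.
    Real cards (fan curves, boost algorithms, VRM limits) are far messier.
    """
    r_theta = (t_stock_c - t_ambient_c) / p_stock_w   # degrees C per watt
    p_at_target = (t_target_c - t_ambient_c) / r_theta
    return p_at_target - p_stock_w

# Hypothetical example: a 120 W card at 70 C, pushed toward an 80 C target.
print(f"{thermal_headroom_watts(120, 70, 80):.0f} W of headroom")  # ~27 W
[/code]

As the post above notes, real GPUs usually hit stability and electrical limits well before a naive model like this says the cooler is out of headroom.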
The RX 480 has a 256-bit bus to the 192-bit bus of the 1060. That, plus the extra memory, pulls an extra ~30 watts, so the power usage of these two cards is much closer than it seems right now. Nvidia appears to be more efficient, and it likely is, but not by as much as it seems, because of the different power requirements of the memory buses. It would be nice to know what that difference actually is. The Vulkan Doom benchmark results tell me all I need to hear: AMD's new architecture is only going to get better over time as game devs use more and more of its abilities, so it's only going to get faster. Right now, 8GB on this RX 480 is way too much; it will never use it without the game being unplayable. I do think, however, that AMD knows something we don't, hence the 8GB. Like the PlayStation 4 has.
Yes, they know that several manufacturers are discontinuing 0.5GB-per-chip memories 🙂 And that 1GB-per-chip memories cost nearly the same. At this time, nVidia should not be selling cards with "only" 3GB of VRAM. Not because it is insufficient for the 1080p that the GTX 1060 is targeted at, but because there will be a bunch of lazy developers who will see 6/8GB of VRAM on a lot of GPUs and will not care about better texture management. It is not a problem for older 3GB GPUs, because they are a bit weaker and people are used to reducing some details a bit. But people will have degraded performance, or will have to sacrifice details, on a 3GB GTX 1060 just because each 0.5GB chip cost $2 less? (The good thing is that nVidia said they'll not introduce a 3GB version at launch time. And if they do so later, not many people will buy it, as everyone will consider 4GB+ the new standard.)
Question: can anyone say for sure what the power difference is between 8GB on a 256-bit bus and 6GB on a 192-bit bus? It would be interesting to know, if anyone has that information.
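Lacking a measured figure, here is a back-of-envelope sketch of the DRAM side of that question; the per-chip wattage is a commonly cited ballpark for GDDR5 under load, loudly an assumption rather than a datasheet number, and it ignores the wider memory controller and PHY on the GPU die.

[code]
# Back-of-envelope estimate, NOT measured data (Python). Assumes roughly
# 2-3 W per GDDR5 chip under load, eight chips for an 8GB/256-bit card and
# six chips for a 6GB/192-bit card.

WATTS_PER_CHIP_LOW, WATTS_PER_CHIP_HIGH = 2.0, 3.0

def dram_power_range(num_chips):
    """Rough load-power range (low, high) in watts for num_chips GDDR5 chips."""
    return (num_chips * WATTS_PER_CHIP_LOW, num_chips * WATTS_PER_CHIP_HIGH)

rx480_8gb = dram_power_range(8)    # (16.0, 24.0) W
gtx1060_6gb = dram_power_range(6)  # (12.0, 18.0) W

print(f"RX 480 DRAM:   {rx480_8gb[0]:.0f}-{rx480_8gb[1]:.0f} W")
print(f"GTX 1060 DRAM: {gtx1060_6gb[0]:.0f}-{gtx1060_6gb[1]:.0f} W")
# Chip-only delta; the 256-bit controller/PHY draws extra power this ignores.
print(f"Delta: {rx480_8gb[0] - gtx1060_6gb[1]:.0f} to {rx480_8gb[1] - gtx1060_6gb[0]:.0f} W")
[/code]

The ~30 W figure quoted later in the thread for the 480's whole memory subsystem would plausibly include that controller/PHY share on top of the chips themselves.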
https://www.computerbase.de/2016-06/radeon-rx-480-test/5/ That's a Fractal Design Define R5 with an RX 480 after 20 minutes; clock speeds are down to 1160MHz in the games they tested at 1440p. Now, I'm not going to get into whether you think that's a great case or not, but at the LANs I go to, where people would buy $240 cards, it would probably be mid-to-upper end for them. So could Asus engineer that result? Absolutely. Would they have to try hard to pick the case to do it? Looking at those numbers, I don't think it would be hard. What's telling is that for Asus's same card for the 1060 (Strix) they report about a 6-7% gain, so would they put themselves in a worse light there than with an RX 480 Strix? I doubt it; they would use the same methods, I would think. The point is that Asus is telling us their Strix model will gain 10%+ on the 480, versus 6-7% for their 1060 Strix.
And on the same page you can clearly see a difference of 4% between the throttling card and an RX 480 locked at 1266MHz via increased PT/TT: AMD RX 480 MAX (8GB) https://www.computerbase.de/2016-06/radeon-rx-480-test/5/ If you think a 1330MHz overclock can gain more than 1330/1266 = 5% over such a 1266MHz-locked card... I won't try to dissuade you any further. From the same reviewer: https://abload.de/img/screenshot2016-07-210dvu8d.png https://abload.de/img/screenshot2016-07-210mouv4.png
And on the same page you can clearly see a difference of 4% between the throttling card and an RX 480 locked at 1266MHz via increased PT/TT: AMD RX 480 MAX (8GB) https://www.computerbase.de/2016-06/radeon-rx-480-test/5/ If you think a 1330MHz overclock can gain more than 1330/1266 = 5% over such a 1266MHz-locked card... I won't try to dissuade you any further. From the same reviewer:
Jeez, reading is important! Asus was talking 1440p and 4K, and you pick 1080p to back up your claims?? 6% is the average gain from eliminating the throttling at 1440p, according to the same reviewer. Ashes and ROTR see 8% at 1440p; The Witcher 3 is 10% at 1440p, reference 480 versus the power-tweaked 480, just at the 1266MHz boost. So Asus would obviously pick the best case to show this in. That review did not test Fire Strike Extreme (which Asus benchmarked); this might have shown 11%, etc. Add in the 5% OC boost to 1330MHz and you are pretty close to the 15% that Asus quotes for 1440p gains, best case obviously. Just re-looked at it: they did test Fire Strike Ultra (4K) and the gain was 11%; the gain for Fire Strike (1080p) was 8%, so I'd put Fire Strike Extreme somewhere in the middle, say 9.5-10%, at a guess. So yeah, if Asus's claim is on the high side it might be by 1-2%, but it's pretty close from what I can see.
And on the same page you can clearly see a difference of 4% between the throttling card and an RX 480 locked at 1266MHz via increased PT/TT: AMD RX 480 MAX (8GB) https://www.computerbase.de/2016-06/radeon-rx-480-test/5/ If you think a 1330MHz overclock can gain more than 1330/1266 = 5% over such a 1266MHz-locked card... I won't try to dissuade you any further. From the same reviewer: https://abload.de/img/screenshot2016-07-210dvu8d.png https://abload.de/img/screenshot2016-07-210mouv4.png
When not throttling, the 480 and 1060 are within 1% of each other; not that bad.
When not throttling, the 480 and 1060 are within 1% of each other; not that bad.
Nah... to get there you should compare the "RX 480 Max" with the "GTX 1060 Founders Edition Max" (2nd pic). The 480 loses 4% at default PT/TT compared to Max; the 1060 loses 3%. So the GTX 1060 throttles only 1% less than the 480.
Jeez, reading is important! Asus was talking 1440p and 4K, and you pick 1080p to back up your claims?? 6% is the average gain from eliminating the throttling at 1440p, according to the same reviewer.
OK, 6%. 6% + 5% = 11%. 11% faster than the reference 480, and only for those who don't know how to, or refuse to, increase PT/TT on their reference 480. Otherwise: 1330/1266 = a 5% increase 🙂
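The whole back-and-forth reduces to clock-ratio arithmetic. Here is a minimal sketch of the upper-bound scaling both sides are computing, assuming performance tracks GPU clock linearly (in practice the real gain is usually a bit lower):

[code]
# Upper-bound gains implied by clock ratios alone (Python), assuming
# performance scales linearly with core clock.

def clock_gain(new_mhz, base_mhz):
    """Percent gain if performance tracked core clock exactly."""
    return (new_mhz / base_mhz - 1.0) * 100.0

# vs. a reference 480 throttled down to 1120 MHz (the "+19%" framing):
print(f"{clock_gain(1330, 1120):.0f}%")    # ~19%
# vs. a reference 480 holding its full 1266 MHz boost:
print(f"{clock_gain(1330, 1266):.0f}%")    # ~5%
# "6% from un-throttling, then 5% from the OC" compounds to:
print(f"{(1.06 * 1.05 - 1.0) * 100:.0f}%") # ~11%
[/code]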
Don't forget they might have also increased the memory speed. Or did they already announce memory speeds?
This is purely nVidia's fault. People did not care much about power efficiency. Then Maxwell arrived as a power-efficient architecture, and nearly everyone started to think that power efficiency means better fps or huge financial savings.
Efficiency matters at the top end. Desktop Maxwell stopped at 250W and surpassed its competition by quite a decent amount; think about what that means, just for a second. Nvidia's fault for what? For making an efficient architecture? Dammit Nvidia, how could you! Until Maxwell, the thermal envelopes had been relatively similar (although the 290X was power-hungry way beyond AMD's 250W envelope, but hey, let's ignore that small fact). You can see Maxwell's fruit right now in Pascal: the 1070 needs about the same power as the 480 yet it kills it performance-wise in every possible way and by a very wide gap. Now it's suddenly a problem that Nvidia decided to make an efficient architecture, when we can clearly see it was a smart move? Give me a break, man. Maxwell is likely one of the reasons Pascal was able to launch in such short order, seeing as there are already 3 Pascal cards on the market compared to 1 Polaris card. Nvidia didn't have to do squat; they just solved 1 or 2 of Maxwell's deficiencies, shrunk the whole thing, and clocked it like crazy.
Doubling transistors is not a 100% necessity. There are parts of the GPU which will not be doubled, like the PCIe IO, VCE, scheduler, internal bus, ... Plus, the RX 480 is not really that power-hungry on the GPU side; a solid part of its consumption is the blower and the GDDR5.
Are you being serious right now? A fan. You're blaming the 480's power consumption on a fan? Toppest of keks, this is the most ludicrous thing I've heard lately. I also remember an article stating that Raja Koduri said the VRAM on the 480 takes maybe 30W, IIRC? That sounds pretty OK to me. I can't find the article, but you're seriously over-exaggerating how much VRAM plus a fan takes out of the overall TDP. HBM doesn't cut as much power as you think it does, in case that's where you were going with this.
And on top of this improvement, if ASUS's statement is solid and their 5% OC plus proper cooling/power delivery brings a 15~19% improvement in performance over the stock card...
On this I can agree: if ASUS manages to do something like that, it will outright kill the 1060 once and for all. However, keep in mind that marketing is the name of the game. ASUS might simply be cherry-picking situations or outright spewing bull**** just to attract attention. Like AMD did with the 480, saying 2 of them are better than a 1080, when it turned out that's only in AOTS. I might not be a fan of AMD's vaporware tactics (not that I'm a fan of Nvidia's messed-up business practices either), but AMD really needs to sell a lot of cards this generation. I'm particularly scared for them because the 480 doesn't look like a spawn of Fiji to me, considering the rather unimpressive TDP.
Back in the day, I saw Fiji as more capable than Maxwell per transistor invested at the same clock. But AMD, as always, picks denser manufacturing, higher leakage, and a lower stable clock. The improvement from Maxwell to Pascal is much smaller than from Fiji to Polaris. And so once AMD/GloFo fixes that clock and leakage issue, we get a big improvement in the maximum achievable performance per single GPU.
What improvement are we talking about? Performance? Because if so, you're talking nonsense. The 1070 is almost twice as fast as the 970, the 1080 is almost twice as fast as the 980, and the 480 is almost twice as fast as the 380. I see a similar increase. Man, I have no idea what's up with these posts of yours.
Efficiency matters at the top end. Desktop Maxwell stopped at 250W and surpassed its competition by quite a decent amount; think about what that means, just for a second. Nvidia's fault for what? For making an efficient architecture? Dammit Nvidia, how could you! Until Maxwell, the thermal envelopes had been relatively similar (although the 290X was power-hungry way beyond AMD's 250W envelope, but hey, let's ignore that small fact). You can see Maxwell's fruit right now in Pascal: the 1070 needs about the same power as the 480 yet it kills it performance-wise in every possible way and by a very wide gap. Now it's suddenly a problem that Nvidia decided to make an efficient architecture, when we can clearly see it was a smart move? Give me a break, man. Maxwell is likely one of the reasons Pascal was able to launch in such short order, seeing as there are already 3 Pascal cards on the market compared to 1 Polaris card. Nvidia didn't have to do squat; they just solved 1 or 2 of Maxwell's deficiencies, shrunk the whole thing, and clocked it like crazy.
Maybe reading #30 will help a little.
Are you being serious right now? A fan. You're blaming the 480's power consumption on a fan? Toppest of keks, this is the most ludicrous thing I've heard lately. I also remember an article stating that Raja Koduri said the VRAM on the 480 takes maybe 30W, IIRC? That sounds pretty OK to me. I can't find the article, but you're seriously over-exaggerating how much VRAM plus a fan takes out of the overall TDP. HBM doesn't cut as much power as you think it does, in case that's where you were going with this.
[spoiler]http://i66.tinypic.com/wtzoyf.jpg[/spoiler]
What improvement are we talking about? Performance? Because if so, you're talking nonsense. The 1070 is almost twice as fast as the 970, the 1080 is almost twice as fast as the 980, and the 480 is almost twice as fast as the 380. I see a similar increase.
Same improvement as I always talk about when I mention "per transistor investment": it is improvement per building block of ROPs, TMUs, SPs, ... (see the sketch at the end of this post).
Man, I have no idea what's up with these posts of yours.
It is good to see you back, but you do not have to go to the "I do not understand" place, because I am not always in the mood to explain things I am sure people around here understand. And I am damn sure you would have had no problem extrapolating and understanding all those things you did "not" understand while you wrote the quoted post.
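As an aside, "per transistor investment" can be made concrete as a simple ratio. In the sketch below, the transistor counts are the commonly quoted die figures, while the relative performance numbers are purely illustrative placeholders (not benchmark data), so treat the output as a demonstration of the metric rather than a conclusion:

[code]
# "Performance per transistor" as a comparison metric (Python).
# Transistor counts are the commonly quoted die figures; the perf indices
# are ILLUSTRATIVE placeholders (GTX 980 = 1.0), not benchmark results.

dies = {
    # die (card):          (transistors_billions, relative_perf_placeholder)
    "GM204 (GTX 980)":     (5.2, 1.00),
    "GP104 (GTX 1080)":    (7.2, 1.75),
    "Fiji (Fury X)":       (8.9, 1.15),
    "Polaris 10 (RX 480)": (5.7, 0.85),
}

for name, (xtors_b, perf) in dies.items():
    # Higher = more performance extracted per billion transistors spent.
    print(f"{name:22s} {perf / xtors_b:.3f} perf units per billion transistors")
[/code]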
[spoiler]http://i66.tinypic.com/wtzoyf.jpg[/spoiler]
Uhm, for one, that's for a 7970. Two: it uses 20.4 watts at max duty cycle; under normal operation it won't use half that.
Uhm, for one, that's for a 7970. Two: it uses 20.4 watts at max duty cycle; under normal operation it won't use half that.
Yes, that's the HD 7970, and no, under load it runs at 70~80% duty cycle at stock clocks. It is a damn loud blower. And once an OC is applied, those funny pictures of rocket engines replaced by cards with blowers are quite close to reality; the sound it makes when it reaches 100% is awful. Most blowers are much stronger than regular fans. Even triple-fan designs spend less energy on airflow than a single blower, thanks to the efficiency of the heatsinks under a triple-fan solution.
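For a rough sense of why 70~80% duty cycle does not mean 70~80% of the rated wattage, here is a sketch using the fan affinity laws (power scaling roughly with the cube of fan speed); treating PWM duty cycle as a direct proxy for fan speed is itself an assumption.

[code]
# Fan affinity laws (Python): airflow ~ speed, pressure ~ speed^2,
# power ~ speed^3. Duty cycle is used as a rough proxy for fan speed.

MAX_FAN_WATTS = 20.4  # the rated figure quoted above for the 7970 blower

def fan_power(duty, p_max=MAX_FAN_WATTS):
    """Approximate fan power at a given duty cycle (0.0-1.0), cube law."""
    return p_max * duty ** 3

for duty in (0.5, 0.7, 0.8, 1.0):
    print(f"{duty:.0%} duty -> ~{fan_power(duty):.1f} W")
# 50% -> ~2.6 W, 70% -> ~7.0 W, 80% -> ~10.4 W, 100% -> 20.4 W
[/code]

On that model, a blower held at 70~80% duty draws roughly 7-10 W, which is consistent with both "well under half the rated figure" and "still a noticeable slice of a midrange card's power budget".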