AMD Polaris 10 Sample Spotted at 1.27 GHz Clock Frequency

A Polaris 10 card with 2560 shaders at 1.3 GHz+ for $250-300 would kill the 1070.
I wouldn't be surprised if the 1070 beat it when it comes to energy efficiency.
I really don't understand this obsession with efficiency. If you own a server farm, sure. But individual home gamers? Who cares?
I wouldn't be surprised if the 1070 beat it when it comes to energy efficiency.
If that part existed, I wouldn't care if it used 150W or 190W.
I really don't understand this obsession with efficiency. If you own a server farm, sure. But individual home gamers? Who cares?
It's a thing now. I won't complain about lower power usage, but as long as it's not crazy high for a single card, I don't care much.
A Polaris 10 card with 2560 shaders at 1.3 GHz+ for $250-300 would kill the 1070.
What makes you think that? They are completely different architectures. A Fury X has 4096 cores and a 980 Ti has 2816, yet the 980 Ti is the faster card overall, even at stock clocks (which are relatively close). What makes you think AMD has now achieved core-for-core parity with Nvidia this generation? On top of that, the GTX 1070 has 1920 cores and Polaris 10 has 2304. Last gen, the Fury X had 45% more cores than the 980 Ti, the clocks were very similar (1075 MHz for the 980 Ti, 1050 MHz for the Fury X), and the cards were roughly equal in performance (the 980 Ti is faster, but it is very close). Now Polaris 10 has only 20% more cores than the GTX 1070, while the GTX 1070 has a 32.5% clock-speed advantage over Polaris 10. So Polaris 10 has less than half the core advantage the Fury X had over the 980 Ti and gets soundly beaten on clock frequency; that doesn't look very good for Polaris 10 vs. the GTX 1070. We will see eventually, but I certainly wouldn't get my hopes up for Polaris being able to compete with Pascal.
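For what it's worth, the percentages in that post do check out against the quoted figures. A minimal sketch in Python (the core counts and clocks are the ones quoted in the thread, with the Polaris 10 clock taken from the 1.27 GHz leak, not official specifications):

```python
# Back-of-the-envelope ratios from the spec numbers quoted in the post above.
def pct_advantage(a, b):
    """Percentage by which a exceeds b."""
    return (a / b - 1) * 100

# Last generation: Fury X vs GTX 980 Ti
print(f"Fury X core advantage:     {pct_advantage(4096, 2816):.1f}%")  # ~45%
print(f"980 Ti clock advantage:    {pct_advantage(1075, 1050):.1f}%")  # ~2%

# This generation: Polaris 10 (leaked) vs GTX 1070
print(f"Polaris 10 core advantage: {pct_advantage(2304, 1920):.1f}%")  # ~20%
print(f"GTX 1070 clock advantage:  {pct_advantage(1683, 1270):.1f}%")  # ~33%
```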
Never mind.
Same reason as with the GTX 1080: they reduced IPC compared to Maxwell so they could increase clock speeds, much like the AMD FX processor. Sometimes the trade-off is good, sometimes not so much.
OK... but Nvidia did not reduce IPC compared with Maxwell:

GTX 1070: boost 1,683 MHz, 6.45 TFLOPS, 1,920 CUDA cores
GTX 980: boost 1,216 MHz, 4.61 TFLOPS, 2,048 CUDA cores

If we set the GTX 1070 at 1,216 MHz we get about 4.67 TFLOPS. So Pascal is an improvement over Maxwell in IPC, because we get the same performance with fewer cores. But the IPC increase from Maxwell to Pascal is so small it's not even worth mentioning. Shrink Maxwell to 16 nm, clock it as high as Pascal, and you would probably not see much difference between them. Unlike Nvidia, AMD told us that Polaris has gained noticeable IPC improvements, so this time I expect AMD to be competitive, but only if they can clock their GPUs at around 1.4 GHz.
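The TFLOPS arithmetic above is easy to verify: for these GPUs, peak FP32 throughput is 2 FLOPs (one fused multiply-add) per CUDA core per clock. A minimal sketch using the core counts and clocks quoted in the thread:

```python
# Peak FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
def fp32_tflops(cores, clock_mhz):
    return 2 * cores * clock_mhz * 1e6 / 1e12

print(f"{fp32_tflops(1920, 1683):.2f}")  # GTX 1070 at 1683 MHz boost -> 6.46
print(f"{fp32_tflops(1920, 1216):.2f}")  # GTX 1070 scaled to 1216 MHz -> 4.67
print(f"{fp32_tflops(2048, 1126):.2f}")  # GTX 980 at 1126 MHz base -> 4.61
```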
What makes you think that? They are completely different architectures. A Fury X has 4096 cores and a 980 Ti has 2816, yet the 980 Ti is the faster card overall, even at stock clocks (which are relatively close). What makes you think AMD has now achieved core-for-core parity with Nvidia this generation? On top of that, the GTX 1070 has 1920 cores and Polaris 10 has 2304. Last gen, the Fury X had 45% more cores than the 980 Ti, the clocks were very similar (1075 MHz for the 980 Ti, 1050 MHz for the Fury X), and the cards were roughly equal in performance (the 980 Ti is faster, but it is very close). Now Polaris 10 has only 20% more cores than the GTX 1070, while the GTX 1070 has a 32.5% clock-speed advantage over Polaris 10. So Polaris 10 has less than half the core advantage the Fury X had over the 980 Ti and gets soundly beaten on clock frequency; that doesn't look very good for Polaris 10 vs. the GTX 1070. We will see eventually, but I certainly wouldn't get my hopes up for Polaris being able to compete with Pascal.
Very good point! It's not looking good for AMD when comparing Polaris 10 & GTX 1070.
What makes you think that? They are completely different architectures. A Fury X has 4096 cores and a 980 Ti has 2816, yet the 980 Ti is the faster card overall, even at stock clocks (which are relatively close). What makes you think AMD has now achieved core-for-core parity with Nvidia this generation? On top of that, the GTX 1070 has 1920 cores and Polaris 10 has 2304. Last gen, the Fury X had 45% more cores than the 980 Ti, the clocks were very similar (1075 MHz for the 980 Ti, 1050 MHz for the Fury X), and the cards were roughly equal in performance (the 980 Ti is faster, but it is very close). Now Polaris 10 has only 20% more cores than the GTX 1070, while the GTX 1070 has a 32.5% clock-speed advantage over Polaris 10. So Polaris 10 has less than half the core advantage the Fury X had over the 980 Ti and gets soundly beaten on clock frequency; that doesn't look very good for Polaris 10 vs. the GTX 1070. We will see eventually, but I certainly wouldn't get my hopes up for Polaris being able to compete with Pascal.
I don't think I have ever seen a 980 Ti clocked that low in a benchmark... the thing has a boost clock that goes well beyond the advertised 1075 MHz, pretty much always near 1200 on the core. And if you think about it, the Fury X does not have a 45% performance advantage over a 290X, or a 390X for that matter; it does not scale like that at all, because there are other things to consider in a GPU, like ROPs, TMUs, geometry and so forth. I would expect Polaris 10 to be about as fast as a 1070.
OK... but Nvidia did not reduce IPC compared with Maxwell:

GTX 1070: boost 1,683 MHz, 6.45 TFLOPS, 1,920 CUDA cores
GTX 980: boost 1,216 MHz, 4.61 TFLOPS, 2,048 CUDA cores

If we set the GTX 1070 at 1,216 MHz we get about 4.67 TFLOPS. So Pascal is an improvement over Maxwell in IPC, because we get the same performance with fewer cores. But the IPC increase from Maxwell to Pascal is so small it's not even worth mentioning. Shrink Maxwell to 16 nm, clock it as high as Pascal, and you would probably not see much difference between them. Unlike Nvidia, AMD told us that Polaris has gained noticeable IPC improvements, so this time I expect AMD to be competitive, but only if they can clock their GPUs at around 1.4 GHz.
Note that the 980's TFLOPS figure is calculated from the 1126 MHz base clock, not from the boost clock. So clock for clock the 980 will be faster, because of its CUDA core advantage.
Clock speeds mean nothing unless we have a sample GPU of that architecture to test.
Efficiency matters because in some parts of the world electricity is expensive. Many places have tiers of electricity usage: if you stray from the "average" range (which isn't hard), you get charged an increasing rate. For example, Tokyo averages around 2x the US's electricity cost per kWh (my monthly bill is ~$100 for a two-person apartment). At ~5 hours a day of usage, a 50 W difference works out to around a $20 annual difference, though since the rates increase as you use more electricity, it could be significantly higher. If you use your graphics card for 2-3 years, that difference adds up to something significant. What seems cheaper upfront can actually cost more the longer you use it, negating the short-term price advantage. Higher-power-consuming products are always a bad deal in the long run if priced similarly. Therefore pricing AND efficiency are both important in determining the cost/performance value of the item (obviously assuming performance and feature parity).
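As a rough sanity check of that estimate, here is a minimal sketch. The ~$0.24/kWh rate is an assumption (roughly double a ~$0.12/kWh US average, matching the "2x the US" claim above); tiered billing would only push the figure higher:

```python
# Annual cost of an extra 50 W at ~5 hours/day, flat-rate approximation.
watts_diff = 50            # extra draw of the less efficient card
hours_per_day = 5
rate_usd_per_kwh = 0.24    # assumed Tokyo-like rate, ~2x a ~$0.12 US average

kwh_per_year = watts_diff * hours_per_day * 365 / 1000  # ~91 kWh
print(f"~${kwh_per_year * rate_usd_per_kwh:.0f}/year")  # ~$22
```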
Considering what they originally showed, and also the rumoured 850 MHz limit, I'm certain this sample is overclocked.
There was never any reason to believe that ridiculous 850 MHz figure was real. If you start to believe every rumour in this business, you are finished. The GPU hasn't been officially released yet, not even on paper, so it's irrelevant to speak of overclocks: AMD can't "overclock" a GPU in development, because AMD is the one who decides the base clock of its own GPU.
^ Wow, you have it good then. Here in Canada, where I live, it costs us $90 CAD just to be hooked up to the grid; that's $90 we have to pay no matter how much electricity we use.
Even if the GTX 1070 has the same compute power as the GTX 980 Ti (6.5 vs 6.4 TFLOPS), it will be slower in games: it has much less memory bandwidth, fewer ROPs, and fewer TMUs. So I expect the GTX 1070 to land between the GTX 980 and the GTX 980 Ti, closer to the 980 Ti. Polaris 10 will land between those two Maxwell cards as well, but I suspect closer to the GTX 980 than to the 980 Ti.
If the SiSoftware score means anything, then Polaris 10 scored higher than the GTX 980 Ti in the SiSoftware bench :3eyes: http://ranker.sisoftware.net/show_run.php?q=c2ffcdf4d2b3d2efdce5d5e6d4e7c1b38ebe98fd98a595b3c0fdc5&l=en
Thanks for the link to compare. Pretty interesting results. It beats the 980 Ti decently overall while having very different areas of compute performance:

Polaris 10:
Score: 1674.64 Mpix/s
Single-float GP Compute: 5312.60 Mpix/s
Double-float GP Compute: 527.88 Mpix/s
Quad-float GP Compute: 29.17 Mpix/s

GTX 980 Ti:
Score: 1256.49 Mpix/s
Single-float GP Compute: 9005.12 Mpix/s
Double-float GP Compute: 175.32 Mpix/s
Quad-float GP Compute: 7.92 Mpix/s

Really looking forward to seeing how this great-looking card performs in games. And with that clock speed, maybe this is going to be the "Overclocker's Dream" AMD were talking about...? Finally, lol.
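To put numbers on how lopsided those results are, here is a quick sketch computing the ratios between the two cards (the figures are the ones quoted above, taken from that SiSoftware run):

```python
# Ratios between the two SiSoftware results quoted above (all in Mpix/s).
polaris10 = {"Score": 1674.64, "Single-float": 5312.60,
             "Double-float": 527.88, "Quad-float": 29.17}
gtx980ti  = {"Score": 1256.49, "Single-float": 9005.12,
             "Double-float": 175.32, "Quad-float": 7.92}

for key, value in polaris10.items():
    print(f"{key:>12}: Polaris 10 at {value / gtx980ti[key]:.1f}x the 980 Ti")
# Score ~1.3x, Single-float ~0.6x, Double-float ~3.0x, Quad-float ~3.7x
```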
Efficiency matters because in some parts of the world electricity is expensive. Many places have tiers of electricity usage: if you stray from the "average" range (which isn't hard), you get charged an increasing rate. For example, Tokyo averages around 2x the US's electricity cost per kWh (my monthly bill is ~$100 for a two-person apartment). At ~5 hours a day of usage, a 50 W difference works out to around a $20 annual difference, though since the rates increase as you use more electricity, it could be significantly higher. If you use your graphics card for 2-3 years, that difference adds up to something significant. What seems cheaper upfront can actually cost more the longer you use it, negating the short-term price advantage. Higher-power-consuming products are always a bad deal in the long run if priced similarly. Therefore pricing AND efficiency are both important in determining the cost/performance value of the item (obviously assuming performance and feature parity).
Dude, you're talking about $700-1000 video cards. Energy efficiency couldn't matter less in that regard: $50 a year on an electric bill... Like someone posted earlier: energy efficiency is there for a reason, I understand that. But it should mean absolutely nothing for home gamers. It cannot be a sales argument, or a point of interest for that matter, unless the options you're choosing between all perform identically. Energy efficiency is way overrated and blown completely out of proportion. We're talking about a few dozen dollars a year. Say it's even $100 a year for a complete system that's used very heavily: that's $100 against an average net income of $24K a year, or roughly 0.4% of your salary... in exchange for all those countless hours of fun? Ergo: no argument.
Thanks for the link to compare.
Cool system specs :P Xeon 1366 FTW ! :banana:
Dude, you're talking about $700-1000 video cards. Energy efficiency couldn't matter less in that regard: $50 a year on an electric bill... Like someone posted earlier: energy efficiency is there for a reason, I understand that. But it should mean absolutely nothing for home gamers. It cannot be a sales argument, or a point of interest for that matter, unless the options you're choosing between all perform identically. Energy efficiency is way overrated and blown completely out of proportion. We're talking about a few dozen dollars a year. Say it's even $100 a year for a complete system that's used very heavily: that's $100 against an average net income of $24K a year, or roughly 0.4% of your salary... in exchange for all those countless hours of fun? Ergo: no argument.
In my experience, energy efficiency usually only comes into play when nearly everything else is tied. When the 980 came out, for example, it offered similar performance to the 290X in most things and had a similar price ($550 vs $500), yet nearly half the TDP of the 290X, so TDP became a pretty large factor. It's also a huge selling point for OEMs, which is where the majority of midrange dGPU sales come from anyway.
Thanks for the link to compare. Pretty interesting results. It beats the 980 Ti decently overall while having very different areas of compute performance:

Polaris 10:
Score: 1674.64 Mpix/s
Single-float GP Compute: 5312.60 Mpix/s
Double-float GP Compute: 527.88 Mpix/s
Quad-float GP Compute: 29.17 Mpix/s

GTX 980 Ti:
Score: 1256.49 Mpix/s
Single-float GP Compute: 9005.12 Mpix/s
Double-float GP Compute: 175.32 Mpix/s
Quad-float GP Compute: 7.92 Mpix/s

Really looking forward to seeing how this great-looking card performs in games. And with that clock speed, maybe this is going to be the "Overclocker's Dream" AMD were talking about...? Finally, lol.
Maxwell essentially has no double precision. Maxwell vs 7xxx/2xx looked similar in these results. Nvidia used Kepler to compete on the workstation side of DP.