AMD Polaris 10 Sample Spotted at 1.27 GHz Clock Frequency

Comments from our message forum:
So it does beat 850 MHz then, just.
Still a bit on the low side, but it certainly looks better and far more believable than the weird 800 MHz (or was it 850?) on that table from back then. Clock speed does matter, after all. Free speed, if the chip can handle it.
Quote: "So it does beat 850 MHz then, just."
Slightly.
Thinking in terms of MHz is not a very technical, or accurate, observation...;) It's MHz x IPC = Performance. AMD proved that convincingly long ago when the Athlon spanked the Pentium 4 architecture into an early and well-deserved grave. Raw MHz would only have any meaning for performance if the IPC of the two architectures were equal, and of course it is not. (The only way it could be is if both architectures were the same GPU.)
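To put rough numbers on that point, here is a minimal sketch in Python; the clock and IPC figures are invented purely for illustration and don't correspond to any real Polaris or Pascal part.

def relative_performance(clock_mhz, ipc):
    # Toy model: throughput scales with clock frequency x work done per clock (IPC).
    return clock_mhz * ipc

# Invented figures, illustration only -- not real GPU numbers.
chip_a = relative_performance(clock_mhz=1270, ipc=4)  # lower clock, higher IPC
chip_b = relative_performance(clock_mhz=1600, ipc=3)  # higher clock, lower IPC
print(chip_a, chip_b)  # 5080 4800 -> the lower-clocked chip still comes out ahead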
Quote: "Thinking in terms of MHz is not a very technical, or accurate, observation... It's MHz x IPC = Performance."
^^ This. One number does not 'rule them all'.
A 2560-shader Polaris 10 card at 1.3 GHz+ for $250-300 would kill the 1070.
Even at that clock, that many shaders should be damn close to a GTX 1070.
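For a rough sanity check on that claim: peak single-precision throughput is usually quoted as shader count x 2 FLOPs per clock (one fused multiply-add) x clock speed. Treating the 2560-shader, 1.3 GHz configuration as the rumour it is, a quick sketch does land it right next to a reference GTX 1070 (1920 CUDA cores at a 1683 MHz boost clock), although peak TFLOPS says nothing about real game performance.

def peak_fp32_tflops(shaders, clock_ghz):
    # Peak FP32 = shader count x 2 FLOPs per clock (one FMA) x clock, in TFLOPS.
    return shaders * 2 * clock_ghz / 1000.0

polaris_rumoured = peak_fp32_tflops(2560, 1.30)  # rumoured config discussed in this thread
gtx_1070_ref = peak_fp32_tflops(1920, 1.683)     # GTX 1070 reference boost clock
print(round(polaris_rumoured, 2), round(gtx_1070_ref, 2))  # ~6.66 vs ~6.46 TFLOPS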
Quote: "Thinking in terms of MHz is not a very technical, or accurate, observation... It's MHz x IPC = Performance."
And how do you increase the IPC? With more transistors, a more complicated structure, or by sacrificing something else to do one thing better. I'm not a semiconductor engineer, so I don't know which approach is cheaper, but in the end Nvidia is making huge profits and AMD is losing money. Clock speed can't be all wrong. Still, that's not really my point. I just want AMD to do both. Clearly they were in a position to gain performance from higher clocks as well, since they were so far behind Nvidia. Different architectures, yes, but they are still both GPUs running the same software, even if the drivers in between are different.
Quote: "And how do you increase the IPC? ... in the end Nvidia is making huge profits and AMD is losing money. Clock speed can't be all wrong."
AMD's money problem isn't so much in their GPU division as it is in their current CPU division; Nvidia only has to worry about GPU stuff for the most part. As for IPC, look at CPUs from Intel vs AMD: IPC on Intel chips is way higher than on AMD chips. It has a lot to do with cache design as well as memory controller speed/throughput, branch prediction, and pipeline length. Basically, clock speed has little to do with overall performance unless you are comparing the exact same architecture, or researching how fast one processor has to be clocked to match the performance of a different processor at some predetermined speed. In the past, the Intel Pentium 4 lagged horribly behind the Athlon XP, and even more behind the Athlon 64 line, in terms of IPC. Then Intel released the Core architecture and the tables turned. So until I see actual performance information, I take clock speed with a huge grain of salt.
I don't trust it, tbh. It seems fishy. Why would you clock so low to show it off, knowing people will see it? I reckon it's barely stable and they're tweaking the heck out of it to even get through the benchmarks. AMD's architecture is inherently power-hungry as well, despite the node shrink providing decent reductions in power usage. I think this is an overclocked-to-the-max sample that is running closer to 180 watts than 150 watts... to beat a GTX 980.
Quote: "I don't trust it, tbh. It seems fishy. Why would you clock so low to show it off, knowing people will see it?"
Same reason as with the GTX 1080: they reduced the IPC compared to Maxwell so they could increase clock speeds, much like the AMD FX processor. Sometimes the trade-off is good, sometimes not so much.
Quote: "AMD's architecture is inherently power-hungry as well... I think this is an overclocked-to-the-max sample that is running closer to 180 watts than 150 watts... to beat a GTX 980."
It's a completely new architecture; how can you make such judgements when you know nothing about it? Heck, look at the Fury Nano: its size and performance-per-watt spank the GTX 970 Mini. http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/70704-itx-battle-r9-nano-vs-gtx-970-mini-2.html
Quote: "Thinking in terms of MHz is not a very technical, or accurate, observation... It's MHz x IPC = Performance."
Well, I was led to believe that a die shrink = higher clocks and lower power usage. Of course we know that clock speed alone is meaningless, but history tells us a die shrink usually comes with benefits, and those benefits usually include higher clocks. Since we have seen a 16 nm part hitting 2 GHz, the logical conclusion would be to assume a 14 nm part would also reach at least 1.5 GHz, unless it's packed so full of components that its clocks are restricted.
Hmm, 1.27 GHz isn't that bad; it is an improvement over the R9 290X/390X speeds. But I can't see AMD being able to produce a card that is equal to a 980 Ti unless the Polaris technology can produce a 970-style overclocker.
Quote: "Heck, look at the Fury Nano: its size and performance-per-watt spank the GTX 970 Mini."
You're also not talking about the Nano's price, throttling, noise and higher power consumption...
Quote: "You're also not talking about the Nano's price, throttling, noise and higher power consumption..."
The 970 Mini throttles too, but regardless, 40 watts of extra power for up to double the frame rates is no contest. Yeah, it's a few dBA louder and somewhat higher priced, but the point I was getting at is the little extra power it needs over the 970 Mini for significantly more performance.
Quote: "The 970 Mini throttles too, but regardless, 40 watts of extra power for up to double the frame rates is no contest..."
For the price of a Nano (which is really a small Fury X) you'd have to compare it to a slightly boost-clocked GTX 980, tbh. The GTX 970 Mini is only comparable for form factor, not performance or price. http://www.guru3d.com/articles_pages/nvidia_geforce_gtx_1080_review,8.html
Quote: "It's probably a reference sample, so the final chip could be a little faster. It's also probably a base clock, not an overclock. Wasn't the GTX 1070 overclocked to the hilt? No word on the temperatures, stability, what type of cooling was used, or whether it was a 'golden' GTX 1070 sample."
Considering what they originally showed, and also the rumoured 850 MHz limit, I'm certain this sample is overclocked.
It is amazing what some of you are willing to assume on so little information. As if willing such will make it true. Patience...
Quote: "Wasn't the GTX 1070 overclocked to the hilt? ... If that Nvidia chip was the best possible scenario, it really is a downside for them, almost bordering on idiocy. Sure, it might sound impressive, but when people actually purchase it they are going to try to gain that same clock speed, which could potentially lead to many fried GPUs. As others have mentioned, though, at the end of the day it's the actual performance of the card that matters, not what frequency it can run at."
What 1070 overclock? I don't even think it's possible to fry a card without modding the vBIOS or physically altering it.
Quote: "A 2560-shader Polaris 10 card at 1.3 GHz+ for $250-300 would kill the 1070."
It doesn't even need that speed. Just give it 2560 cores and price it at $300. It would be $70 cheaper than a 1070, perform identically, and could be CrossFired for slightly better than GTX 1080 performance at $100 less. Boom, instant money.
So Polaris has 36 CUs and the 1080 has 20 SMs, right? Is that why the clocks can be lower?
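For context: a CU and an SM aren't directly comparable units (a Polaris CU holds 64 shaders, a Pascal SM holds 128 CUDA cores), but a rough peak-FP32 sketch using those commonly cited figures and the GTX 1080's 1733 MHz reference boost clock shows how unit count and clock trade off against each other, and that the 1080 still sits well ahead on paper at these clocks.

def peak_fp32_tflops(units, alus_per_unit, clock_ghz):
    # Peak FP32 = units (CUs or SMs) x ALUs per unit x 2 FLOPs per clock (FMA) x clock, in TFLOPS.
    return units * alus_per_unit * 2 * clock_ghz / 1000.0

polaris_10 = peak_fp32_tflops(36, 64, 1.27)   # 36 CUs x 64 shaders at the spotted 1.27 GHz
gtx_1080 = peak_fp32_tflops(20, 128, 1.733)   # 20 SMs x 128 CUDA cores at reference boost
print(round(polaris_10, 2), round(gtx_1080, 2))  # ~5.85 vs ~8.87 TFLOPS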