New AMD roadmap gives more insight into Polaris 10 and 11

DP 1.3 => 3840x2160 @ 120Hz = 1920x1080 @ 480Hz.
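A quick back-of-the-envelope check (a minimal Python sketch, ignoring blanking intervals and DP link overhead) shows why those two modes carry the same raw pixel throughput:

```python
# Minimal sketch: raw pixel throughput comparison, ignoring blanking/overhead.
def pixel_rate(width, height, refresh_hz):
    """Raw pixels per second for a given mode."""
    return width * height * refresh_hz

uhd_120 = pixel_rate(3840, 2160, 120)   # 995,328,000 pixels/s
fhd_480 = pixel_rate(1920, 1080, 480)   # 995,328,000 pixels/s
print(uhd_120 == fhd_480)               # True: identical raw throughput
```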
Does look pretty powerful. But just reading Async Compute on that slide makes me cringe...
So the new enthusiast cards (including a jump from 28 nm to 14 nm) can do 8.2 TFLOPS while the current cards (Fury) can do 8.6 TFLOPS?? What did I miss?
I am still confused about the core clocks. I mean, the green team manages 1.5 GHz now (yes, in cherry-picked chips). Why are we still around the 1 GHz mark on the red team?
Well, reaching 1500 MHz with Maxwell overclocks wasn't that difficult for most; no actual cherry-picking needed. But my simple guess for why they don't really surpass 1 GHz: they might not need the performance. 🙂
So the new enthusiast cards (including a jump from 28 nm to 14 nm) can do 8.2 TFLOPS while the current cards (Fury) can do 8.6 TFLOPS?? What did I miss?
I was under the impression that "Polaris 11" was a bit of a low-end GPU and "Polaris 10" mid-range; "Vega 10", coming in 2017, should be the enthusiast model. The 14 nm die shrink might still give some nice performance gains for Polaris 10/11 GPUs even if they're not quite as "equipped" as the current Fury models. (No idea how it looks for Nvidia: X70 is apparently the initial low/mid-end model with X80 a bit above that, and then possibly an X80 Ti later as a high/enthusiast-end model, but I have no idea beyond what some rumors hinted at.)
Hmmh I guess I will wait for Vega then for sure 🙂
I was under the impression that "Polaris 11" was a bit of a low-end GPU and "Polaris 10" mid-range; "Vega 10", coming in 2017, should be the enthusiast model. The 14 nm die shrink might still give some nice performance gains for Polaris 10/11 GPUs even if they're not quite as "equipped" as the current Fury models. (No idea how it looks for Nvidia: X70 is apparently the initial low/mid-end model with X80 a bit above that, and then possibly an X80 Ti later as a high/enthusiast-end model, but I have no idea beyond what some rumors hinted at.)
I think this tells the entire story of this particular step from 28 to 14/16 nm: https://devblogs.nvidia.com/parallelforall/inside-pascal/

Nvidia built GP100 (610 mm²) as a bigger chip than GM200 (601 mm²). On 28 nm, GM200 reached the 250 W limit with 8 billion transistors ticking at 948/1114 MHz base/boost while equipped with power-hungry GDDR5. On 16 nm, GP100 reached the 300 W limit with 15.3 billion transistors ticking at 1328/1480 MHz base/boost while equipped with power-efficient HBM2. In other words, TDP was the limit, not achievable clock. The only other reason for a clock lower than what the TDP allows is that the GPU makes calculation errors at higher clocks (unacceptable for business-grade hardware), and that threshold may be much higher for consumer cards. I guess many people here OC their cards by the rule of "I see an artifact, so -20 MHz and keep it there", but many of those high OCs end up failing in compute tests like GPUPI.

Taking power efficiency into account: 1.9125× more transistors, plus a 40% higher base clock / 33% higher boost clock, at an increased GPU power draw (around 210 W vs. 270 W). Even under those conditions, 16 nm delivers a 2.03× higher (transistors × clock) to power-consumption ratio. So take a GPU like the GTX 980 has and put it on 16 nm, and it will eat around half of its original power. How high an OC can we expect before GPU power consumption matches the last generation? 40%? 60%?

The same goes for those announced R9 480(X) chips: unless AMD breaks their design in some way, they'll clock them much higher than the guesstimated 1 GHz that floats around the net. And if they were not able to clock them higher, power consumption would be ridiculously low. Basically, an R9 480X with 2560 SPs, 160 TMUs and 64 ROPs at the proclaimed 800 MHz base would fall into the sub-100 W notebook category (more like under 80 W), because the Nano is a 175 W TDP card which ticks at around 950~1000 MHz depending on airflow and has more transistors than the R9 480X.
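For what it's worth, here is a rough sketch of the arithmetic behind that ~2× figure, using the GM200/GP100 numbers quoted above; the 210 W and 270 W GPU-only power figures are the post's own estimates, not official specs:

```python
# Rough sketch of the (transistors * clock) per watt comparison above.
# Transistor counts and base clocks are the published GM200/GP100 figures;
# the 210 W / 270 W "GPU-only" power numbers are the post's own estimates.
gm200 = {"transistors_bn": 8.0,  "base_mhz": 948,  "gpu_power_w": 210}
gp100 = {"transistors_bn": 15.3, "base_mhz": 1328, "gpu_power_w": 270}

def throughput_proxy(chip):
    # Crude proxy: transistor count times clock stands in for raw throughput.
    return chip["transistors_bn"] * chip["base_mhz"]

ratio = (throughput_proxy(gp100) / gp100["gpu_power_w"]) / \
        (throughput_proxy(gm200) / gm200["gpu_power_w"])
print(f"16 nm vs 28 nm, (transistors x clock) per watt: {ratio:.2f}x")  # ~2.1x
```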
I am still confused about the core clocks. I mean, the green team manages 1.5 GHz now (yes, in cherry-picked chips). Why are we still around the 1 GHz mark on the red team?
Yeah. There's no way that ~800 MHz is for real. If it is, it's a failure of chip design.
I am still confused about the core clocks. I mean, the green team manages 1.5 GHz now (yes, in cherry-picked chips). Why are we still around the 1 GHz mark on the red team?
CMIIW, but from what I know, the logic when designing a chip is not how to make it run at a higher speed (more GHz does not always mean better) but how to make it run more efficiently. If I can use a CPU as an example: Pentium 4 @ 4 GHz vs. Skylake @ 2 GHz... Skylake should win because it has more cores and more instructions per cycle. Anyway, what matters is real performance. They can write specs like double or even triple the current lineup, but if performance only increases by about 10%, then it means nothing.
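A toy model of that point (the numbers below are made up purely for illustration, not real Pentium 4 or Skylake measurements):

```python
# Toy model: rough throughput ~ cores * instructions-per-cycle * clock.
# All numbers are invented for illustration only.
def relative_perf(cores, ipc, clock_ghz):
    return cores * ipc * clock_ghz

pentium4_like = relative_perf(cores=1, ipc=1.0, clock_ghz=4.0)   # 4.0
skylake_like  = relative_perf(cores=4, ipc=3.0, clock_ghz=2.0)   # 24.0
print(skylake_like / pentium4_like)  # 6.0x faster despite half the clock
```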
CMIIW, but from what I know, the logic when designing a chip is not how to make it run at a higher speed (more GHz does not always mean better) but how to make it run more efficiently. If I can use a CPU as an example: Pentium 4 @ 4 GHz vs. Skylake @ 2 GHz... Skylake should win because it has more cores and more instructions per cycle. Anyway, what matters is real performance. They can write specs like double or even triple the current lineup, but if performance only increases by about 10%, then it means nothing.
And that's the thing. While GCN is more efficient than Maxwell if we take performance / (transistors × clock), Maxwell clocks higher, ultimately allowing a smaller (cheaper to make) chip to perform competitively with, or even better than, the AMD competitor. As people are used to 150~250 W cards, Nvidia will target this range, and if the power efficiency from 16 nm allows them to clock above 1.5 GHz, then they can deliver adequate performance through a higher clock on a smaller (cheaper to make) chip. But if AMD can't clock GCN that high even on 14 nm, then you get power-efficient chips which will require more transistors to compete with Pascal's higher clock (GCN will be more expensive to make). That means either much higher profits for Nvidia, as AMD can't undercut them, or very low profits for AMD. Either way, as you get more than double the power efficiency and the PCIe standard says 300 W tops, we should expect cards to be clocked as high as the TDP/chip design allows.
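As a sketch of the metric being argued about here (all values are placeholders chosen only to show how it behaves):

```python
# Sketch of the debated metric: performance / (transistors * clock).
# The two chips below are hypothetical and have equal real-world performance.
def arch_efficiency(perf, transistors_bn, clock_ghz):
    return perf / (transistors_bn * clock_ghz)

wide_low_clock    = arch_efficiency(perf=100, transistors_bn=8.0, clock_ghz=1.0)  # 12.5
narrow_high_clock = arch_efficiency(perf=100, transistors_bn=6.0, clock_ghz=1.4)  # ~11.9
# The wider, lower-clocked chip scores higher on this metric even though the
# smaller, higher-clocked die may be cheaper to manufacture -- which is exactly
# the cost/clock trade-off discussed above.
```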
I was also expecting much higher clock speeds this time out. Perhaps AMD wants to make good on that "Overclocker's Dream" thing they mentioned a while back lol
I was also expecting much higher clock speeds this time out. Perhaps AMD wants to make good on that "Overclocker's Dream" thing they mentioned a while back lol
Both companies should just release a card at 1 MHz. "THE BEST OVERCLOCKER EVER!!!11!1" Fox2232, do you think it's an architecture design choice by AMD, or a limit of the 14 nm node?
That core clock has to be low so it can fit in SFFs like the X51, the AMD Quantum thingy, etc... Kinda annoyed that Pascal GP104 will surpass Polaris 10, but whatever.
That core clock has to be low so it can fit in SFFs like the X51, the AMD Quantum thingy, etc... Kinda annoyed that Pascal GP104 will surpass Polaris 10, but whatever.
Not certain, but yeah... it probably will; hopefully not by much. AMD can still win if they have a truly competitive product. Small dies and perf/mm² have always been their bread and butter. One thing they clearly did well this gen is Polaris 11 and the focus on mobile graphics. JChrist... finally! The GCN/7970 (paper) launch, with no mobile in sight, was a disaster. But Nvidia now late with mobile? What's going on? Waiting on... Intel, or what..?
And that's the thing. While GCN is more efficient than Maxwell if we take performance / (transistors × clock), Maxwell clocks higher, ultimately allowing a smaller (cheaper to make) chip to perform competitively with, or even better than, the AMD competitor.
And that's why performance / (transistors × clock) is... a nonsense metric 🤓 Look, performance is a tangible metric, and so is transistor count. Clock? No one gives a **** about clock, because its significance is already accounted for in performance. What the performance / (transistors × clock) metric does is punish high-clocking architectures/designs. Makes no sense.
DP 1.3 => 3840x2160 @ 120Hz = 1920x1080 @ 480Hz.
I am so ready to buy if I can find a good 4K TV to go with it! Or should I wait for Vega 10?! Sigh...
I am still confused about the core clocks. I mean, the green team manages 1.5 GHz now (yes, in cherry-picked chips). Why are we still around the 1 GHz mark on the red team?
It's interesting to see that even today people confuse MHz with performance...;)
It's interesting to see that even today people confuse MHz with performance...;)
The infamous "more is better" assumption. 😉
Both companies should just release a card at 1 MHz. "THE BEST OVERCLOCKER EVER!!!11!1" Fox2232, do you think it's an architecture design choice by AMD, or a limit of the 14 nm node?
At 28 nm it was most certainly an architectural limitation; we know both AMD and NV had their chips produced on TSMC 28 nm, so transistor performance would have been identical.
I am still confused about the core clocks. I mean, the green team manages 1.5 GHz now (yes, in cherry-picked chips). Why are we still around the 1 GHz mark on the red team?
Considering that the MHz don't matter, only the architecture does, who cares? I mean, your question is as valid as asking "back in the day": "If a Pentium 4 can reach 3.73 GHz, why can't an Athlon 64?" As though the Pentium 4 running at 3.73 GHz stock somehow made it better than the Athlon 64. The point I'm trying to make is that no "MHz" is created equal; the "MHz" ultimately means nothing other than the speed at which the architecture is running. What matters is the performance, not the MHz.
Yeah. There's no way that ~800 MHz is for real. If it is, it's a failure of chip design.
^ The above reply is to you too, since you seem to think that 800 MHz somehow means bad performance... Not saying it WILL be good performance, but unless you know exactly how the new (or updated) architecture performs, you can't make that call.
It's interesting to see that even today people confuse MHz with performance...;)
Right? I don't understand how people keep thinking this is how it works...