AMD Polaris 11 shows in CompuBench - has 1024 Shader processors

data/avatar/default/avatar16.webp
Interesting! So close to the unveiling of the next-generation cards! 🙂
https://forums.guru3d.com/data/avatars/m/248/248994.jpg
Aren't those clock speeds too low? Nvidia has long been able to beat or compete with AMD's GPUs despite using fewer transistors (that is, cheaper chips) by having higher clocks. It seems strange to me that AMD wouldn't use this opportunity to get some performance for free by doing the same. Unless I remember completely wrong, I seem to recall reading that shifting to 14/16 nm would allow increasing the clock speed.
https://forums.guru3d.com/data/avatars/m/164/164033.jpg
Aren't those clock speeds too low? Nvidia has long been able to beat or compete with AMD's GPUs despite using fewer transistors (that is, cheaper chips) by having higher clocks. It seems strange to me that AMD wouldn't use this opportunity to get some performance for free by doing the same. Unless I remember completely wrong, I seem to recall reading that shifting to 14/16 nm would allow increasing the clock speed.
Well, they might be going for lower clocks. Who knows how high these might overclock if it's true, though.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
I wonder if 16GB for a single GPU is true (I know they can do it, but do they really need it?)
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
It has been benchmarked with a rather old driver... It states CL_Driver_Version: 1956.4 (which at least matches the time of the test); we are on 2004.6 now. Then it has 16 compute units and only one tenth of Fiji's performance (64 CUs). It basically has the parameters of an HD 7850, yet it is still 3 times slower? Why would someone test it with an old driver? Likely it is a modded driver/vBIOS on an HD 7850, downclocked. If I did the same thing with mine, I could call it a Vega Engineering Sample: set the default vBIOS max clock to 600MHz, but OC it to the standard 1050MHz or above, and the world would believe that a 600MHz Vega performs as well as a 1050MHz Fiji. The internet would flip over it at least twice.
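A minimal sketch of that spoofing idea, purely illustrative and using only the clocks floated above (nothing here is a confirmed spec):

```python
# Sketch of the vBIOS-spoofing scenario described above: report a low clock
# while actually running the stock clock, so "per-MHz" performance looks inflated.
# 600 MHz and 1050 MHz are just the figures mentioned in this post, not leaked specs.

reported_clock_mhz = 600    # what the faked vBIOS/driver would claim
actual_clock_mhz = 1050     # what the card would really run during the benchmark

# The benchmark score reflects the actual clock, but anyone reading the leak
# divides performance by the reported clock:
apparent_gain = actual_clock_mhz / reported_clock_mhz
print(f"Per-clock performance would look ~{apparent_gain:.2f}x better than it really is")
```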
https://forums.guru3d.com/data/avatars/m/198/198862.jpg
I wonder if 16GB for a single GPU is true (I know they can do it, but do they really need it?)
I thought 8GB HBM2 would be enough. It's up to 16GB, so maybe we'll see 8-16GB versions.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
I thought 8GB HBM2 would be enough. It's up to 16GB, so maybe we'll see 8-16GB versions.
Yes I was wondering the same. I know they can, but tbh it feels like a bit of overkill. Wouldn't expect that from AMD, but maybe they have too many of those HBM2 modules? 😀
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
I do not think Polaris 10/11 will have HBM. No reason for higher complexity if you can get by with a 256-bit bus. But I hope AMD really brings the low-end parts first this time around. They sell more, and it will allow AMD to get a better handle on 14/16nm. Plus, each passing month means TSMC/GloFo improve those manufacturing processes a bit. On the other hand, releasing the top-dog cards first means that developers will not know the performance of the low end, and many more games will run badly.
https://forums.guru3d.com/data/avatars/m/227/227853.jpg
Yes I was wondering the same. I know they can, but tbh it feels like a bit of overkill. Wouldn't expect that from AMD, but maybe they have too many of those HBM2 modules? 😀
If CFX/SLI with stackable VRAM under DX12/Vulkan actually scales well, I honestly don't really see them doing this, seeing as 4GB seems just about enough at the moment and 8GB sounds future-proof. Although 8-16GB VRAM as standard would encourage devs to use amazing textures. That's pretty much the main reason I've been so pissed off with what's been going on with game graphics until recently. Think back to 2012: I couldn't see a single bloody reason to have all those shiny and sh!tty shaders instead of better texture quality. Don't even get me started on those games with a ridiculous amount of post-processing effects aimed at hiding the crappy texture quality. It would probably be easier to just make better-quality textures. The best examples would be the high-res texture mods for Skyrim or Mass Effect; those things look absolutely beautiful. Sometimes I think the GPU industry is run by morons who don't actually know what they're doing. My opinion as a hobbyist in 3D modeling and rendering: texture quality and global illumination should be the main focus of improving graphics. We are finally nailing texture quality. Global illumination and other ray-tracing effects must follow.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
If CFX/SLI with stackable VRAM under DX12/Vulkan actually scales well, I honestly don't really see them doing this, seeing as 4GB seems just about enough at the moment and 8GB sounds future-proof. Although 8-16GB VRAM as standard would encourage devs to use amazing textures. That's pretty much the main reason I've been so pissed off with what's been going on with game graphics until recently. Think back to 2012: I couldn't see a single bloody reason to have all those shiny and sh!tty shaders instead of better texture quality. Don't even get me started on those games with a ridiculous amount of post-processing effects aimed at hiding the crappy texture quality. It would probably be easier to just make better-quality textures. The best examples would be the high-res texture mods for Skyrim or Mass Effect; those things look absolutely beautiful. Sometimes I think the GPU industry is run by morons who don't actually know what they're doing. My opinion as a hobbyist in 3D modeling and rendering: texture quality and global illumination should be the main focus of improving graphics. We are finally nailing texture quality. Global illumination and other ray-tracing effects must follow.
I have given up hope on anything CFX / SLI related improving... Then again, I don't see why 2016's top release from AMD would need 16GB of HBM2 VRAM. It simply wouldn't benefit from it, I think, not even at 4K. Also, as HBM2 supply won't be that abundant (at least I think it won't be), it wouldn't make much sense to put more than enough on a single card; it also makes it more costly, etc. Yeah, Skyrim was the first time I seriously saw what modded textures can improve over stock. I'm not even sure every game today is shipped with textures that fit 4K resolutions...
https://forums.guru3d.com/data/avatars/m/165/165326.jpg
I have given up hope on anything CFX / SLI related improving...
Same here, I promised myself never ever again to go SLI or CFX, as they are going the way the dodo went... I'm still waiting for a single VGA to handle 4K at 60fps so I can upgrade my display and VGA at once, but I don't see it happening anytime soon, sadly 😏. Nevertheless, let's see what these new cards bring to the table.
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Same here, I promised myself never ever again to go SLI or CFX, as they are going the way the dodo went... I'm still waiting for a single VGA to handle 4K at 60fps so I can upgrade my display and VGA at once, but I don't see it happening anytime soon, sadly 😏. Nevertheless, let's see what these new cards bring to the table.
Same here, for both of those things. Even if multi-GPU ends up improving, I still probably won't go for it until there is decent Linux support too. I also don't intend to upgrade my current GPU or display until I can play something at 4k smoothly.
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
I somehow can't believe that any of those specs are true. Either the shader processors are too few, or the clock speeds too low, or both. If they are true, AMD is in deep trouble, but they seem very calmly confident about the whole thing, which leads me to believe that all the rumored specs are probably wrong.
https://forums.guru3d.com/data/avatars/m/227/227853.jpg
I have given up hope on anything CFX / SLI related improving...
Totally agree. And to think I was so close to pulling the trigger on another 970. Boy, would I have been disappointed.
I somehow can't believe that any of those specs are true. Either the shader processors are too few, or the clock speeds too low, or both. If they are true, AMD is in deep trouble, but they seem very calmly confident about the whole thing, which leads me to believe that all the rumored specs are probably wrong.
That was exactly my first reaction when I saw the specs. 1024 stream processors and a 128-bit memory bus is very low. It sounds like an entry-level card, not a midrange one.
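For a sense of why a 128-bit bus reads as entry level, here is a rough bandwidth estimate; the 7 Gbps GDDR5 speed is only an illustrative assumption, not a figure from the leak:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (Gbps).
# 7 Gbps is just a typical GDDR5 speed chosen for illustration, not a leaked spec.
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gbs(128, 7))  # 112.0 GB/s - entry-level territory
print(bandwidth_gbs(256, 7))  # 224.0 GB/s - what a typical 256-bit midrange card gets
```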
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
I somehow can't believe that any of those specs are true. Either the shader processors are too few, or the clock speeds too low, or both. If they are true, AMD is in deep trouble, but they seem very calmly confident about the whole thing, which leads me to believe that all the rumored specs are probably wrong.
The clocks are probably wrong, which makes the TFLOP output wrong. In order to get Hitman @ 60fps @ QHD they need at least 390X levels of performance, which is about 6 TFLOPS. We know that GP100 is clocked at ~1300MHz, so I would imagine that AMD can hit around that as well. At 1300MHz, the Polaris 10 listed there would hit 6 TFLOPS. I think it's the clocks, because if it had many more shaders the chip would be too big for Vega to even reasonably exist. Plus it just seemed low to me when I first saw the leak.
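Working that same back-of-the-envelope math with the usual peak-FP32 approximation (shaders × 2 ops per clock × clock); the only inputs are the figures already mentioned in this thread:

```python
# Peak FP32 throughput approximation: shaders * 2 ops/clock * clock (GHz) -> TFLOPS.
def peak_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

# Working backwards from the ~6 TFLOPS @ ~1300 MHz figures above:
implied_shaders = 6.0 * 1000.0 / (2 * 1.3)
print(f"~{implied_shaders:.0f} shaders implied")   # ~2308

# For comparison, the 1024-shader Polaris 11 from the leak at the same clock:
print(f"~{peak_tflops(1024, 1.3):.1f} TFLOPS")     # ~2.7 TFLOPS
```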
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
I somehow can't believe that any of those specs are true. Either the shader processors are too few, or the clock speeds too low, or both. If they are true, AMD is in deep trouble, but they seem very calmly confident about the whole thing, which leads me to believe that all the rumored specs are probably wrong.
Are we living in the late 90s again? You should know very well that frequency has very little to do with overall performance, especially in GPUs. Think of it like this:
* There are more GFLOPS compared to Fiji (suggesting there is an improvement).
* According to other sources, Vega will have nearly double the transistor count of Fiji.
* We know nothing about the pipelines.
* There's a die shrink.
* Better memory will be used.
* Etc.
I am a little suspicious of the shader processor count, but maybe AMD found a way to improve performance without needing as many.
https://forums.guru3d.com/data/avatars/m/259/259654.jpg
Are we living in the late 90s again? You should know very well that frequency has very little to do with overall performance, especially in GPUs. Think of it like this:
* There are more GFLOPS compared to Fiji (suggesting there is an improvement).
* According to other sources, Vega will have nearly double the transistor count of Fiji.
* We know nothing about the pipelines.
* There's a die shrink.
* Better memory will be used.
* Etc.
I am a little suspicious of the shader processor count, but maybe AMD found a way to improve performance without needing as many.
Of course frequency matters. You can have the most efficient machine ever, and if the RPM is low enough everything else will surpass it. Unless GCN has changed radically, these "shaders" are comparable to Fiji's; having 40% fewer of them and running them at a slower speed than Fiji will result in lower overall performance. GFLOPS are not the only thing that matters: what about graphics operations? How well will those 64 ROPs work at 1GHz compared to 2GHz? The specs also show GDDR5. The major benefits of a die shrink come from a combination of cramming in more hardware and clocking it higher, and with these specs neither of those two things happens.
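A rough way to see the ROP point: peak pixel fill rate is commonly approximated as ROPs × clock, so at the hypothetical clocks in the question above (not leaked figures) the same 64 ROPs deliver very different throughput:

```python
# Peak pixel fill rate approximation: ROPs * clock (GHz) -> gigapixels per second.
# The 1 GHz and 2 GHz clocks are only the hypothetical values from the question above.
def fill_rate_gpix(rops, clock_ghz):
    return rops * clock_ghz

print(fill_rate_gpix(64, 1.0))  # 64.0 GPix/s
print(fill_rate_gpix(64, 2.0))  # 128.0 GPix/s - same ROP count, double the throughput
```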
https://forums.guru3d.com/data/avatars/m/246/246171.jpg
Of course frequency matters. You can have the most efficient machine ever, and if the RPM is low enough everything else will surpass it. Unless GCN has changed radically, these "shaders" are comparable to Fiji's; having 40% fewer of them and running them at a slower speed than Fiji will result in lower overall performance.
Yes, frequency matters, but not as much as you're making it seem. As far as I'm concerned, GCN has changed radically: Fiji was 8.9 billion transistors and Vega is supposed to be 18 billion, which is very significant. We don't know where those transistors are going, but maybe they have something to do with the shaders (which would mean not needing as many of them, or as high a clock). If the pipelines are long enough, who knows, maybe there's a form of hyper-threading involved that we're just not being told about.
GFLOPS are not the only thing that matters: what about graphics operations? How well will those 64 ROPs work at 1GHz compared to 2GHz? The specs also show GDDR5. The major benefits of a die shrink come from a combination of cramming in more hardware and clocking it higher, and with these specs neither of those two things happens.
In the case of a GPU, shouldn't there be a direct correlation between GFLOPS and graphics operations? Maybe I'm wrong, but think about it: you're concerned about something that, from AMD's perspective, makes no sense. Why would they intentionally make something worse? How likely is it to be a problem if everything else is increased/improved? Anyway, I never said or implied that any one of those hardware changes alone is enough to make up for a lower frequency; but collectively, they do matter. The point is, just because something seems suspiciously small, that doesn't mean it's worse. And sure, maybe there's a typo in these numbers, but personally, I don't find the specs all that unreasonable.
https://forums.guru3d.com/data/avatars/m/258/258664.jpg
Totally agree. And to think I was so close to pulling the trigger on another 970. Boy, would I have been disappointed.
I can understand such a feeling; it's probably even a bit more disappointing with the 970 than with my base 980, because of the performance I get with one card. Tbh, I'm not that disappointed. It works as well as it gets in the main game I play, BF4, and will do so once BF5 launches, and everything else I play also works fine for now. Yet I have to admit I was hoping for more: more SLI support in engines, more chances to see stacked/combined resources with DX12, more performance than under DX11 in general. I was a bit over-ambitious, I have to admit, as now with DX11 EVERYTHING related to SLI (and CFX I guess) ends up in the devs' hands, and they aren't exactly doing a good job overall. And I don't even want to think about optimization for SLI, when many times I wish even basic optimization were more the rule than the exception. It feels like most engines out there simply don't support SLI (and CFX I guess); that alone should tell a buyer not to invest money in it, I just wasn't aware of it. So effectively an SLI system (and to some extent CFX too, I guess) loses performance at twice the rate: the two cards' performance decreases over the following years and their games, and doubly so as SLI support seems to be on the losing end. All in all, I can just say I misjudged the whole situation before I made my purchase. I built my rig to run 144Hz with my current display in BF4 on ultra, and it does exactly that, GPUs not overclocked. With a little headroom I hope to carry that over to BF5, so in the end the rig works as intended. I am just disappointed that the chance to really use all of your system's resources at their most efficient, which is what I hoped for with DX12, does not seem to be coming around at all. At the moment we see features that our systems have (like FreeSync/G-Sync, for instance) being bugged under DX12, games being fps-locked, no support for overlays... it seems that with the glorious DX12 we get less and not more. Except if you run AMD hardware, that is; and after 2 ATI rigs and now 2 Nvidia rigs (the current one being the 2nd), the cards are shuffled anew. Quite interesting, I'd say; my money's already waiting to be spent in 2017.
data/avatar/default/avatar14.webp
I'm very confident that AMD somehow has a "secret sauce" related to CrossFire. I guess the X2/Duo cards will come with all chips; maybe they figured out how to make a game that isn't CFX-enabled use both cards, or they firmly believe that DX12 will make it work like it should.