AMD Polaris 11 shows in CompuBench, has 1024 Shader processors

I'm very confident that somehow AMD has a "secret sauce" related to CrossFire. I guess the X2/Duo cards will come with all chips; maybe they figured out how to make a game that isn't CFX-enabled use both cards, or they firmly believe that DX12 will make it work like it should.
Duo will follow the exact same path as the 295X2, but at least 4 GB of VRAM per GPU was enough for 4K two years ago... not now. There is no AMD "secret sauce" for DX11, where the GPU maker can still do something about multi-GPU support. In DX12 all the multi-GPU support is in the game devs' hands; you can see their "secret source" in Quantum Break right now:
- No SLI
- No CFX
- And of course no "agnostic" multi-GPU like the one we see in AMD's DX12 showcase, AOTS.
UWP is not helping, but it's not a limiting factor in this respect: exclusive fullscreen mode is not needed for multi-GPU support in DX12 games like it was on DX11.
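To illustrate the point about DX12 leaving multi-GPU to the developers, here is a minimal, hedged C++ sketch (assumptions: Windows SDK available, d3d12.lib and dxgi.lib linked, error handling omitted): the API exposes each GPU as a separate adapter, and anything resembling SLI/CFX has to be built by the application on top of the per-adapter devices.

```cpp
// Minimal sketch: under DX12 the runtime no longer hides multiple GPUs behind
// one device; the application enumerates each adapter itself and decides how
// (or whether) to use it.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);

        // Each physical GPU becomes its own ID3D12Device; splitting work across
        // them (AFR, split-frame, or nothing at all) is entirely the game
        // developer's job from this point on.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device)))) {
            std::printf("Adapter %u: %ls\n", i, desc.Description);
        }
    }
    return 0;
}
```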
Theoretical floating-point performance is proportional to frequency; I couldn't care less what those 10 billion extra transistors go towards. 4096 cores at 1050 MHz = a maximum of 8.6 TFLOPS, just like Fiji. They can improve utilization, they can improve performance at low occupancy, but the theoretical limit remains identical to Fiji's. If Vega only has 4096 cores, I expect the frequency to be way higher than Fiji's.
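As a quick sanity check on that arithmetic, here is a small standalone C++ sketch of the usual peak-FP32 formula (shader cores x 2 FMA ops per clock x frequency); the higher Vega clock below is purely an illustrative assumption, not a leaked spec.

```cpp
#include <cstdio>

// Theoretical peak FP32 throughput: each shader core can retire one FMA
// (2 floating-point ops) per clock, so peak TFLOPS = cores * 2 * GHz / 1000.
double peak_tflops(int shader_cores, double clock_ghz) {
    return shader_cores * 2.0 * clock_ghz / 1000.0;
}

int main() {
    // Fiji / Fury X: 4096 cores at 1.05 GHz -> ~8.6 TFLOPS, as stated above.
    std::printf("4096 cores at 1.05 GHz: %.1f TFLOPS\n", peak_tflops(4096, 1.05));
    // Hypothetical higher-clocked part with the same core count (assumption).
    std::printf("4096 cores at 1.50 GHz: %.1f TFLOPS\n", peak_tflops(4096, 1.50));
    return 0;
}
```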
Most likely ALL the info we have is bollocks. It's not like it's not like this every time.
Yeah, I'm just confirming that "frequency doesn't matter" is wrong; it has exactly the same weight as the number of FP32 units.
Sweet! Can't wait to see the benchmarks on these beasts :banana:
I can understand such a feeling; it's probably even a bit more disappointing with the 970 than with my 980 base, because of the performance I get with one card. Tbh, I'm not that disappointed. It works as well as it gets in the main game I play, BF4, and will do so once BF5 launches, and everything else I play also works fine for now.

Yet I have to admit I was hoping for more. More SLI support in engines, more chances to see stacked/combined resources with DX12, more performance than under DX11 in general. I was a bit over-ambitious, I have to admit, as now with DX11 EVERYTHING related to SLI (and CFX I guess) ends up in the devs' hands, and they aren't exactly doing a good job overall. And I don't even want to think about optimizations for SLI, when many times I wish even basic optimization were more the rule than the exception. It feels like most engines out there simply don't support SLI (and CFX I guess). That alone should tell a buyer not to invest money in it; I just wasn't aware of it. So effectively an SLI system (to some extent CFX too, I guess) loses performance at twice the rate, as two cards' performance decreases over the following years and their games, and doubly so as SLI support seems to be on the losing end.

After all I can just say I misjudged the whole situation before I made my purchase. I built my rig to run 144 Hz with my current display in BF4 on ultra, and it does exactly that, GPUs not overclocked. With a little headroom I hope to carry that over to BF5, so in the end the rig works as intended. I am just disappointed that the chance to really use all of your system's resources at their most efficient, which is what I hoped for with DX12, does not seem to be coming around at all. At the moment we see resources that our systems have (like FreeSync/G-Sync, for instance) being bugged under DX12, games being FPS-locked, no support for overlays... it seems that with the glorious DX12 we get less and not more. Except if you run AMD hardware, that is, and after 2 ATI rigs and now 2 Nvidia rigs (the current one being the 2nd), the cards are shuffled anew. Quite interesting, I'd say; my money's already waiting to be spent in 2017.
I feel you, but I don't necessarily think it was a bad purchase. I firmly believe that with DX12, SLI and CFX really have the opportunity to shine. What I worry about is those ****ing game devs who seem to have forgotten how to write code. I don't necessarily mean DX12, since it's a new API and all that; but even if we exclude the messy titles that have been launching recently, we've been getting progressively worse-optimized games over the past few years. Think how badly Watch_Dogs performed at launch, or what a mess SoM was. And now we want to put all the optimization power in the hands of the devs? The gap between well- and badly-optimized games will only widen twofold. I'm not worried about the tech; I'm worried about laziness and cheap-ass management.
AoTS is not exactly vendor-agnostic, seeing as it uses heavy amounts of async shading yet hardly touches conservative rasterization, which GCN does not support. I'm sure I'll get a lot of flak for this, but let's face it: AoTS has been AMD's pet project since the beginning, seeing as async shading has been implemented exactly the way AMD wanted it to be; they wrote the specification. It caters perfectly to GCN.
Async compute is part of the spec; "async shaders" is what AMD calls concurrent multi-engine.
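For readers wondering what "concurrent multi-engine" looks like at the API level, here is a minimal, hedged C++/D3D12 fragment (Windows only, error handling omitted): the application simply creates a graphics (DIRECT) queue and a separate COMPUTE queue on the same device, and whether work submitted to the two queues actually overlaps on the GPU is up to the hardware and driver, not the API.

```cpp
// Minimal sketch of D3D12 multi-engine (what AMD markets as "async shaders"):
// one graphics (DIRECT) queue plus one COMPUTE queue on the same device.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main() {
    // Default adapter; assumes the Windows SDK and d3d12.lib are available.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Graphics ("3D") engine queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Compute engine queue: command lists submitted here may run concurrently
    // with graphics work if the GPU and driver support it; whether that helps
    // depends entirely on the workload the developer submits.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}
```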
Nobody can seriously say that you are wrong. 🙂 AOTS is an AMD PR stunt from start to finish: "Look what you can do on DX12 when Nvidia GameWorks is not breaking our balls!". LOL
Huh, the Fury X is already over 8 teraflops... if Vega 10 is the new enthusiast replacement, then it would be ~16 teraflops.
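Running the same peak formula in reverse shows why a ~16 TFLOPS Vega 10 and only 4096 shader processors don't fit together at Fiji-like clocks; the core counts and targets below are purely illustrative, not claimed specs.

```cpp
#include <cstdio>

// Invert peak = cores * 2 * clock to get the clock needed for a target figure.
double required_clock_ghz(double target_tflops, int shader_cores) {
    return target_tflops * 1000.0 / (2.0 * shader_cores);
}

int main() {
    // ~16 TFLOPS with 4096 cores would need roughly 1.95 GHz...
    std::printf("16 TFLOPS @ 4096 cores: %.2f GHz\n", required_clock_ghz(16.0, 4096));
    // ...whereas 8192 cores would get there at Fiji-like clocks.
    std::printf("16 TFLOPS @ 8192 cores: %.2f GHz\n", required_clock_ghz(16.0, 8192));
    return 0;
}
```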
The shader count looks the same and so do the clock speeds; maybe they have higher efficiency. It might be like comparing an Intel Core i5 to an AMD FX, we just don't know lol
I believe it.
Polaris 11:
- R7 460/460X: 128-bit, 1024 stream processors, 2-4 GB GDDR5
- R7 470/470X: 256-bit, 1.... ??? stream processors, 4 GB GDDR5
Polaris 10:
- R9 480/480X: 256/384-bit, 2304-2560 stream processors, 6-8 GB GDDR5/X
- R9 490/490X: 384/512-bit, 3.... stream processors, 8-12-16 GB GDDR5/X
Vega 10 and 11 in 2017: 500 series with GDDR5X and HBM2.
TBH those stats for Vega look far too close to a Fury X die shrink. Considering AMD's roadmap and other things, I think those stats are either incorrect or just hijacked from a Fury spreadsheet. The stats from a couple of posts above are more believable, but even then still a bit odd. Looks like I have to wait a bit longer to find out from the horse's mouth what is on the GPU rather than look at the speculation. Also, what's the point in releasing Vega with the same performance as the Fury X? It's meant to be massively faster, so it would be a bit contrary to release it with the same TFLOPS as the Fury X.