Rumor: Polaris Validation Failed, Might Launch in October Now

If what is true? Pascal doesn't have async compute. We know that; that doesn't mean it was rushed. Eh, potentially. They usually get engineering samples back from the foundries well before they commit to production silicon, though. For example, there was a video recently about Nvidia's chip failure lab (where they detect issues with chips coming back from the foundries so they can be fixed), and the guy mentioned that they were already receiving 10nm test chips from foundries. So I find it kind of hard to believe that AMD didn't know clock speeds would be an issue, unless they completely skipped that step or decided to design around it somehow.
I know AMD really harped on async compute in their premature marketing, but is it really as big a deal as people think it is? It seems to me it might be more a great marketing term than anything else. I know AMD seems to be doing well in DX12 so far, but I think it's driver-related, and AMD is banking on people switching to red in hopes of future performance... Maybe I'm jaded, but I'm not trusting AMD to fulfill promises of anything in the future again.
Nope, they didn't. Nvidia patched their driver so it doesn't expose the feature; the game thinks async isn't supported and doesn't use it on Nvidia cards. But we don't know if it's the same for Pascal. I wonder what a proper benchmark would be? Because thus far the Nvidia fanboys have disregarded every DX12 game on the market. Are Unreal Engine titles the only ones you consider real benchmarks?
Rise of the Tomb Raider isn't Unreal, and the Ti wins in that. In Quantum Break the Ti loses by 3 fps at stock, but you can overclock almost every single Ti by 35%, and then it just blows past the Fury X. You can tie a Fury X with an overclock in Ashes of the Singularity too. Pascal has better pre-emption but no async, afaik.
All end users want price/perf; they don't care about red or green or whatever logo is on the GPU. The second thing they care about is how good the drivers and support are. No loyalty here. At the moment Nvidia leads with better GPU yields and drivers, but they are more expensive; like Apple, they sell 50% name and the rest is hardware. As for AMD, we'll see what they bring in the future. Personally, I just want price vs. perf 🙂
To an extent yes, but once people are happy with a brand they tend to want to stick with it.
Rise of the Tomb Raider isn't Unreal, and the Ti wins in that. In Quantum Break the Ti loses by 3 fps at stock, but you can overclock almost every single Ti by 35%, and then it just blows past the Fury X. You can tie a Fury X with an overclock in Ashes of the Singularity too. Pascal has better pre-emption but no async, afaik.
Oh man, you can't say **** like this... They're gonna get you in your sleep for this. QB results vary hugely depending on the reviewer. The main point is, it runs like ASS at 1080p (with the disgusting upscaling disabled) no matter what card you're using. The game should be relegated to the videogaming latrine. RotTR, on the other hand, actually benefited from the move to DX12; it performs far better and without stutters.
IF this rumor proves to be true, like I already said, it's very, very bad for AMD.
Oh man, you can't say **** like this... They're gonna get you in your sleep for this. QB results vary hugely depending on the reviewer. The main point is, it runs like ASS at 1080p (with the disgusting upscaling disabled) no matter what card you're using. The game should be relegated to the videogaming latrine. RotTR, on the other hand, actually benefited from the move to DX12; it performs far better and without stutters.
Quantum Break isn't very good anyway.
Polaris 10 is using GlobalFoundries' Samsung-licensed 14nm node; Polaris is also on TSMC's 16nm node, like Pascal.
The group has confirmed that they will be utilizing both traditional partner TSMC’s 16nm process and AMD fab spin-off (and Samsung licensee) GlobalFoundries’ 14nm process, making this the first time that AMD’s graphics group has used more than a single fab.
So... errrr... GF having problems with a new node again? Little edit... apparently Polaris is from both fabs... can't imagine both GF and TSMC struggling, though.
To an extent yes, but once people are happy with a brand they tend to want to stick with it.
Main thing I have against nVidia is that I don't trust the company. There are so many examples in the past of how the company fibs in its marketing specs--just like they did with Maxwell and d3d12--you can't be "12_1" compliant without async compute hardware. First they said they'd "turn it on" (lol--they've done that more than once about more than one feature) and then they just stopped talking about it altogether--but they are still pushing fraudulent d3d specs for Maxwell even now. They hope the customer will be too stupid to notice--and sadly, he often is. I often don't believe anything they say, more or less. I got a bellyful of nVidia in the late 90's & early 00's. You should have seen how nVidia fought tooth & nail against 3dfx's introduction of FSAA in 3d gaming--just because nVidia couldn't match it. Never forget that stuff. Ever.
I know AMD really harped on async compute in their premature marketing, but is it really as big a deal as people think it is? It seems to me it might be more a great marketing term than anything else. I know AMD seems to be doing well in DX12 so far, but I think it's driver-related, and AMD is banking on people switching to red in hopes of future performance... Maybe I'm jaded, but I'm not trusting AMD to fulfill promises of anything in the future again.
Async essentially fills in the gaps of underutilized shaders. If the shaders are going underutilized, async will automagically shift compute code in to run simultaneously, utilizing as close to 100% of the pipeline as possible. It's kind of similar to SMT for CPUs. Regardless, in most titles it only shows a 4-6% gain in performance. In rare cases it can show far more, up to 20% for example in the Ashes 4K benchmark on a Fury X. But in reality all that tells me is that the Fury X is being heavily underutilized at 4K in terms of graphics commands, probably because it's limited to 64 ROPs. It basically boosts pipeline efficiency by injecting compute code into the pipeline when it's not running at 100%. Which is good, but it comes at the cost of increased chip complexity, die size and power consumption. I'm not sure those trades are worth a 4-6% increase in performance. That being said, we have no idea how engine development is going to change going forward and whether there will be a greater emphasis on compute. At least this is how I understand it.
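The "filling in the gaps" idea above can be sketched with a toy model. This is not real GPU code and all the numbers (64 slots, 80% graphics utilization, the workload sizes) are made up purely for illustration; it just shows why overlapping compute into idle shader slots finishes the same total work in fewer cycles than running compute serially afterwards.

```python
# Toy model of async compute: a "GPU" with 64 shader slots per cycle.
# Graphics alone keeps only ~80% of the slots busy each cycle; async
# compute fills the idle slots with compute work in the same cycle
# instead of running all the compute serially after graphics finishes.

def cycles_needed(graphics_work, compute_work, slots=64,
                  graphics_util=0.8, async_on=False):
    """Cycles to finish both workloads, measured in slot-cycles of work."""
    cycles = 0
    while graphics_work > 0 or compute_work > 0:
        idle = slots
        if graphics_work > 0:
            g = min(graphics_work, int(slots * graphics_util))
            graphics_work -= g
            # without async, the leftover slots simply go to waste
            idle = (slots - g) if async_on else 0
        if compute_work > 0 and idle > 0:
            compute_work -= min(compute_work, idle)
        cycles += 1
    return cycles

serial = cycles_needed(6400, 640, async_on=False)     # 136 cycles
overlapped = cycles_needed(6400, 640, async_on=True)  # 126 cycles
print(f"serial: {serial}, async: {overlapped}")
```

With these made-up numbers the gain is about 8%, in the same ballpark as the 4-6% figure quoted above; the gain grows as graphics utilization drops, which is the poster's point about the Fury X at 4K.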
How can this be true when the PS4 NEO dev kits have Polaris GPUs at 911MHz? Also, there is another news item that says that not only is Polaris on time, but Vega is 5 months early 🤓
Main thing I have against nVidia is that I don't trust the company. There are so many examples in the past of how the company fibs in its marketing specs--just like they did with Maxwell and d3d12--you can't be "12_1" compliant without async compute hardware. First they said they'd "turn it on" (lol--they've done that more than once about more than one feature) and then they just stopped talking about it altogether--but they are still pushing fraudulent d3d specs for Maxwell even now. They hope the customer will be too stupid to notice--and sadly, he often is. I often don't believe anything they say, more or less. I got a bellyful of nVidia in the late 90's & early 00's. You should have seen how nVidia fought tooth & nail against 3dfx's introduction of FSAA in 3d gaming--just because nVidia couldn't match it. Never forget that stuff. Ever.
Oh boy... You can, absolutely, be D3D12_1 compliant without 'async compute'. 'Async compute' is actually just concurrent execution of graphics + compute; it is not a requirement, it's simply a feature that is enabled by the new API. AMD calls it Async Shaders.
Async essentially fills in the gaps of underutilized shaders. If the shaders are going underutilized, async will automagically shift compute code in to run simultaneously, utilizing as close to 100% of the pipeline as possible. It's kind of similar to SMT for CPUs. Regardless, in most titles it only shows a 4-6% gain in performance. In rare cases it can show far more, up to 20% for example in the Ashes 4K benchmark on a Fury X. But in reality all that tells me is that the Fury X is being heavily underutilized at 4K in terms of graphics commands, probably because it's limited to 64 ROPs. It basically boosts pipeline efficiency by injecting compute code into the pipeline when it's not running at 100%. Which is good, but it comes at the cost of increased chip complexity, die size and power consumption. I'm not sure those trades are worth a 4-6% increase in performance. That being said, we have no idea how engine development is going to change going forward and whether there will be a greater emphasis on compute. At least this is how I understand it.
Where are you getting the Fury X 20% gain at 4K?
How can this be true when the PS4 NEO dev kits have Polaris GPUs at 911MHz? Also, there is another news item that says that not only is Polaris on time, but Vega is 5 months early 🤓
Yeah, that was a mistake; it was just a rumor posted by a random guy on a German forum... It was actually Polaris being late.
Where are you getting the Fury X 20% gain at 4K?
Anandtech: http://images.anandtech.com/graphs/graph10067/80354.png To me, though, if async is making a 20% performance gain, that means the shaders are being underutilized for graphics. Which makes no sense, as at a higher resolution utilization should be increased. The only thing I can think of is that the 64 ROPs limit the Fury X's 4K performance in certain situations. That is where async comes in and can fill those gaps and boost performance in other ways.
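The inference in this post can be made explicit with some back-of-envelope arithmetic. Assuming (purely for illustration) that async compute's speedup comes entirely from filling idle shader cycles, a measured throughput gain g puts an upper bound on how busy graphics alone was keeping the shaders:

```python
# If async compute only reclaims idle shader cycles, then a throughput
# gain g means graphics alone was using at most 1 / (1 + g) of the
# shader array: the reclaimed fraction must have been sitting idle.

def max_baseline_utilization(gain):
    """Upper bound on graphics-only shader utilization implied by a gain."""
    return 1.0 / (1.0 + gain)

# A 20% gain (the Ashes 4K Fury X number discussed above) implies
# graphics alone was using at most ~83% of the shader array.
print(round(max_baseline_utilization(0.20), 3))  # 0.833
```

That is exactly why a 20% async gain at 4K reads as evidence of a bottleneck elsewhere (e.g. the 64 ROPs) leaving the shader array underfed.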
Anandtech To me, though, if async is making a 20% performance gain, that means the shaders are being underutilized for graphics. Which makes no sense, as at a higher resolution utilization should be increased. The only thing I can think of is that the 64 ROPs limit the Fury X's 4K performance in certain situations. That is where async comes in and can fill those gaps and boost performance in other ways.
That was the beta; the release version presents very different results. Also, about the ROPs: it's possible, and combined with 4x the per-pixel load compared to 1080p, that should really saturate the shader array.
Oh boy... You can, absolutely, be D3D12_1 compliant without 'async compute'. 'Async compute' is actually just concurrent execution of graphics + compute; it is not a requirement, it's simply a feature that is enabled by the new API. AMD calls it Async Shaders. Where are you getting the Fury X 20% gain at 4K? Yeah, that was a mistake; it was just a rumor posted by a random guy on a German forum... It was actually Polaris being late.
Why is this true? AMD has been showing Polaris silicon since January.
Why is this true? AMD has been showing Polaris silicon since January.
Well, it's not like they showed you the clocks. They may well have had working ES silicon (at low clocks) since January, but for some reason they are only now going for a new stepping, A1. It's not certainly true, obviously, but it lines up with recent reports we got from AIBs, who say they have nothing for Polaris @ Computex. This is more credible than Vega releasing in October, which was literally a random guy on a German forum who made that statement before it was parroted by everyone and their dog; that guy actually misunderstood a post about Polaris being delayed to October and thought it was Vega. The guy who originated this rumor, on the other hand, has a track record of accurate leaks.
So much bull**** people are willing to believe...incredible...
Polaris 10 is using GlobalFoundries' Samsung-licensed 14nm node; Polaris is also on TSMC's 16nm node, like Pascal. So... errrr... GF having problems with a new node again? Little edit... apparently Polaris is from both fabs... can't imagine both GF and TSMC struggling, though.
I think Polaris 10/11 are both 14nm GF. I think the GPUs going on Zen, based on the Polaris architecture, are being developed on 16nm TSMC, along with the entire Zen processor.
So much bull**** people are willing to believe...incredible...
Well, can you explain why AMD have been ****ting on their own parade? They've been talking about Polaris left, right and center since January, painting it as the second coming of Christ... and not a peep heard from them since. They had a similar problem with the R600 series... took them two quarters to find the problem in their cell libraries. TWO QUARTERS.
I think Polaris 10/11 are both 14nm GF. I think the GPUs going on Zen, based on the Polaris architecture, are being developed on 16nm TSMC, along with the entire Zen processor.
I've been hearing many people tell me Polaris is also TSMC. I find that very weird, and dangerous. AMD has a bad track record with their libs; if I'm not mistaken, they claimed they were licensing custom libs for Polaris... If they are having trouble clocking on GF as the rumor claims, I can't imagine splitting their work over different libs for TSMC/GF is a good idea, lol. God, I really hope this isn't true... It would be such a damned loss for everyone if Polaris is DOA.
So... this guy really made a second account? Just lol :bang: