NVIDIA Pascal GP104 Die Photo

Warming up the wallets, time for some new GPUs, peeps.
Not for this small guy. Big things are needed... Vega?
So no more GTX brand for NVIDIA after all these years, now it's just X.
Seems quite underwhelming, not even GDDR5X memory.
I'm not excited by this. Seems way underwhelming. Sure, it will be faster than the 9xx series, but not by that much. And since HBM and GDDR5X are not ready yet for the mainstream, we have to wait until the next generation to see a good performance jump. They need to push 4K gaming with a single card to the mainstream.
Hm, going to be interesting to see what this turns out to be. Seems a bit underwhelming?
Can't come soon enough, my 680 served me well for 4 years.
I'll be disappointed if we don't see GDDR5X, but at the same time I'm questioning whether it's actually needed. Even an overclocked-to-hell 980 Ti (core OC only) won't get bandwidth starved any time soon. Let's be serious, the hype around HBM was absurd; I cannot describe it any other way. Claims like "4GB HBM = 8GB GDDR5" or "HBM will make 4K possible on one single card". What the heck, seriously. Sounds like the same people who say that DDR4 made their PCs much faster (no IGP). I haven't seen conclusive proof that HBM actually helps the Fury X to a tangible level. If anyone has such proof, please share it with me.
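For context, peak memory bandwidth is just bus width times data rate, which makes the 980 Ti vs. Fury X gap easy to put in numbers. A minimal sketch using the published specs (nothing GP104-specific here):

```cpp
// Peak memory bandwidth: bus width (bits) / 8 * data rate (GT/s) = GB/s.
#include <cstdio>

double peak_bandwidth_gbs(int bus_width_bits, double data_rate_gtps) {
    return bus_width_bits / 8.0 * data_rate_gtps;
}

int main() {
    // 980 Ti: 384-bit GDDR5 at 7 GT/s (more with a memory OC).
    std::printf("980 Ti: %.0f GB/s\n", peak_bandwidth_gbs(384, 7.0));
    // Fury X: 4096-bit HBM1 at 1 GT/s.
    std::printf("Fury X: %.0f GB/s\n", peak_bandwidth_gbs(4096, 1.0));
    return 0;
}
```

512 GB/s vs. 336 GB/s is a real gap on paper; whether games actually need it is exactly the question raised above.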
I said it a few days ago, and still believe this. I think Polaris is going to come out on top at all price points this round. I think people are going to be surprised in the next couple months with what AMD will have to offer.
What makes you say that? Genuinely curious. I believe it all depends on how they implement DX12 features. At the moment AMD has an advantage in DX12 because everybody seems happy to jump on async shading, but we haven't touched conservative rasterization yet, something GCN does not support. If these guys (AMD/NVIDIA) don't start supporting conservative raster and async shading respectively, we will see a huge ****fest similar to what's going on right now DX12-wise. Devs will have to side with one party and ditch the other. Occasionally we might get the amazing dev who will actually take the time to optimize properly for both parties (for example, coding volumetric lighting to take advantage of either conservative raster or async shading at will). Like I said: sh!tfest.
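For what it's worth, conservative rasterization is exposed as a queryable cap in D3D12, so a dev can at least branch on it per GPU; async compute has no equivalent cap bit, it only shows up in how much you gain from extra compute queues. A rough sketch of the feature check, assuming standard D3D12 headers and the default adapter:

```cpp
// Sketch: query D3D12 caps relevant to the "conservative raster vs. async" split.
// Build on Windows with the D3D12 SDK headers; link d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    // Create a device on the default adapter at feature level 11.0.
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::printf("No D3D12 device available.\n");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                              &opts, sizeof(opts)))) {
        // Tier 0 means "not supported"; at the time, Maxwell 2 reported a tier
        // of 1 or higher while GCN reported 0, which is the split discussed above.
        std::printf("Conservative rasterization tier: %d\n",
                    static_cast<int>(opts.ConservativeRasterizationTier));
        std::printf("Rasterizer-ordered views: %s\n",
                    opts.ROVsSupported ? "yes" : "no");
    }
    return 0;
}
```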
I'd say this is just a filler to make some sales. Not what we're waiting for. (HBM2, 15B transistors)
If these guys (AMD/NVIDIA) don't start supporting conservative raster and async shading respectively, we will see a huge ****fest similar to what's going on right now DX12-wise. Devs will have to side with one party and ditch the other. Occasionally we might get the amazing dev who will actually take the time to optimize properly for both parties (for example, coding volumetric lighting to take advantage of either conservative raster or async shading at will). Like I said: sh!tfest.
I tend to agree.
I mean, it basically lines up with what Polaris 10 is. We aren't getting consumer-level 600 mm² cards first, and we've known that for over a year. I'm not sure why people are surprised by the specs.
I would think both AMD and NVIDIA will have full DX12 support with the upcoming GPUs. Even Intel does.
I'd say this is just a filler to make some sales. Not what we're waiting for. (HBM2, 15B transistors)
This is probably the 2nd most important GPU in NVIDIA's lineup, after GP100, because of the volumes sold at hefty prices in desktop and mobile, in both pro and GTX variants. But how they'll be able to do it with GDDR5 and a 256-bit bus, I have no idea. Faster memory modules and better compression are obvious starting points. Hopefully it will make the 980 Ti completely obsolete. No worries, it will :P Because anything else would be a fail.
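Rough numbers on the 256-bit question: to match a 980 Ti's 336 GB/s on a 256-bit bus you need either roughly 10.5 GT/s memory or a compression gain making up the difference. A back-of-the-envelope sketch; the compression multiplier below is purely an assumed illustration, not a known GP104 figure:

```cpp
// How fast does 256-bit memory have to run to match a 384-bit / 7 GT/s part?
#include <cstdio>

int main() {
    double target_gbs       = 384 / 8.0 * 7.0;              // 980 Ti: 336 GB/s
    double rate_needed_raw  = target_gbs / (256 / 8.0);     // ~10.5 GT/s, no compression help
    double assumed_dcc_gain = 1.2;                          // hypothetical 20% effective-bandwidth gain
    double rate_needed_dcc  = rate_needed_raw / assumed_dcc_gain; // ~8.75 GT/s

    std::printf("Raw data rate needed on 256-bit : %.1f GT/s\n", rate_needed_raw);
    std::printf("With an assumed 20%% compression : %.2f GT/s\n", rate_needed_dcc);
    return 0;
}
```

So "faster modules plus better compression" is not hand-waving; the arithmetic works out if both move a modest amount.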
Of course the 1070 will be well worth it (again): 980 Ti performance level for about $350-400, combined with 8 GB of GDDR5 onboard.
So no more GTX brand for NVIDIA after all these years, now it's just X.
I don't see the problem, as the GTX brand has been diluted to the point of being meaningless. It used to be that only the high-end models were GTX, like the *70 and *80 series cards. Now you have GTX 950s out the *ss. Seriously? It's a 950.
This is pure speculation. It all depends on the clocks too. This is more or less a shrunk 980 Ti; if it hits 2 GHz then it's fine.
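The clock point is easy to quantify: theoretical FP32 throughput is shader count times 2 (one FMA per clock counts as two ops) times clock. The GP104 shader count below is a placeholder guess purely for illustration; only the 980 Ti figures are real:

```cpp
// Theoretical FP32 throughput: TFLOPS = shaders * 2 (FMA) * clock_GHz / 1000.
#include <cstdio>

double tflops(int shaders, double clock_ghz) {
    return shaders * 2.0 * clock_ghz / 1000.0;
}

int main() {
    // 980 Ti: 2816 shaders, ~1.2 GHz typical boost.
    std::printf("980 Ti @ 1.2 GHz          : %.1f TFLOPS\n", tflops(2816, 1.2));
    // Hypothetical shrunk part: fewer shaders, much higher 16nm clocks (guesses).
    std::printf("GP104 guess, 2560 @ 1.8 GHz: %.1f TFLOPS\n", tflops(2560, 1.8));
    std::printf("GP104 guess, 2560 @ 2.0 GHz: %.1f TFLOPS\n", tflops(2560, 2.0));
    return 0;
}
```

In other words, clocks alone can swing the comparison by a third or more, which is why the spec sheet by itself says little.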
I don't see the problem, as the GTX brand has been diluted to the point of being meaningless. It used to be that only the high-end models were GTX, like the *70 and *80 series cards. Now you have GTX 950s out the *ss. Seriously? It's a 950.
Word. The 950 should not get the GTX badge. But at least they dropped the GT vs. Ultra suffixes; those were a bit confusing. It should be as it was with the 500 series: 550 and under was GT, 560 and above was GTX. At least they're not overinflating their numbers like AMD does: 7850, 7870, 7950, 7970, 7990 versus 650, 660, 670, 680, 690. Seriously, it's very confusing, and even now I have to look up benchmarks every damn time I want to compare the cards.
I haven't seen conclusive proof that HBM actually helps the Fury X to a tangible level. If anyone has such proof, please share it with me.
Yes, obviously it pushes it to the top, but sadly for us AMD made the terrible choice of HBM1, which is limited to 4 GB... so performance is crippled by the lack of RAM space (and in some cases even the previous-gen high end does better). Now, about the "GDDR5 only" drama: in the picture I just see a prototype or sample; if it works with GDDR5 then it works with GDDR5X too, it's pin-to-pin compatible (I can already smell old GPU stock with GDDR5X and a new name... lol). And GDDR5X is clearly nice RAM, and costs less than HBM2 (which will be used for the high end in red and green flavors for sure). "Don't sell the bear's skin before you've caught it": no GPU from either company is ready... let them come out and get tested 🙂
I don't see the problem, as the GTX brand has been diluted to the point of being meaningless. It used to be that only the high-end models were GTX, like the *70 and *80 series cards. Now you have GTX 950s out the *ss. Seriously? It's a 950.
GTX means gaming, not only high end at 4K over 100 fps 🙂 There are some who don't need that much power (or money, lol). The GTX 950 is aimed at 1080p 60 fps for MOBAs and games like WoT, etc... and it does this pretty well, and even in some top games from last year it performs really well (I was clearly impressed the first time I paired it with an i3... you get a lot for the money... I understand its success). It is not a high-end GPU, but it's clearly a GTX.
I kinda get the feeling big Pascal, GP100, will be only for Tesla GPUs. Those big chips are just too expensive to sell in consumer gear. The DGX-1 box costs around $130,000 with 8 of those GPUs; if you cut off around $30K for the other server components, you're looking at around $12,500 per GPU! I don't think we'll see anything like this chip for consumers until the 16nm process is mature. I bet the chip yields are horrible right now, hence why they cost a fortune. I bet the GP104 chip will be a different design entirely. Well, still based on GP100, but with some changes. They will probably kill all the double precision (DP) units; games don't really make use of that hardware. That silicon space would be much better utilized for single precision (SP) shader cores. Currently half of the silicon space on GP100 is used for DP units; using that space for SP units would be a much better design for a gaming GPU.
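Making that arithmetic explicit, plus the FP64:FP32 split GP100 was announced with; the $30K estimate for the non-GPU parts is the poster's own assumption, carried over as-is:

```cpp
// Implied per-GPU cost from the DGX-1 price, and GP100's announced throughput split.
#include <cstdio>

int main() {
    double dgx1_price    = 130000.0;  // approximate DGX-1 list price
    double non_gpu_parts = 30000.0;   // poster's estimate for the rest of the server
    int    gpu_count     = 8;
    std::printf("Implied cost per GP100: $%.0f\n",
                (dgx1_price - non_gpu_parts) / gpu_count);

    // GP100 as announced: 3584 FP32 cores and 1792 FP64 cores (a 1:2 ratio)
    // at ~1.48 GHz boost, i.e. ~10.6 TFLOPS FP32 and ~5.3 TFLOPS FP64.
    std::printf("FP32: %.1f TFLOPS  FP64: %.1f TFLOPS\n",
                3584 * 2 * 1.48 / 1000.0, 1792 * 2 * 1.48 / 1000.0);
    return 0;
}
```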