NVIDIA Pascal GP104 Die Photo
Last week Nvidia announced the Tesla P100 data-center GPU, and if you looked closely at the photos you could already clearly see that big 15-billion-transistor Pascal GPU in use. A new photo has now surfaced, this time showing the GP104 GPU (the successor to the GM204 used in the GTX 970/980).
According to the leak, the photo shown below is the GP104 (the GM204 successor) intended for Nvidia's pending high-end products, and it measures roughly 290-300 mm². On the right side you can see a Samsung K4G80325FB chip - 1.5V, 8Gb, 8Gbps (8000 MHz effective) GDDR5. Word on the street is that this GPU holds 2,560 shader processors.
The rectangular die of the GP104 was measured at 15.35 mm x 19.18 mm, which should house (very speculatively) a transistor count of 7.4 to 7.9 billion. Interesting to see is that this chip is paired with Samsung 8-gigabit GDDR5 memory chips. These ICs run at an effective speed of 8 GHz (plain GDDR5, thus not GDDR5X). On a 256-bit bus you'd be looking at 256 GB/s of bandwidth. The shader processor count of the GP104 could be closer to 2,560 than the 4,096 from an older report.
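As a sanity check, the figures quoted above are easy to reproduce. This is a rough sketch in Python using the rumored numbers from the leak (bus width, per-pin data rate, measured die edges, speculative transistor count) - none of these inputs are official:

```python
# Back-of-the-envelope checks for the leaked GP104 figures.
# All inputs are rumored/measured values from the article, not official specs.

bus_width_bits = 256   # rumored GP104 memory bus width
data_rate_gbps = 8     # Samsung GDDR5 effective speed, 8 Gbps per pin

# Peak memory bandwidth: bus width (bits) * per-pin rate (Gbps) / 8 bits per byte
bandwidth_gb_s = bus_width_bits * data_rate_gbps / 8
print(f"Peak bandwidth: {bandwidth_gb_s:.0f} GB/s")    # 256 GB/s

# Die area from the measured edges
die_w_mm, die_h_mm = 15.35, 19.18
area_mm2 = die_w_mm * die_h_mm
print(f"Die area: {area_mm2:.1f} mm^2")                # ~294.4 mm^2, within the 290-300 range

# Implied density for the speculative 7.4-7.9 billion transistor count
for count_billion in (7.4, 7.9):
    density = count_billion * 1000 / area_mm2          # million transistors per mm^2
    print(f"{count_billion}B transistors -> {density:.1f} M/mm^2")
```

The die area works out to about 294 mm², consistent with the article's 290-300 mm² estimate, and the 256-bit/8 Gbps combination indeed yields exactly 256 GB/s.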
Earlier on it was speculated that graphics cards based on these GPUs would be called X70 and X80, though the specs do not match up even slightly.
Senior Member
Posts: 7836
Joined: 2010-08-28
Can't come soon enough; my 680 has served me well for 4 years.
Senior Member
Posts: 954
Joined: 2010-08-24
I'd be disappointed if we don't see GDDR5X, but at the same time I question whether it's actually needed. Even a 980 Ti overclocked to hell (core OC only) won't get bandwidth-starved any time soon.
Let's be serious, the hype around HBM was absurd - I cannot describe it any other way. Claims like '4GB HBM = 8GB GDDR5' or 'HBM will make 4K possible on a single card'. What the fek, seriously. Sounds like the same people who say that DDR4 made their PCs much faster (without using the IGP).
I haven't seen conclusive proof that HBM actually helps the Fury X to a tangible level. If anyone has such proof, please share it with me.
What makes you say that? Genuinely curious.
I believe it all depends on how they implement DX12 features.
Atm AMD has an advantage in DX12 because everybody seems happy to jump on async shading, but we haven't touched conservative rasterization yet, something GCN does not support.
If these guys (AMD/Nvidia) don't start supporting conservative raster and async shading respectively, we will see a huge ****fest similar to what's going on right now DX12-wise. Devs will have to side with one party and ditch the other.
Occasionally we might have the amazing dev which will actually take the time to optimize properly for both parties (for example coding volumetric lighting to take advantage of either conservative raster or async shading at will). Like I said. Sh!tfest.
Member
Posts: 51
Joined: 2015-08-08
I'd say this is just filler to drive some sales. Not what we're waiting for (HBM2, 15B transistors).
Senior Member
Posts: 12980
Joined: 2014-07-21
If these guys (AMD/Nvidia) don't start supporting conservative raster and async shading respectively, we will see a huge ****fest similar to what's going on right now DX12-wise. Devs will have to side with one party and ditch the other.
Occasionally we might have the amazing dev which will actually take the time to optimize properly for both parties (for example coding volumetric lighting to take advantage of either conservative raster or async shading at will). Like I said. Sh!tfest.
I tend to agree.
Senior Member
Posts: 902
Joined: 2007-09-03
Hm, going to be interesting to see what this turns out to be. Seems a bit underwhelming?