NVIDIA GP100 Pascal Silicon Reportedly Spotted

https://forums.guru3d.com/data/avatars/m/243/243702.jpg
That price tag is cute, if anyone remembers the sample-shipping-to-release price correlations from the past...
https://forums.guru3d.com/data/avatars/m/237/237771.jpg
If it's gonna be that "slow" they can keep it. For such a massive spec it should be at least 50% faster.
The 1070, or whatever it will be called, will not be a GP100; it will likely be half the size. And I expect it to be about 10-20% faster than the 980 Ti but around $400.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
I'm Denying these facts 😀
Lol. The problem is the cost. Look at the yields for Fury X; they clearly aren't very good. Couple that with this being the first tri-gate GPU, 16nm being more expensive, and it being Nvidia's first HBM/interposer project on a newish architecture, and yeah, all those things are going to prevent Nvidia from shipping a huge 17B-transistor GPU. If we look at history, Kepler's launch chip @ 28nm was only ~500M more transistors than Fermi @ 40nm.
https://forums.guru3d.com/data/avatars/m/218/218363.jpg
Lol. The problem is the cost. Look at the yields for Fury X; they clearly aren't very good. Couple that with this being the first tri-gate GPU, 16nm being more expensive, and it being Nvidia's first HBM/interposer project on a newish architecture, and yeah, all those things are going to prevent Nvidia from shipping a huge 17B-transistor GPU. If we look at history, Kepler's launch chip @ 28nm was only ~500M more transistors than Fermi @ 40nm.
Jokes aside, I suppose you're right, but time will tell. We can but hope, right 🙂 I think we'd need something 50% faster than a 980 Ti to make 4K a viable option and not just a gimmick that suffers from poor framerates if one wants Ultra detail in a game. Though the holy grail of 4K at 120Hz is still far away.
https://forums.guru3d.com/data/avatars/m/242/242471.jpg
Lol. The problem is the cost. Look at the yields for Fury X; they clearly aren't very good. Couple that with this being the first tri-gate GPU, 16nm being more expensive, and it being Nvidia's first HBM/interposer project on a newish architecture, and yeah, all those things are going to prevent Nvidia from shipping a huge 17B-transistor GPU. If we look at history, Kepler's launch chip @ 28nm was only ~500M more transistors than Fermi @ 40nm.
And yet GF110 was around 2x slower than the full GK110. Why wouldn't a full GP100 chip be 2x faster than GM200 too? GK110 vs GM200 couldn't be, since it was limited to 28nm. And a slightly crippled 1070 GTX variant 50-60% faster than last gen..
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
And yet GF110 was around 2x slower than the full GK110. Why wouldn't a full GP100 chip be 2x faster than GM200 too? GK110 vs GM200 couldn't be, since it was limited to 28nm. And a slightly crippled 1070 GTX variant 50-60% faster than last gen..
A few reasons, the biggest being cost. 40nm to 28nm resulted in a pretty large cost reduction per transistor, about 35% cheaper. 28nm to 16nm actually increases the cost per transistor about 13% over 28nm, and the wafer is twice as expensive. I've seen estimates as high as 3x the cost to develop a 16nm FinFET GPU over 28nm planar.

The second biggest I'd say is architecture. Kepler was, architecturally, a pretty big jump over Fermi. Pascal really isn't. Nvidia is focusing more on HBM2/Mezzanine/mixed-precision FP16/NVLink than on straight-up architecture improvements. They are essentially going to rely on the process switch for power/performance improvements. Kepler focused on both.

The third, and honestly I'd say tied with the second, is lack of competition. Remember that Fermi was the card Charlie said was "too hot and unmanufacturable". The 480/580 were hot cards, and they were competing with 28nm AMD counterparts towards the end of their lifecycle. AMD had room at the time to make a bigger chip; Nvidia didn't. This isn't really a problem for Nvidia at the moment; it's actually the opposite. Nvidia can put forth smaller cards next gen, push the boundary a bit, reap the high margins, and just make something much larger when AMD's new architecture hits the table.

I will most definitely say there is no way they'd sell a GTX 1070 at 50% faster than a 980 Ti. At most it will be 15% faster, with the GTX 1080 being at most 25% faster. Volta is pushed to 2018, so you know Nvidia isn't going to build the full Pascal chip until 2017.
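The cost argument above can be sanity-checked with a quick back-of-envelope calculation. This is only a sketch using the rough percentages quoted in the post (35% cheaper per transistor for 40nm→28nm, 13% more expensive for 28nm→16nm), not official foundry figures:

```python
# Back-of-envelope cost per transistor across nodes, normalized to 40nm.
# Percentages are the rough ones from the post above, not foundry data.
cost_40nm = 1.00
cost_28nm = cost_40nm * (1 - 0.35)   # 40nm -> 28nm: ~35% cheaper per transistor
cost_16nm = cost_28nm * (1 + 0.13)   # 28nm -> 16nm: ~13% more expensive again

print(f"28nm vs 40nm: {cost_28nm:.2f}x cost per transistor")
print(f"16nm vs 40nm: {cost_16nm:.2f}x cost per transistor")
```

So under these assumptions a 16nm transistor still costs less than a 40nm one, but more than a 28nm one, which is the first time a shrink has gone backwards on cost and why a huge 17B-transistor chip at launch looks unlikely.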
Jokes aside, I suppose you're right, but time will tell. We can but hope, right 🙂 I think we'd need something 50% faster than a 980 Ti to make 4K a viable option and not just a gimmick that suffers from poor framerates if one wants Ultra detail in a game. Though the holy grail of 4K at 120Hz is still far away.
Yeah, idk, I don't think we will see 50% faster till the 1180 or the 1080 Ti or w/e they are going to call Big Pascal. I have a few friends that play at 4K on Tis now; most of them don't mind turning a few settings down to hit 60fps. I could see a 1080 at 25% faster eliminating the need for turning settings down (especially with G-Sync), but 4K @ 120Hz I think is a long time off, in terms of games hitting that.
https://forums.guru3d.com/data/avatars/m/134/134194.jpg
I figure my 980 Ti will be good for 1080p gaming until 2nd-gen HBM NVIDIA or 3rd-gen HBM AMD; by then all the early HBM yield problems will be gone and DX12 games will be standard, and then it will be upgrade time. At 1080p I don't see the 980 Ti struggling for a while.
https://forums.guru3d.com/data/avatars/m/242/242471.jpg
Yeah, idk, I don't think we will see 50% faster till the 1180 or the 1080 Ti or w/e they are going to call Big Pascal. I have a few friends that play at 4K on Tis now; most of them don't mind turning a few settings down to hit 60fps. I could see a 1080 at 25% faster eliminating the need for turning settings down (especially with G-Sync), but 4K @ 120Hz I think is a long time off, in terms of games hitting that.
Well, I was speaking of the big, full-fat GP100 chip there, not this midrange bs like GF104/114, GK104 or GM204. But yeah, NV really likes this sell-midrange-as-high-end trend lately, so I think you're right about the 1070 GTX being just 20, maybe 30% faster than the full GM200 chip.
https://forums.guru3d.com/data/avatars/m/206/206905.jpg
I will be going for the full-fat non-Ti version as soon as it becomes available. Straight to custom water 🙂
https://forums.guru3d.com/data/avatars/m/237/237771.jpg
And yet GF110 was around 2x slower than the full GK110. Why wouldn't a full GP100 chip be 2x faster than GM200 too? GK110 vs GM200 couldn't be, since it was limited to 28nm.
Well yeah, they were limited by die size; GK110 to GM200 was still impressive nonetheless. Around 80% faster for the same TDP.
Well, I was speaking of the big, full-fat GP100 chip there, not this midrange bs like GF104/114, GK104 or GM204. But yeah, NV really likes this sell-midrange-as-high-end trend lately, so I think you're right about the 1070 GTX being just 20, maybe 30% faster than the full GM200 chip.
You can blame AMD for that.
https://forums.guru3d.com/data/avatars/m/243/243702.jpg
Well, I was speaking of the big, full-fat GP100 chip there, not this midrange bs like GF104/114, GK104 or GM204. But yeah, NV really likes this sell-midrange-as-high-end trend lately, so I think you're right about the 1070 GTX being just 20, maybe 30% faster than the full GM200 chip.
I think the 1070 outperforming the 980 Ti even by 20% edges into fairy-tale land. Especially since "Bold Text". And if it does, then expect a damn high price premium, since 16nm is not cheaper than 28nm, and I do not see nVidia cutting their profits just so the customer can have a product which will last a few more years, consequently reducing their profits again.

My humble prognosis is a 1080 Ti only 20~25% stronger than the 980 Ti, and the same for the 1070 vs. the 970. And I can see nVidia bringing those big chips into a Titan or similar extra card where the performance of a single chip will easily match the performance of SLI 980 Tis (but for a price...).

In general we tend to think that performance/watt will make a huge jump, but I am not sure. Did we see some magic from 16nm mobile chips compared to 22/28nm? While those chips operate under the most effective conditions (lower voltage, frequencies), we do not see a drastic increase in performance or battery life over the last generation. Looking at Samsung's process jump, it is not that magical: https://en.wikipedia.org/wiki/Exynos The move from 20nm to 16nm allowed the Exynos 5433 to become the Exynos 7420. The CPU part is exactly the same, just a 10% frequency bump; the GPU was beefed up by 33% in size and 10% in clock; memory speed doubled. 1.1 * 1.33 = 1.463, that is 46.3% more graphical power.

But the question is why it came mostly from an increased number of transistors (increased manufacturing cost) and not from a pure frequency boost (just 10% over 20nm). I would say: because the effective clock range did not change much from 20nm. And since desktop GPUs are not operating anywhere near the effective clocks for those processes, 16/14nm may not bring a huge perf/watt improvement. (I know that nVidia got good perf/watt due to lower transistor density on 28nm, and AMD got good perf/watt on Fiji with lower clocks, moving closer to the efficient clock range of the process.)
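The Exynos arithmetic in the post can be written out explicitly. A minimal sketch, assuming the post's figures (~33% more GPU hardware, ~10% higher clock) and treating throughput as clock × width:

```python
# Estimated GPU throughput gain, Exynos 5433 (20nm) -> Exynos 7420 (16nm),
# using the figures assumed in the post above.
clock_gain = 1.10    # ~10% frequency bump
width_gain = 1.33    # ~33% larger GPU (more units)
throughput_gain = clock_gain * width_gain

print(f"Estimated graphics throughput gain: {(throughput_gain - 1) * 100:.1f}%")
# Matches the post's 1.1 * 1.33 = 1.463, i.e. roughly 46.3%
```

Note that most of the gain comes from the `width_gain` term (more transistors, hence more cost), not from clocks, which is exactly the point being made about the effective clock range of the process.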
https://forums.guru3d.com/data/avatars/m/197/197287.jpg
I think the 1070 outperforming the 980 Ti even by 20% edges into fairy-tale land. Especially since "Bold Text". And if it does, then expect a damn high price premium, since 16nm is not cheaper than 28nm, and I do not see nVidia cutting their profits just so the customer can have a product which will last a few more years, consequently reducing their profits again.
I'm not sure where you got this idea from, considering the track record of previous years. Your ideas, especially about how much of what the new cards are doing is actually "new", HBM included, don't really hold any "logical" sense.
https://forums.guru3d.com/data/avatars/m/237/237771.jpg
I'm not sure where you got this idea from, considering the track record of previous years. Your ideas, especially about how much of what the new cards are doing is actually "new", HBM included, don't really hold any "logical" sense.
He forgets this is a new architecture, like Kepler to Maxwell. The gimped GM204 matched the full GK110. My guess is that GP104 will beat the full GM200 (aka Titan X). Nvidia claims Maxwell to Pascal will be a larger jump than Kepler to Maxwell.
https://forums.guru3d.com/data/avatars/m/124/124168.jpg
I said before that the 1070 will likely match a 980 Ti for half the price, and a 1080 is going to be a bit faster than a 980 Ti, again for a much lower price. The next Titan is up after that, most likely. Then the non-Titan big guns come out; GP100 is what I am looking out for. Wanna try HBM2 out with 8GB of VRAM. Just following past releases on Kepler and Maxwell, there is a definite pattern here.
https://forums.guru3d.com/data/avatars/m/80/80129.jpg
I'm not sure where you got this idea from, considering the track record of previous years. Your ideas, especially about how much of what the new cards are doing is actually "new", HBM included, don't really hold any "logical" sense.
He forgets this is a new architecture, like Kepler to Maxwell. The gimped GM204 matched the full GK110. My guess is that GP104 will beat the full GM200 (aka Titan X). Nvidia claims Maxwell to Pascal will be a larger jump than Kepler to Maxwell.
If by 1070 he means a cut-down GP104, then I agree with him. The 1070 will most likely be around 10% faster than a 980 Ti, and the 1080 (full GP104) will most likely be around 20-25% faster than a 980 Ti. But as --TK-- said, the pricing will likely be lower, with the 1080 coming in around $550-600.

You guys have to remember this is literally the first time ever that the process the GPUs are switching to is more expensive than the previous one. Nvidia really can't build a much larger GPU without significantly increasing the cost. Most of the performance gains are going to come from architecture improvements and the faster switching speed from the node shrink itself. Once the process matures and the prices drop, they will ship the full GP110 (probably in 2017), and that's when you'll get 50-60% improvements over current cards.