Nvidia drops Samsung and uses TSMC for Pascal
We already mentioned that Pascal would very likely be made on TSMC's 16nm FinFET process. The target release date for this GPU is 2016. The product would be released as GP100 and will be the successor to the GM200 series GPUs. As it now turns out, Samsung indeed was dropped.
Previously, Samsung Electronics competed with TSMC to win the contract to produce the Pascal GPU. According to industry sources on Sept. 15, Nvidia decided to let TSMC mass-produce the Pascal GPU, which is scheduled for release next year, using its 16nm FinFET process. Some in the industry predicted that both Samsung and TSMC would mass-produce the Pascal GPU, but the U.S. firm chose only the Taiwanese firm in the end. Since the two foundries have different 16nm FinFET manufacturing processes, the U.S. tech company selected the world's largest foundry for product consistency.
Samsung has strengthened its competitiveness with its 14nm FinFET process, which it introduced before TSMC's equivalent node. In particular, the fact that the A9 processors featured in the iPhone 6s were produced on the Korean company's 14nm FinFET process worried the Taiwanese firm's executives. However, the selection of TSMC, which has been Nvidia's partner company for 20 years, is a setback for Samsung, which anticipated a dramatic turnaround in the foundry market.
The reason for Samsung's determination to win the contract for the Pascal GPU lies in the fact that Nvidia's new GPU is highly likely to mark a milestone in the next-gen graphics market.
Experts are saying that Samsung's failure to obtain the contract is mainly attributable to its lack of experience. The fact that the Korean tech giant has become TSMC's rival only two years after it started to produce GPUs itself is considered to have special meaning at the moment.
Source: http://www.businesskorea.co.kr/article/ict/12092/no-samsung-gpu-samsung-electronics-fails-win-contract-nvidias-pascal-gpu
With the Pascal architecture, Nvidia too will make a move towards HBM memory (stacked on-die memory); however, it will jump straight to the second generation, HBM2, meaning it could fit up to 32 GB of memory on the GPU package. The new details show that the flagship single-chip NVIDIA Pascal would feature up to 16 GB of HBM2 VRAM capable of 1 TB/s of bandwidth.
It is estimated that the shader processor count for Big Pascal is anywhere from 4,500 to 6,000 units. Big Pascal will be made on TSMC's 16-nanometer silicon fab process, with a release in Q1 2016.
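As a sanity check on the bandwidth claim above, here is a back-of-the-envelope sketch. It assumes the commonly cited per-stack HBM2 figures (a 1024-bit interface per stack at up to 2 Gb/s per pin) and a hypothetical four-stack flagship configuration; none of these specifics are confirmed for Pascal itself.

```python
# Back-of-the-envelope HBM2 bandwidth, assuming JEDEC-style figures:
# 1024-bit bus per stack, up to 2 Gb/s per data pin, four stacks.

BUS_WIDTH_BITS = 1024   # interface width of a single HBM2 stack
PIN_SPEED_GBPS = 2      # data rate per pin in Gb/s
STACKS = 4              # assumed number of stacks on a flagship card

per_stack_gbs = BUS_WIDTH_BITS * PIN_SPEED_GBPS / 8  # bits -> bytes
total_gbs = per_stack_gbs * STACKS

print(f"{per_stack_gbs:.0f} GB/s per stack")  # 256 GB/s
print(f"{total_gbs:.0f} GB/s total")          # 1024 GB/s, i.e. ~1 TB/s
```

Four stacks of 8 GB each would likewise account for the 32 GB upper bound, with 16 GB corresponding to 4 GB stacks.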
Nvidia Disables Overclocking for Series 900 Mobile GPUs - 02/12/2015 06:53 PM
Nvidia once again managed to irritate a lot of people, this time the latest driver update disables overclocking for the mobile Series 900. Until recently these parts could be overclocked....
NVIDIA DRIVE Automotive Computers - 01/05/2015 10:49 AM
Transporting the world closer to a future of auto-piloted cars that see and detect the world around them, NVIDIA today introduced NVIDIA DRIVE automotive computers -- equipped with powerful capabiliti...
NVIDIA does $5 million deal with Ubisoft - 10/02/2013 12:39 PM
It looks like NVIDIA spent $5 million to optimize Ubisoft's Assassin's Creed 5 and Watch Dogs. In fact AMD also is rumored to have invested a similar sum in Battlefield 4, exact details about these...
NVIDIA delays Tegra 4 in favor of Tegra 4i - 05/17/2013 08:30 AM
NVIDIA decided to delay the introduction of its Tegra 4 SoC to speed up the arrival of Tegra 4i. The latter part features an integrated LTE baseband modem, a feature the firm desperately lacks in its ...
Vulnerability detected in NVIDIA display driver service - 12/27/2012 08:37 AM
A hacker called Peter Winter-Smith discovered a security hole in NVIDIA's display driver service that allows local and remote users (Windows firewall/file sharing permitting) to gain administrator pr...
Senior Member
Posts: 3297
Joined: 2013-03-10

Yeah, I wasn't actually thinking about their capabilities as much as the schedules. It has been a while since the Maxwells first appeared, whereas AMD's Fiji only appeared this summer, and we are still getting new cards like Nano and possibly 380X. I was just thinking it would be strange if they were rendered obsolete half a year from now.
But then again, if you consider the current Fury cards only curiosities and proofs of concept, then it wouldn't matter, as the 300 series itself is more or less old stuff in new clothes. This extra 28nm generation really was a pity.
Senior Member
Posts: 7824
Joined: 2005-08-10
Even though it's still a long wait ahead, I'm looking forward to Pascal, my 970s will have to do til then (even though I had the itch to upgrade for a while lol, but the cost-performance ratio is just too bad atm).
Senior Member
Posts: 285
Joined: 2013-11-21
AMD's first-gen experience with HBM will be invaluable with its second-gen HBM products--nVidia gets 0 points for skipping the first generation--probably more like a -10 points, I'd say.
I don't really think the catch-up is going to be particularly relevant; to my understanding HBM removes a bottleneck, along with a few other little things ... but we haven't hit that bottleneck yet. So unless the bottleneck is hit in the next line of new manufacturing nodes (or whatever), AMD has the upper hand in a technology that only has minimal use.
Senior Member
Posts: 14009
Joined: 2004-05-16
The biggest gain from HBM was the power savings. If you took a 980 Ti and put HBM on it, it would be a 225W card. You could then potentially clock it higher given the power headroom. The second biggest gain would probably be form factor honestly. As you said, memory bandwidth isn't really that much of a bottleneck at the moment -- unless you're running 4K with AA.
HBM2 is a JEDEC standard now, so any memory company could develop HBM2 modules. AMD is paired with SK Hynix so they have an advantage there. I don't know who Nvidia is pairing with for HBM. There is no exclusivity deal despite what some people say. The advantage is in the fact that AMD/SK essentially wrote the rulebook on HBM -- every other company including Nvidia still needs to figure it out and build it. Building it seems to be the hardest part. Who would have known that growing 10,000+ nano crystals per stack would be difficult? Then you need to fuse it to an interposer.
The other advantage AMD has is that it may possibly produce its next-generation GPUs on Samsung/GF's 14nm process. Samsung's 14nm does have a slight density advantage over TSMC's 16FF+.
Senior Member
Posts: 1432
Joined: 2014-07-22
Your first couple of sentences don't make much sense...
More important for nVidia is catching up to where AMD currently is in hardware D3d12 support...This will become far more evident as D3d12 benchmarks and games appear...At that point, Brian Burke will have some explaining to do--but when does he not?...