Nvidia Shows and Demos Tesla Volta V100 - has 5120 Shader processors

Hang on, did I read that right: "120 Teraflops"!! That's 120,000 Gigaflops!! The GTX 1080 Ti is 11,340 Gigaflops. This card has 10 times the processing power - what!!!!???
Over 800 mm^2...yikes...
Hang on, did I read that right: "120 Teraflops"!! That's 120,000 Gigaflops!! The GTX 1080 Ti is 11,340 Gigaflops. This card has 10 times the processing power - what!!!!???
120 Tensor TeraFLOPS of performance, delivered by new processing units called Tensor Cores. Tensor processing units (or TPUs) are application-specific integrated circuits (ASICs) developed specifically for machine learning.
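For anyone wondering what a Tensor Core actually computes: roughly, a small fused matrix multiply-accumulate. Here is a minimal numpy sketch of that operation, assuming the 4x4 tile size and FP16-in/FP32-accumulate precisions from Nvidia's Volta material (they are not stated in this thread):

```python
import numpy as np

# Sketch of the matrix op a single Volta Tensor Core is described as
# performing each clock: D = A x B + C on 4x4 tiles, FP16 inputs with
# FP32 accumulation. Tile size and precisions are assumptions taken from
# Nvidia's Volta material, not from this article.
A = np.random.rand(4, 4).astype(np.float16)   # FP16 input
B = np.random.rand(4, 4).astype(np.float16)   # FP16 input
C = np.random.rand(4, 4).astype(np.float32)   # FP32 accumulator

D = A.astype(np.float32) @ B.astype(np.float32) + C

# 4x4x4 = 64 multiply-adds = 128 floating-point ops per tile per clock;
# that is what the "Tensor TFLOPS" figure counts.
print(D.dtype, D.shape)
```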
120 Tensor TeraFLOPS of performance, delivered by new processing units called Tensor Cores. Tensor processing units (or TPUs) are application-specific integrated circuits (ASICs) developed specifically for machine learning.
Ha, thanks! I've never heard of a Tensor Teraflop before, and google didn't help much. I wonder how this will translate into the Gigaflops that we know & love for our consumer GPUs?
Ha, thanks! I've never heard of a Tensor Teraflop before, and google didn't help much. I wonder how this will translate into the Gigaflops that we know & love for our consumer GPUs?
It's 15 TFLOPS for Gaming
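A rough back-of-the-envelope sketch of how the 15 TFLOPS and 120 Tensor TFLOPS figures line up. The 5120 shader processors and the two TFLOPS figures are from the article; the ~1455 MHz boost clock and the 640 Tensor Cores are assumptions taken from Nvidia's V100 announcement rather than this thread:

```python
# Rough check of how the headline numbers fit together.
cuda_cores   = 5120    # from the article
tensor_cores = 640     # assumption (Nvidia V100 announcement)
boost_ghz    = 1.455   # assumption (Nvidia V100 announcement)

fp32_tflops   = cuda_cores * 2 * boost_ghz / 1000        # 1 FMA = 2 FLOPs per core per clock
tensor_tflops = tensor_cores * 128 * boost_ghz / 1000    # 4x4x4 FMA = 128 FLOPs per Tensor Core per clock

print(f"FP32:   ~{fp32_tflops:.1f} TFLOPS")    # ~15, the figure quoted for gaming workloads
print(f"Tensor: ~{tensor_tflops:.0f} TFLOPS")  # ~119, in line with the quoted 120 Tensor TFLOPS
```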
Wait, didn't they say Nvidia's next-gen GPUs will use GDDR6? And this shows they are going the same route as AMD with HBM2?
It's 15 TFLOPS for Gaming
Just a random thought: 1170 (Volta) 11 TFLOPS, 1180 (Volta) 13 TFLOPS, 1180 Ti/Titan 15 TFLOPS.
It's 15 TFLOPS for Gaming
A golden-sample Titan Xp with a shunt mod could reach that.
Wait, didn't they say Nvidia's next-gen GPUs will use GDDR6? And this shows they are going the same route as AMD with HBM2?
The previous versions of these cards (the Pascal-based ones) used HBM2 as well. These aren't for gamers.
Hang on, did I read that right: "120 Teraflops"!! That's 120,000 Gigaflops!! The GTX 1080 Ti is 11,340 Gigaflops. This card has 10 times the processing power - what!!!!???
That's related to "tensor" processing for deep learning.
1080 Ti - die size: 471 mm², CUDA cores: 3584, floating point: 10.6 TFLOPS, MSRP: $699.
Volta - die size: 815 mm², CUDA cores: 5120, floating point: 15 TFLOPS, MSRP: very, very expensive due to die size.
120 Tensor TeraFLOPS of performance, delivered by new processing units called Tensor Cores. Tensor processing units (or TPUs) are application-specific integrated circuits (ASICs) developed specifically for machine learning.
I was beaten.
It's 15 TFLOPS for Gaming
The text on the images is very small and hard to read, thanks for confirming my guesses. So Nvidia is going forward with the bigger and more expensive approach... I hope AMD stays low on die size or I won't be able to afford a GPU in a couple of years...
So Nvidia is going forward with the bigger and more expensive approach... I hope AMD stays low on die size or I won't be able to afford a GPU in a couple of years...
When it comes to AI they will make so much damn money it won't even matter.
Wait, didn't they say Nvidia's next-gen GPUs will use GDDR6? And this shows they are going the same route as AMD with HBM2?
GDDR6 was just officially announced. It will take six months or more before it gets beyond sampling and into production, which puts the first gaming cards with GDDR6 at a timeframe of mid to late Q1/18. It's the same as when people see Samsung announce a new "x" Gbit RAM module in January and expect it in their next Galaxy S in March - that's not how things work.
That's related to "tensor" processing for deep learning.
1080 Ti - die size: 471 mm², CUDA cores: 3584, floating point: 10.6 TFLOPS, MSRP: $699.
Volta - die size: 815 mm², CUDA cores: 5120, floating point: 15 TFLOPS, MSRP: very, very expensive due to die size.
I was beaten.
The text on the images is very small and hard to read, thanks for confirming my guesses. So Nvidia is going forward with the bigger and more expensive approach... I hope AMD stays low on die size or I won't be able to afford a GPU in a couple of years...
Nvidia is most likely doing what I predicted and completely splitting its AI/deep-learning GPUs off from its gaming-oriented ones. GP100 uses completely different cores/layout/memory/etc. Volta is just taking that a step further. The market for that stuff is going to surpass their gaming segment one day, and it makes sense for them to do that, especially because the margins on those products are insane.
Volta will be HUGE alright. That means even the 1160 will be a bigger chip than the GTX 1070/1080, the 1170/1180 will be bigger than the 1080 Ti/Titan Xp, and the 1180 Ti/Titan XV will be bigger than the Tesla P100. I am feeling a great gap in size and an even greater gap in performance. Let's see.
I am feeling a great gap in size and an even greater gap in performance. Let's see.
I'm feeling an even greater gap in price :banana: There are only so many dies you can get out of a wafer; if those dies are huge, their price will be absurd. I won't be going Nvidia anytime soon...
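A quick sketch of why die size dominates cost, using a common gross-dies-per-wafer approximation and a 300 mm wafer (both assumptions on my part, and yield is ignored entirely); the die areas are the ones quoted above:

```python
import math

def gross_dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """First-order estimate of candidate dies per wafer (ignores defects and yield)."""
    radius = wafer_diameter_mm / 2
    return math.floor(
        math.pi * radius**2 / die_area_mm2
        - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    )

for name, area in [("GP102 / 1080 Ti, 471 mm^2", 471), ("GV100, 815 mm^2", 815)]:
    print(f"{name}: ~{gross_dies_per_wafer(area)} gross dies per wafer")

# Roughly ~119 vs ~63 candidate dies per 300 mm wafer, before yield losses,
# which hit a huge die much harder - hence the expectation of a very high
# price per good die.
```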
So Nvidia is going forward with the bigger and more expensive approach... I hope AMD stays low on die size or I won't be able to afford a GPU in a couple of years...
Yep. Even when they shave the die size down for consumer products, it's still going to be huge. For example, if they shrink V100 down by the same ratio the 1080 had to P100, it's still larger than the GP102 in the 1080 Ti.
I tend to believe prices will go up by at most $200 for the Titan, $150 for the Ti, $100 for the x80, $75 for the x70, and $50 for the x60. Meaningless differences, I suppose.
So Nvidia is going forward with the bigger and more expensive approach... I hope AMD stays low on die size or I won't be able to afford a GPU in a couple of years...
800 mm², that's crazy. They must have very high confidence in the fab, or be willing to ask a kidney from consumers.
People are confusing Tesla, which is more of a data-server product line, with Nvidia's usual GTX graphics card lineup. Also, the Tesla P100 (Pascal-based) already has HBM2 and is already available for about 5 grand, so them using HBM2 is not a new thing and doesn't mean Volta-based GTX cards will use it.