Nvidia announces Tesla P100 data-center GPU
Fox2232
Yep, and he also announced that the Tesla P100 has a 600 mm² die.
And that the chances of making it work were "0"... nada, no way of making it (to make it look more amazing, that they built something supposedly impossible to make).
He is as inconsistent and irrational as ever, especially his insulting way of talking to the audience as if they were idiots.
He said that 384-bit Maxwell has only 384 traces for memory against the 4,000 going to Pascal's HBM2.
Considering the pinout of a single GDDR5 chip, he is very wrong, but he makes such statements to make the next thing look bigger.
schmidtbag
Seems there's a typo in the M40's "FP64 CUDA Cores / SM" and "FP64 CUDA Cores / GPU" rows.
Anyway, assuming this is stable, this will be some pretty impressive hardware. But considering the sheer number of technical hurdles it had to clear, I wouldn't feel too comfortable using this in any mission-critical datacenter. There is too much risk of failure for something this different.
It's also nice to see nvidia supporting HBM2, but it does make me wonder how AMD will fare against that. HBM was a pretty big win for AMD, but with nvidia having access to it too, Polaris has some catching up to do.
cowie
Fox2232
BLEH!
Is this the NVLink thing?
cowie
Noisiv
"Parallel to the P100 announcement Nvidia is announcing the DGX-1, a deep learning super computer.
It holds two Xeon processors and a lovely eight Tesla P100 units each holding 16GB of HBM2 memory.
Priced at only $129,000, but it is considered to be a super-computer."
170 TFLOPS almost gets you a spot on the latest TOP 500 list,
so even technically it should be considered a supercomputer. Within a mere 3U rack.
http://abload.de/img/1img_188668zu2.png
Fox2232
^ It surely is impressive. I don't mean the performance, that's OK, but the density is impressive. I wonder what the performance per watt is there.
The price is kind of not nice: 170 TFLOPS for $129k, while the Radeon Pro Duo has 15 TFLOPS for $1.5k. If only AMD had it with more VRAM... Well, next generation.
Anyway, 20 TFLOPS at 15B transistors... That's good, as Fiji has only 8.6 TFLOPS with 8.9B transistors. The only question is how that translates to the consumer type of Pascal, and at what clock the P100 ticks.
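Taking the round figures in this thread at face value (posters' numbers, not official specs, and FP16 vs FP32 TFLOPS are not directly comparable), the value math sketches out like this:

```python
# Back-of-the-envelope value comparison using the round figures quoted in
# this thread (posters' numbers, not official specs). Note the DGX-1 figure
# is FP16 while the Pro Duo figure is FP32, so this is apples-to-oranges.
systems = {
    "DGX-1 (8x P100, FP16 TFLOPS)": (170.0, 129_000),
    "Radeon Pro Duo (FP32 TFLOPS)": (15.0, 1_500),
}
tflops_per_kilobuck = {
    name: tflops / (price / 1_000) for name, (tflops, price) in systems.items()
}
for name, value in tflops_per_kilobuck.items():
    print(f"{name}: {value:.2f} TFLOPS per $1000")

# TFLOPS per billion transistors, again per the thread's numbers:
# P100 ~20 TFLOPS (FP16) at ~15B transistors, Fiji 8.6 TFLOPS (FP32) at 8.9B.
p100_density = 20.0 / 15.0
fiji_density = 8.6 / 8.9
print(f"P100: {p100_density:.2f} TFLOPS per billion transistors")
print(f"Fiji: {fiji_density:.2f} TFLOPS per billion transistors")
```

On these figures the Pro Duo actually wins on raw TFLOPS per dollar, which is the usual consumer-vs-datacenter pricing gap rather than an architectural verdict.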
cowie
That's good and all, maybe finding a cure for cancer, but the thing can't even do 3DMark Vantage, at least not the 3D part !@#$ it can't even run Crysis.
All kidding aside, 10 of these get you into the top 25-ish. That's just nuts.
Ieldra
Fox2232
Noisiv
http://ark.intel.com/products/series/75809/Intel-Xeon-Phi-Coprocessor-7100-Series#@Server
it doesn't
90% this is HPC-only, due to the existence of NVLink and the huge investment in DP.
2x more transistors than GM200 for a mere 16% more CUDA cores says this chip does not care about SP - at all.
As far as I know, AMD is not even a player in deep learning,
and as for Intel - it looks like their future Knights Landing is gonna get thrashed on raw compute specs:
3 TFLOPS FP64 vs 5/10/20 TFLOPS (FP64/FP32/FP16)
300W might look worrying, but then again, why not.
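Taking the round numbers above at face value (roughly 3 TFLOPS FP64 for Knights Landing vs 5/10/20 TFLOPS FP64/FP32/FP16 for the P100 at a 300 W TDP; a sketch, not official specs), the gap works out to:

```python
# Ratio sketch from the round figures in the post above (posters' numbers,
# not official specs).
knl_fp64_tflops = 3.0                                   # Knights Landing, FP64
p100_tflops = {"FP64": 5.0, "FP32": 10.0, "FP16": 20.0} # P100 per precision
p100_tdp_watts = 300.0

# Advantage over KNL's FP64 figure, and efficiency at the stated 300 W TDP.
ratios = {prec: tf / knl_fp64_tflops for prec, tf in p100_tflops.items()}
gflops_per_watt = {
    prec: tf * 1_000 / p100_tdp_watts for prec, tf in p100_tflops.items()
}

for prec in p100_tflops:
    print(f"P100 {prec}: {ratios[prec]:.1f}x KNL FP64, "
          f"{gflops_per_watt[prec]:.1f} GFLOPS per watt")
```

Even on a like-for-like FP64 basis that is roughly a 1.7x lead, which is what makes the 300 W envelope look reasonable rather than worrying.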
Shadowdane
Noisiv
https://devblogs.nvidia.com/parallelforall/wp-content/uploads/2016/04/8-GPU-hybrid-cube-mesh-624x424.png
Not quite. PCI Express is still used, for example in the DGX-1.
"While NVLink primarily focuses on connecting multiple NVIDIA Pascal GP100 GPUs together
it can also connect Pascal GP100 GPUs with IBM Power CPUs with NVLink support. "
Ieldra
PrMinisterGR
Trusting the NVIDIA CEO 100% is akin to a Darwin Award. It's like believing AMD about Crossfire support.
Ieldra
Musouka
Ieldra
Lane