Nvidia announces Tesla P100 data-center GPU

Pretty good stuff for supercomputers in 2017. I kind of wish AMD would do similar cards, but it must be stupid hard to enter that market when Nvidia has saturated it already.
170 TFLOPS of FP16... 85 TFLOPS FP32, and that's already good. Don't forget it is dedicated to deep learning.
Darn it, I got duped. I went something like 8 × 20 + ε = 170, forgetting it's only 10 TFLOPS FP32. But clever as they are, Nvidia has already taken care of that by emphasizing it's a Deep Learning Supercomputer.
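For anyone checking the napkin math: the headline figure falls straight out of the published per-GPU peak rates. A quick sketch (the core count and boost clock are the P100 launch specs; peak FLOPS = 2 × cores × clock because an FMA counts as two operations, and GP100 runs FP16 at twice the FP32 rate):

```cuda
// Back-of-the-envelope peak throughput for an 8x P100 box (DGX-1).
#include <cstdio>

int main() {
    const double cores     = 3584;   // FP32 CUDA cores per P100
    const double boost_ghz = 1.480;  // boost clock in GHz
    const double fp32 = 2.0 * cores * boost_ghz / 1000.0; // TFLOPS, FMA = 2 FLOPs
    const double fp16 = 2.0 * fp32;                       // half precision is 2x on GP100
    std::printf("per GPU: %.1f TFLOPS FP32, %.1f TFLOPS FP16\n", fp32, fp16);
    std::printf("8 GPUs : %.1f TFLOPS FP32, %.1f TFLOPS FP16\n", 8 * fp32, 8 * fp16);
    return 0;
}
```

So the 170 TFLOPS headline is the FP16 aggregate of the eight-GPU box, while each card does roughly 10.6 TFLOPS FP32.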
By saturated, if you're talking about the supercomputing market as in the TOP500, that's not a particularly high-volume market, and hence not as lucrative as it might seem. Not worth targeting specifically for the likes of Intel/NV/AMD. But deep learning is far from saturated; it is an emerging market. Everyone who has a chance needs to get on board, because it will be huge once it explodes. And it will explode because there are so many areas untouched, and so many possibilities for innovation. ATM the deep learning market is constrained by:
1. processing power and hardware in general
2. algorithm performance
3. overall deep learning methodology deficiencies
4. untapped fields of opportunity
Common sense says all these areas are guaranteed to grow, and once the train starts moving, 4. will grow exponentially. Deep learning is the next big ****!
Hmmmh yea, I was talking about supercomputers. Maybe AMD will provide something for deep learning; they have the computing power to put out there.
What a beast of a GPU, wonder what the GeForce variant will be like.
Compute Preemption is another important new hardware and software feature added to GP100 that allows compute tasks to be preempted at instruction-level granularity, rather than thread block granularity as in prior Maxwell and Kepler GPU architectures. Compute Preemption prevents long-running applications from either monopolizing the system (preventing other applications from running) or timing out. Programmers no longer need to modify their long-running applications to play nicely with other GPU applications. With Compute Preemption in GP100, applications can run as long as needed to process large datasets or wait for various conditions to occur, while scheduled alongside other tasks. For example, both interactive graphics tasks and interactive debuggers can run in concert with long-running compute tasks.
So I guess Pascal correctly supports Async now.
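To make the whitepaper passage concrete, here is a minimal sketch of the "wait for various conditions to occur" case: a kernel that spins on a host-mapped flag. The names are made up for illustration. On pre-Pascal hardware a kernel like this risks being killed by the display watchdog or monopolizing the GPU; with GP100's instruction-level preemption it can be scheduled alongside other tasks:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Long-running kernel: spins until the host raises a flag in mapped memory.
__global__ void wait_for_host(volatile int *flag) {
    while (*flag == 0) { /* instruction-level preemption can cut in here */ }
    printf("flag observed on the device\n");
}

int main() {
    cudaSetDeviceFlags(cudaDeviceMapHost);  // allow host-mapped allocations

    int *flag_h = nullptr, *flag_d = nullptr;
    cudaHostAlloc((void **)&flag_h, sizeof(int), cudaHostAllocMapped);
    *flag_h = 0;
    cudaHostGetDevicePointer((void **)&flag_d, flag_h, 0);

    wait_for_host<<<1, 1>>>(flag_d);  // open-ended work, not a fixed batch
    // ... other streams / graphics could still be scheduled on GP100 here ...
    *flag_h = 1;                      // release the kernel from the host
    cudaDeviceSynchronize();
    cudaFreeHost(flag_h);
    return 0;
}
```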
I would have thought they'd do something like async, not necessarily for gaming purposes, but maybe in CUDA applications. But I'm not good enough with that in-depth techie talk to be sure about whatever they write in there. 😀
Async already works in CUDA, no barriers though.
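For context on what "async, no barriers" can mean in CUDA terms: work submitted to different streams has no implied ordering between the queues, so the hardware is free to overlap it; cross-stream ordering only exists if you create it yourself with events. A minimal sketch (the kernel is a made-up placeholder):

```cuda
#include <cuda_runtime.h>

__global__ void busywork(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * 2.0f + 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    // No ordering between the two queues: the GPU may run these concurrently.
    busywork<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    busywork<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    // If cross-stream ordering IS needed, an event acts as the barrier:
    //   cudaEvent_t e; cudaEventCreate(&e);
    //   cudaEventRecord(e, s1); cudaStreamWaitEvent(s2, e, 0);

    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);
    cudaStreamDestroy(s1); cudaStreamDestroy(s2);
    cudaFree(a); cudaFree(b);
    return 0;
}
```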
Yeah, I know it's not impossible and everything; that's why I'm quite fed up with the topic. Look at the performance and numbers, no matter what's behind it, as long as games run well on the cards... But I can't help but admit I'm not even sure what you mean by "no barriers" 😀
This isn't really async; this is more fine-grained preemption. Pascal still lacks the functionality of ACEs (see the stream-priority sketch at the end of the thread).
Yeah, performance is what matters outside of VR.
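On the preemption-vs-ACEs point above: the nearest thing CUDA exposes is stream priorities, i.e. software queue priority rather than AMD-style dedicated hardware queues. A hedged sketch (whether GP100 actually services these with its new preemption hardware is an assumption, not something the whitepaper excerpt states):

```cuda
#include <cuda_runtime.h>

__global__ void bulk_work(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        for (int k = 0; k < 1000; ++k) x[i] = x[i] * 1.0001f + 0.5f;  // long-running batch
}

__global__ void urgent_work(float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] += 1.0f;  // short, latency-sensitive
}

int main() {
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));

    int least, greatest;  // note: "greatest" priority is the numerically lowest value
    cudaDeviceGetStreamPriorityRange(&least, &greatest);

    cudaStream_t bulk, urgent;
    cudaStreamCreateWithPriority(&bulk,   cudaStreamNonBlocking, least);
    cudaStreamCreateWithPriority(&urgent, cudaStreamNonBlocking, greatest);

    bulk_work<<<(n + 255) / 256, 256, 0, bulk>>>(x, n);
    urgent_work<<<(n + 255) / 256, 256, 0, urgent>>>(y, n);  // may jump the queue

    cudaDeviceSynchronize();
    cudaStreamDestroy(bulk); cudaStreamDestroy(urgent);
    cudaFree(x); cudaFree(y);
    return 0;
}
```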