NVIDIA Announces Tesla T4 Tensor Core GPU
Fueling the growth of AI services worldwide, NVIDIA today launched an AI data center platform that delivers the industry’s most advanced inference acceleration for voice, video, image and recommendation services.
The NVIDIA TensorRT Hyperscale Inference Platform features NVIDIA Tesla T4 GPUs based on the company’s breakthrough NVIDIA Turing™ architecture and a comprehensive set of new inference software.
Delivering the fastest performance with lower latency for end-to-end applications, the platform enables hyperscale data centers to offer new services, such as enhanced natural language interactions and direct answers to search queries rather than a list of possible results.
“Our customers are racing toward a future where every product and service will be touched and improved by AI,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “The NVIDIA TensorRT Hyperscale Platform has been built to bring this to reality — faster and more efficiently than had been previously thought possible.”
Every day, massive data centers process billions of voice queries, translations, images, videos, recommendations and social media interactions. Each of these applications requires a different type of neural network residing on the server where the processing takes place.
To optimize the data center for maximum throughput and server utilization, the NVIDIA TensorRT Hyperscale Platform includes both real-time inference software and Tesla T4 GPUs, which process queries up to 40x faster than CPUs alone.
NVIDIA estimates that AI inference is poised to grow into a $20 billion market within the next five years.
Industry’s Most Advanced AI Inference Platform
The NVIDIA TensorRT Hyperscale Platform includes a comprehensive set of hardware and software offerings optimized for powerful, highly efficient inference. Key elements include:
- NVIDIA Tesla T4 GPU – Featuring 320 Turing Tensor Cores and 2,560 CUDA® cores, this new GPU provides breakthrough performance with flexible, multi-precision capabilities, from FP32 to FP16 to INT8, as well as INT4. Packaged in an energy-efficient, 75-watt, small PCIe form factor that easily fits into most servers, it offers 65 teraflops of peak FP16 performance, 130 TOPS (tera-operations per second) for INT8 and 260 TOPS for INT4.
- NVIDIA TensorRT 5 – An inference optimizer and runtime engine, NVIDIA TensorRT 5 supports Turing Tensor Cores and expands the set of neural network optimizations for multi-precision workloads (a minimal configuration sketch follows this list).
- NVIDIA TensorRT inference server – This containerized microservice software enables applications to use AI models in data center production. Freely available from the NVIDIA GPU Cloud container registry, it maximizes data center throughput and GPU utilization, supports all popular AI models and frameworks, and integrates with Kubernetes and Docker.
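As an illustration of the multi-precision support described in the first two items, here is a minimal sketch of building a reduced-precision engine with the TensorRT Python API. The model file name, batch size and calibrator are placeholders for this example, not part of NVIDIA's announcement, and INT8 additionally requires calibration data supplied by the user.

```python
# Minimal sketch: building a mixed-precision TensorRT engine from an ONNX model.
# Assumes the TensorRT 5.x Python bindings; "model.onnx" and the sizes are placeholders.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path, use_fp16=True, use_int8=False, calibrator=None):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network()
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        parser.parse(f.read())            # populate the network from the ONNX graph

    builder.max_batch_size = 8            # placeholder batch size
    builder.max_workspace_size = 1 << 30  # 1 GiB of scratch space for kernel selection

    # Precision flags: TensorRT chooses per-layer FP32/FP16/INT8 kernels that can
    # run on the Turing Tensor Cores where the hardware and the layer allow it.
    if use_fp16:
        builder.fp16_mode = True
    if use_int8 and calibrator is not None:
        builder.int8_mode = True
        builder.int8_calibrator = calibrator  # INT8 needs calibration to pick scaling factors

    return builder.build_cuda_engine(network)

engine = build_engine("model.onnx")       # "model.onnx" is a stand-in for a real model
```

An engine built this way can then be served by the TensorRT inference server described in the last item, or executed directly through a TensorRT execution context.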
Supported by Technology Leaders Worldwide
Support for NVIDIA’s new inference platform comes from leading consumer and business technology companies around the world.
“We are working hard at Microsoft to deliver the most innovative AI-powered services to our customers,” said Jordi Ribas, corporate vice president for Bing and AI Products at Microsoft. “Using NVIDIA GPUs in real-time inference workloads has improved Bing’s advanced search offerings, enabling us to reduce object detection latency for images. We look forward to working with NVIDIA’s next-generation inference hardware and software to expand the way people benefit from AI products and services.”
Chris Kleban, product manager at Google Cloud, said: “AI is becoming increasingly pervasive, and inference is a critical capability customers need to successfully deploy their AI models, so we’re excited to support NVIDIA’s Turing Tesla T4 GPUs on Google Cloud Platform soon.”
More information, including details on how to request early access to T4 GPUs on Google Cloud Platform, is available here.
Senior Member
Posts: 2168
Joined: 2011-01-05
They are probably talking about parallel or OpenCL-type operations and not x86 code. In those cases, GPUs are vastly superior for highly parallel compute code.
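For instance, here is a rough sketch of timing the same large, data-parallel operation on a CPU and on a CUDA GPU (PyTorch and a CUDA-capable card are assumptions for the example; the sizes are arbitrary):

```python
# Rough sketch: timing one large, highly parallel operation on CPU vs. GPU.
# Assumes PyTorch built with CUDA support; matrix size and repeat count are placeholders.
import time
import torch

def time_matmul(device, n=4096, repeats=10):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    torch.matmul(a, b)                 # warm-up so one-time initialization isn't measured
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(repeats):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()       # wait for the GPU to finish before stopping the clock
    return (time.perf_counter() - start) / repeats

cpu_time = time_matmul("cpu")
print(f"CPU: {cpu_time:.4f} s per matmul")
if torch.cuda.is_available():
    gpu_time = time_matmul("cuda")
    print(f"GPU: {gpu_time:.4f} s per matmul ({cpu_time / gpu_time:.1f}x faster)")
```

Whatever ratio that prints depends entirely on which CPU and which GPU are being compared, which is exactly the question raised in the replies below.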
Senior Member
Posts: 11809
Joined: 2012-07-20
Once more, read slowly...
What are they comparing that GPU to? Till we know that, their entire statement is irrelevant.
Senior Member
Posts: 2168
Joined: 2011-01-05
I don't want to argue with you, but it's right there in their statement. We can guess what data centers use so many processors, CPU or GPU streams, for: AI data processing...
"AI data center platform that delivers the industry’s most advanced inference acceleration for voice, video, image and recommendation services."
It says cloud queries, let's say AI-derived web searches, photo retouching, map generation or scientific calculations; those are highly parallel tasks, and as per my statement, GPUs in general, regardless of vendor, are usually better than x86 CISC CPUs, multicore, multithreaded or not. Hell, better than RISC CPUs as well, due to the pure scaling of stream "processors".
Google is listed as a test case for trial runs... No need to be hostile about a discussion.
Senior Member
Posts: 11809
Joined: 2012-07-20
Quoting the post above:
"I don't want to argue with you, but it's right there in their statement. We can guess what data centers use so many processors, CPU or GPU streams, for: data processing...
It says cloud queries, let's say search engine lookups; those are highly parallel tasks, and as per my statement, GPUs in general, regardless of vendor, are usually better than x86 CISC CPUs, multicore, multithreaded or not. Hell, better than RISC CPUs as well, due to the pure scaling of stream 'processors'.
Google is listed as a test case for trial runs... No need to be hostile about a discussion."
The CPU in my cellphone is up to 200 times faster than a PC CPU. It is true, but the moment you learn what CPU I compared it to...
Yes, that statement I made is baseless, because it makes a comparison to "smoke" the same way NVIDIA did.
If I were to guess the relevant comparison when neither contestant is named, it would be one CU in the GPU versus one core in the CPU. But I am sure they compared quite a few SMs (CUs) to something like Intel's 4C/8T low-power server chip.
In the best-case scenario (as close to relevant as possible), they took some 75 W server CPU and compared it to their 75 W GPU in some very specific workload where the GPU excels and CPUs generally suck.
Senior Member
Posts: 11809
Joined: 2012-07-20
Marketing:
"Tesla T4 GPUs, which process queries up to 40x faster than CPUs alone."
Did they compare it to a Pentium III at 800 MHz? A random Core 2 Quad? Or a chilled 28C/56T chip?
But I hope everyone sees how much they managed to fit in... Someone should start digging up info on the size of each unit. I bet quite a few people here would rather have a GPU with half the SPs the RTX 2080 Ti has and double the tensor/RT cores. (That's if they can be decoupled.)