This week at SIGGRAPH, we've been giving a sneak peek at the GPU inside Project Logan, our next-generation, CUDA-capable mobile processor. From a graphics perspective, this is as big a milestone for mobile as the first GPU, GeForce 256, was for the PC. Project Logan's GPU is based on our Kepler architecture, which forms the foundation for products that began rolling out a year ago across our notebook, desktop, workstation and supercomputer lines.
Our mission with Project Logan was to scale this technology down to the mobile power envelope – creating new configurations that we could both deploy in the Logan mobile SoC and license to others, as we announced last month.
We took Kepler's efficient processing cores and added a new low-power inter-unit interconnect and extensive new optimizations, both designed specifically for mobile. With this design, mobile Kepler performs the same rendering while using less than one-third the power of the GPUs in leading tablets, such as the iPad 4. And it gives us enormous performance and clocking headroom to scale up.
We achieved this efficiency without compromising graphics capability. Kepler supports the full spectrum of OpenGL – including the just-announced OpenGL 4.4 full-featured graphics specification and the OpenGL ES 3.0 embedded standard. It also supports DirectX 11, Microsoft’s latest graphics API.
New Rendering, Simulation Techniques
These advanced APIs will allow developers to use more efficient, visually compelling rendering approaches than were previously possible in mobile. They will bring amazing images to life through a variety of advanced rendering and simulation techniques, such as:
- Tessellation – which creates geometry dynamically and efficiently on the GPU from high-level descriptions, sizing triangles optimally based on the user’s viewpoint. By comparison, fine detail in a traditional pre-generated approach is inefficient, requiring excess geometry to deal with all possible viewpoints.
- Compute-based deferred rendering – which calculates the effect of all lights in a scene in a single deferred rendering pass. This OpenGL 4 capability greatly improves deferred rendering efficiency and scalability compared to current OpenGL ES-based implementations, which require an extra pass for each light source in the scene. The scalability of the compute-based approach also paves the way to even more advanced lighting models, such as using virtual point lights to approximate global illumination effects.
- Advanced anti-aliasing and post-processing algorithms – which deliver better image quality, particularly in areas of very sharp color contrast, by making multi-sampling more programmable and allowing applications to implement their own anti-aliasing filters. These also enable more efficient film-quality post-processing effects, such as motion blur and depth of field.
- Physics and simulations – which simulate the physical behavior of rendered objects, such as calculating rigid-body dynamics or animating particles of smoke. This enables gamers to enjoy more detailed, fully interactive virtual worlds not previously possible on mobile devices.
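To make the tessellation idea concrete, here is a minimal CPU-side sketch in Python of how a view-dependent tessellation factor might be chosen. On the GPU this decision runs per patch in a tessellation control shader; the function name and the distance-based heuristic here are illustrative assumptions (real engines often use projected edge length instead), not Logan's actual implementation.

```python
import math

def tessellation_level(patch_center, camera_pos, base_level=64.0, falloff=0.1):
    """Pick a tessellation factor for a surface patch based on its distance
    from the viewer: nearby patches get many small triangles, distant ones
    get few, so triangle size stays roughly constant on screen.
    (Hypothetical heuristic for illustration only.)
    """
    dx = patch_center[0] - camera_pos[0]
    dy = patch_center[1] - camera_pos[1]
    dz = patch_center[2] - camera_pos[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Reduce the level as distance grows, clamped to [1, base_level].
    level = base_level / max(distance * falloff, 1.0)
    return max(1.0, min(base_level, level))

camera = (0.0, 0.0, 0.0)
near = tessellation_level((0.0, 0.0, 5.0), camera)    # close patch: full detail
far = tessellation_level((0.0, 0.0, 500.0), camera)   # distant patch: coarse
```

The payoff over a pre-generated mesh is exactly what the bullet describes: detail is spent only where the current viewpoint can see it.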
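The deferred-rendering bullet can likewise be sketched in a few lines. This illustrative Python loop stands in for a single compute-shader dispatch: each pixel's surface data is read from the G-buffer once, and all lights are accumulated in one pass instead of re-rendering the scene per light. The data layout and the simple Lambertian model are assumptions for clarity.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade_deferred(gbuffer, lights):
    """Single-pass deferred shading sketch: for every pixel, fetch its
    normal and albedo once, then loop over ALL lights and sum their
    diffuse contributions. An ES 2-style multi-pass renderer would
    instead repeat a full-screen pass per light."""
    image = []
    for pixel in gbuffer:
        color = 0.0
        for light in lights:
            # Lambertian diffuse term from a directional light.
            ndotl = max(0.0, dot(pixel['normal'], light['direction']))
            color += pixel['albedo'] * light['intensity'] * ndotl
        image.append(color)
    return image

gbuf = [{'normal': (0.0, 0.0, 1.0), 'albedo': 0.8},
        {'normal': (0.0, 1.0, 0.0), 'albedo': 0.5}]
lights = [{'direction': (0.0, 0.0, 1.0), 'intensity': 1.0},
          {'direction': (0.0, 1.0, 0.0), 'intensity': 0.5}]
result = shade_deferred(gbuf, lights)
```

Because adding a light only adds one iteration to the inner loop, cost scales with light count far more gently than adding whole rendering passes.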
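And for the physics bullet, a minimal particle-update sketch, again in Python for illustration. On the GPU each particle would be advanced by its own thread in parallel; the semi-implicit Euler step and the data layout here are assumptions, not a specific Nvidia API.

```python
def step_particles(particles, dt, gravity=(0.0, -9.8, 0.0)):
    """Advance each particle one timestep with semi-implicit Euler
    integration: update velocity from gravity, then position from the
    new velocity. A GPU version runs this same update once per thread."""
    for p in particles:
        p['vel'] = tuple(v + g * dt for v, g in zip(p['vel'], gravity))
        p['pos'] = tuple(x + v * dt for x, v in zip(p['pos'], p['vel']))
    return particles

# One smoke particle falling from rest for one simulated second.
smoke = [{'pos': (0.0, 10.0, 0.0), 'vel': (0.0, 0.0, 0.0)}]
for _ in range(10):
    step_particles(smoke, dt=0.1)
```

Scaled up to many thousands of particles or rigid bodies, this per-element independence is what makes such simulations a natural fit for the GPU.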
Leveraging Kepler’s heritage as the industry-leading architecture for general purpose GPU computing, we will also bring groundbreaking compute capability and power efficiency to new mobile applications outside of graphics. Among these are computational imaging, computer vision, augmented reality and speech recognition.
You can get a sense of Kepler’s capabilities in this video of our demo of “Ira,” a startlingly realistic digital model of a human head generated in real time. Our CEO, Jen-Hsun Huang, introduced Ira earlier this year on the stage of our GPU Technology Conference. At the time, it was running on a desktop PC equipped with an NVIDIA GeForce GTX Titan GPU, the most powerful single-GPU gaming processor on the market. In this demo, it’s running on the Kepler GPU inside Logan.
Logan has been back in our labs for only a few weeks, and it has been amazing to see new applications come up every day that have never before been seen on mobile. But this is only the beginning. Simply put, Logan will advance the capability of mobile graphics by more than seven years, delivering a fully state-of-the-art feature set combined with awesome performance and power efficiency.
Below, a video showing Logan running our “Island” demo.
Head and skin data in “Ira” video courtesy of the Institute for Creative Technologies at USC.