GeForce FX 5800 Ultra - The review



The Technology #2

Technology
It's interesting to learn that major motion picture studios apply 128-bit precision to create rich, realistic computer-generated scenes. By matching these film-industry levels of precision, the NVIDIA GeForce FX GPUs can produce high-quality images with spectacular cinematic effects, without artifacts or compromises in quality, and apply those effects in real time throughout the entire scene.

The keywords for GeForce FX are:

  • 0.13 micron GPU fabrication process
  • 125 million transistors (give or take a few, I didn't count)
  • DDR2 memory clocked at 1 GHz
  • 51 billion floating-point operations per second (51 gigaflops) in the pixel shader alone
  • Advanced programmability (3rd generation)
  • High-level Shading Language (a minimal example follows this list)
  • New vertex and pixel shading instructions
  • Highly efficient architecture (3rd generation Lightspeed Memory Architecture)
  • High bandwidth to memory and CPU
  • Shaders can be thousands of instructions long
  • 8 pixels per clock cycle of rendering power
  • 200 million triangles per second
  • 64-bit & 128-bit color: film-quality precision, in fact higher than the movie Toy Story 2 used. 64-bit offers high precision at twice the performance and half the memory of 128-bit; it seems developers want both 64-bit and 128-bit color precision for advanced effects.
  • AGP 8x (over 2 GB/sec of bandwidth to the system)
  • Fully DirectX 9 compatible
  • Pixel Shaders 2.0+
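
To give you an idea of what that high-level shading language means in practice, here's a minimal sketch of a per-pixel diffuse lighting shader written in Cg rather than raw GPU assembly. The parameter names (diffuseMap, lightDir) are my own for illustration, not taken from any NVIDIA sample:

    // Minimal Cg fragment shader: per-pixel diffuse lighting.
    float4 main(float2 uv     : TEXCOORD0,
                float3 normal : TEXCOORD1,
                uniform sampler2D diffuseMap,
                uniform float3    lightDir) : COLOR
    {
        float3 N       = normalize(normal);
        float  diffuse = saturate(dot(N, -lightDir)); // clamp to [0,1]
        float4 base    = tex2D(diffuseMap, uv);       // sample the texture
        return float4(base.rgb * diffuse, base.a);
    }

Compare that to hand-written shader assembly and it's easy to see why developers would rather work at this level.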

The document we received a while ago clearly states that DX9 entails a strong shift from bandwidth towards computation; basically, the new bottleneck seems to be computing efficiency rather than memory efficiency. As stated above, the NV30 GPU has 3rd-generation LMA. NVIDIA states that this means a 1.0 GHz memory data rate but an effective internal bandwidth of 48 GB/sec thanks to LMA III, though that figure remains speculative.

Dawn technology demo ..

 

Vertex and Pixel Shaders
The engine that drives the GeForce FX is called the CineFX engine; with it, the NVIDIA GeForce FX GPU shifts the focus from simple pixel fill rate to sophisticated pixel shading. Shader programming has been advanced with a lot of new capabilities, and the hardware builds in many features to accelerate both pixel and vertex shader execution. Many programming barriers previously associated with shaders have been removed this way.

  • Branches / loops allowed in PS/VS programs.
  • VERY large PS/VS instruction count (which allows very complicated algorithms)
  • Full FP precision

The GeForce FX core supports long programs for even the most elaborate effects, along with conditional branching for better program flow. Take a look at the differences below; a short shader sketch follows the comparison.

A comparison between current- and new-generation platform capabilities
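
As promised, here's a hedged sketch of what that flow control buys you: a Cg-style vertex shader that loops over a variable number of lights and branches per light, something earlier shader generations simply could not express. The light parameters are invented for illustration:

    struct VertOut
    {
        float4 pos   : POSITION;
        float4 color : COLOR;
    };

    VertOut main(float4 position : POSITION,
                 float3 normal   : NORMAL,
                 uniform float4x4 modelViewProj,
                 uniform int      numLights,     // up to 4 in this sketch
                 uniform float3   lightDir[4],
                 uniform float3   lightColor[4])
    {
        VertOut o;
        o.pos = mul(modelViewProj, position);
        float3 accum = float3(0, 0, 0);
        for (int i = 0; i < numLights; i++)      // loop over the active lights
        {
            float d = dot(normalize(normal), -lightDir[i]);
            if (d > 0)                           // branch: skip lights facing away
                accum += d * lightColor[i];
        }
        o.color = float4(accum, 1);
        return o;
    }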

A Higher-level of Programming Support
Although I'm not a programmer, I'm pretty confident that GeForce FX is extremely flexible for programmers in many ways. The NVIDIA CineFX engine implements the complete OpenGL and DirectX 9.0 specifications. These APIs give programmers access to many new programming tools that speed up development.

The DirectX 9.0 specification includes three major new features:

  • Pixel Shader 2.0. DirectX 9.0 exposes true programmability of the pixel shading engine. This makes procedural shading on a GPU possible for the first time (see the sketch after this list).
  • Vertex Shader 2.0. DirectX 9.0 dramatically enhances the power of the previous DirectX vertex shader by increasing the length and flexibility of vertex programs.
  • High-precision, floating-point color. DirectX 9.0 breaks the mathematical precision barrier that has limited PC graphics in the past. Precision, and therefore visual quality, is increased with 128-bit floating-point color per pixel.

To take advantage of these new features in DirectX 9.0, NVIDIA has developed the NVIDIA Cg Developer's Toolkit. Combining the Cg Toolkit with the NVIDIA GeForce FX GPU, developers can take full advantage of the API to create stunning visual effects.
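
As for procedural shading, here is the sketch promised above: the pixel color is computed by a small program instead of being fetched from a texture. Again written Cg-style, with made-up parameters:

    // Procedural stripes: no texture lookup, pure math per pixel.
    float4 main(float2 uv : TEXCOORD0,
                uniform float  frequency, // stripes per unit of texture space
                uniform float4 colorA,
                uniform float4 colorB) : COLOR
    {
        float t = frac(uv.x * frequency); // sawtooth ramp in [0, 1)
        return (t < 0.5) ? colorA : colorB;
    }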

Studio-Quality Precision
The 16- and 32-bit floating-point formats of the NVIDIA CineFX engine give developers the flexibility to create the highest-quality graphics. The 32-bit format offers the ultimate image quality, bringing full 128-bit precision processing to the entire graphics pipeline and delivering true 128-bit color in pixel shaders. The 16-bit format provides an optimal balance of image quality and performance; in fact, this format exactly matches the format and precision level used by the leading studios to produce today's feature films and special effects. Developers are free to move back and forth between these formats within a single shader program, using the format that is best suited to each particular computation.
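
A rough sketch of what that mixing looks like in shader code: in Cg, half is the 16-bit format and float the 32-bit format, and a shader can switch between them per computation. The highlight term below is simplified for illustration:

    float4 main(float2 uv      : TEXCOORD0,
                float3 normal  : TEXCOORD1,
                float3 viewDir : TEXCOORD2,
                uniform sampler2D baseMap) : COLOR
    {
        half4 base = tex2D(baseMap, uv); // texture color: 16-bit is plenty here
        // A rough view-dependent highlight, computed in full 32-bit precision
        // where lower precision would visibly band:
        float spec = pow(saturate(dot(normalize(normal), normalize(viewDir))), 32.0);
        return float4(base.rgb + spec, base.a);
    }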

Dawn technology demo .. now that's precision ..
