GeForce FX 5800 Ultra review -
The Technology #2
It's interesting to learn that major motion picture studios use 128-bit precision to create rich, realistic computer-generated scenes. By matching that film-industry level of precision, the NVIDIA GeForce FX GPUs can render high-quality images with spectacular cinematic effects, without artifacts or quality compromises, and apply those effects in real time throughout the entire scene.
The keywords for GeForce FX are:
- 0.13 micron GPU fabrication
- 125 million transistors (give or take a few, I didn't count)
- DDR2 memory with a 1 GHz effective data rate
- 51 billion floating point operations per second (51 gigaflops) in the pixel shader alone
- Advanced Programmability (3rd generation)
- High-precision color (64-bit & 128-bit color)
- High-level Shading Language
- New Vertex and Pixel Shading instructions
- Highly efficient architecture (3rd generation Lightspeed Memory Architecture)
- High Bandwidth to memory and CPU
- Shaders can be thousands of instructions long
- 8 pixels per clock cycle rendering power
- 200 Million Triangles per second
- 64-bit & 128-bit color: this is film-quality precision, in fact higher precision than the movie Toy Story 2 used. 64-bit color offers high precision at twice the performance and half the memory footprint of 128-bit. It seems developers want both 64-bit and 128-bit color precision for advanced effects.
- AGP 8x (over 2GB/sec bandwidth to the system).
- Fully DirectX9 compatible
- Pixel Shaders 2.0
The documentation we received back then clearly stated that DX9 entails a strong shift from bandwidth towards computation; basically, the new bottleneck seems to be computing efficiency rather than memory efficiency. As stated above, the NV30 GPU has 3rd-generation LMA. NVIDIA states that with a 1.0 GHz memory data rate, LMA III yields an effective internal bandwidth of 48 GB/sec, though that figure remains speculative.
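To put those headline numbers in perspective, here is a minimal back-of-the-envelope sketch in C. The 500 MHz core clock and 128-bit memory bus are assumptions based on widely published NV30 specs, not figures from NVIDIA's document above:

```c
/* Back-of-the-envelope sketch of the headline NV30 numbers quoted above.
 * Assumed figures: 500 MHz core clock and a 128-bit memory bus (widely
 * published NV30 specs, assumed here for illustration). */
#include <stdio.h>

int main(void)
{
    const double core_clock_hz  = 500e6;  /* assumed NV30 core clock        */
    const double mem_data_rate  = 1e9;    /* 1 GHz effective DDR2 data rate */
    const double bus_width_bits = 128.0;  /* assumed NV30 memory bus width  */

    /* Raw memory bandwidth: data rate * bus width. */
    double raw_bw = mem_data_rate * bus_width_bits / 8.0;      /* bytes/s */
    printf("Raw bandwidth: %.1f GB/s\n", raw_bw / 1e9);        /* 16.0    */

    /* NVIDIA's 48 GB/s "effective" claim implies LMA III compression and
     * occlusion culling are worth roughly a 3x multiplier. */
    printf("Claimed LMA III multiplier: %.1fx\n", 48e9 / raw_bw); /* 3.0  */

    /* Pixel fill rate: 8 pixels per clock * core clock. */
    printf("Fill rate: %.1f Gpixels/s\n", 8.0 * core_clock_hz / 1e9);
    return 0;
}
```

In other words, the 48 GB/sec claim only holds if LMA III's compression and occlusion culling really triple the effective throughput of the 16 GB/sec raw memory interface.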
Dawn technology demo ..
Vertex and Pixel Shaders
The engine that drives the GeForce FX is called the CineFX engine; with it, the NVIDIA GeForce FX GPU shifts the focus from simple pixel fill rate to sophisticated pixel shading. Shader programming has been advanced with a lot of new capabilities, and the hardware builds in many features to accelerate both pixel and vertex shader execution. Many programming barriers previously associated with shaders have been eliminated this way. The GeForce FX core supports long programs for even the most elaborate effects, and conditional branching capabilities for better program flow. Take a look at the differences:
A comparison between current- and new-generation platform capabilities
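To see why conditional branching matters, consider this CPU-side C sketch. Pre-DX9 hardware had no real flow control, so an "if" had to be emulated by computing both sides and blending between them; with true branching, only the taken path runs. The function names and threshold below are made up for illustration:

```c
/* CPU-side sketch of what per-pixel flow control buys a shader. */
#include <stdio.h>

static float expensive_specular(float x) { return x * x * x * x; } /* stand-in */
static float cheap_diffuse(float x)      { return x * 0.5f; }      /* stand-in */

/* Old style: evaluate BOTH paths, then select with a mask (a blend). */
static float shade_no_branching(float n_dot_l)
{
    float mask = (n_dot_l > 0.5f) ? 1.0f : 0.0f;
    return mask * expensive_specular(n_dot_l)
         + (1.0f - mask) * cheap_diffuse(n_dot_l); /* both sides always run */
}

/* New style: a true branch skips the untaken path entirely. */
static float shade_with_branching(float n_dot_l)
{
    if (n_dot_l > 0.5f)
        return expensive_specular(n_dot_l);
    return cheap_diffuse(n_dot_l);
}

int main(void)
{
    for (float x = 0.0f; x <= 1.0f; x += 0.25f)
        printf("%.2f -> %.4f / %.4f\n", x,
               shade_no_branching(x), shade_with_branching(x));
    return 0;
}
```

Both versions produce the same result; the branching version simply does half the work on any given pixel, which is exactly the kind of efficiency long shader programs need.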
A Higher-level of Programming Support
Although I'm not a programmer, I'm pretty confident that GeForce FX is extremely flexible for programmers in many ways. The NVIDIA CineFX engine implements the complete OpenGL and DirectX 9.0 specifications. These APIs give programmers access to many new programming tools that speed up development.
The DirectX 9.0 specification includes three major new features:
- Pixel Shader 2.0. DirectX 9.0 exposes true programmability of the pixel shading engine. This makes procedural shading on a GPU possible for the first time (see the sketch after this list).
- Vertex Shader 2.0. DirectX 9.0 dramatically enhances the power of the previous DirectX vertex shader by increasing the length and flexibility of vertex programs.
- High-precision, floating-point color. DirectX 9.0 breaks the mathematical precision barrier that has limited PC graphics in the past. Precision, and therefore visual quality, is increased with 128-bit floating-point color per pixel.
To take advantage of these new features in DirectX 9.0, NVIDIA has developed the NVIDIA Cg Developer Toolkit. Combining the Cg Developer Toolkit with the GeForce FX GPU gives developers the ability to take full advantage of the API and develop stunning visual effects.
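"Procedural shading" simply means the color of each pixel is computed by a program rather than fetched from a texture. Here's a minimal CPU-side C sketch of the idea; the pattern and parameters are invented for illustration, and a Pixel Shader 2.0 program could do the equivalent math per fragment on the GPU:

```c
/* Minimal sketch of procedural shading: pixel intensity computed from
 * coordinates alone, no texture data involved. Pattern is illustrative. */
#include <stdio.h>
#include <math.h>

/* A "marble-ish" intensity in [0,1] derived purely from (u,v). */
static float procedural_shade(float u, float v)
{
    float stripes = sinf(u * 20.0f + 5.0f * sinf(v * 8.0f)); /* warped bands */
    return 0.5f + 0.5f * stripes;                            /* map to 0..1 */
}

int main(void)
{
    /* Render a tiny 16x8 "framebuffer" as ASCII shades. */
    const char *ramp = " .:-=+*#";
    for (int y = 0; y < 8; y++) {
        for (int x = 0; x < 16; x++) {
            float i = procedural_shade(x / 16.0f, y / 8.0f);
            putchar(ramp[(int)(i * 7.99f)]);
        }
        putchar('\n');
    }
    return 0;
}
```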
Studio-Quality Precision
The 16- and 32-bit floating point formats of the NVIDIA CineFX engine give developers the flexibility to create the highest-quality graphics. The 32-bit format offers the ultimate image quality, bringing full 128-bit precision processing to the entire graphics pipeline and delivering true 128-bit color in pixel shaders. The 16-bit format provides an optimal balance of image quality and performance. In fact, this format exactly matches the format and precision level used by the leading studios to produce today's feature films and special effects. Developers are free to move back and forth between these formats within a single shader program, using the format that is best suited to each particular computation.
Dawn technology demo .. now that's precision ..
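For the curious, the 64-bit vs. 128-bit trade-off boils down to four fp16 halves versus four fp32 floats per RGBA pixel. This small C sketch, based on the standard fp16 (10-bit significand) and fp32 (23-bit significand) formats, shows both the precision gap and the 2x memory claim mentioned earlier:

```c
/* Sketch of 64-bit vs 128-bit color: four fp16 halves vs four fp32 floats.
 * fp16 has a 10-bit stored significand, fp32 a 23-bit one. */
#include <stdio.h>

int main(void)
{
    /* Relative step size near 1.0 (machine epsilon). */
    double eps_fp16 = 1.0 / (1 << 10);   /* 2^-10 ~ 9.8e-4, ~3 digits */
    double eps_fp32 = 1.0 / (1 << 23);   /* 2^-23 ~ 1.2e-7, ~7 digits */
    printf("fp16 step near 1.0: %g\n", eps_fp16);
    printf("fp32 step near 1.0: %g\n", eps_fp32);

    /* Storage per RGBA pixel: the 2x memory difference in the article. */
    printf("64-bit color:  %d bytes/pixel\n", 4 * 2);   /*  8 */
    printf("128-bit color: %d bytes/pixel\n", 4 * 4);   /* 16 */
    return 0;
}
```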