HiS Radeon x800 XL IceQ II Turbo


Temporal AA
Temporal AA - it sounds like something from Star Trek, aye Cap'n! Although the product offers nothing tremendously new in terms of technology features and was mainly made faster compared to previous generations, there is still something to get excited about on the technology side. Ever since the release of the x800 series the drivers have included a new antialiasing mode called Temporal AA. I must add that the entire R3x0 series is/was able to handle Temporal AA as well.

Antialiasing -> the simple explanation: it is there to get rid of the "jaggies" on diagonal lines. Look at the picture below.

It boils down to this: to recover a signal, or image, you need a minimum number of samples to give a realistic representation of it. The problems start when texture maps are either too close to or too far away from the viewpoint. If the polygon is far away, you only have a limited number of pixels to show the texture map on, so logically a lot of the real pixels of your texture map have to be dropped.

Antialiased line - if you draw a straight line at an angle in a paint program and zoom in, you will discover that the line looks like a staircase. To remove this effect and make it look like a real line again, pixels in intermediate colors are added along the sides of the line.

This creates some sort of interlace effect: one line is shown and the next is not. Weird patterns can appear, making the texture map look completely different from the real one. A similar problem occurs when the polygon is close to you: you need more information than is available, which results in the generation of random noise (meaningless values).
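To make the sampling story a bit more concrete, here is a tiny Python sketch of the basic idea - this is obviously not how the hardware does it, just the textbook principle of supersampling: take several samples inside each pixel and average them, so pixels on the edge end up with in-between shades instead of a hard staircase.

```python
# Minimal illustration of aliasing vs. supersampled antialiasing.
# Not any vendor's implementation - just the textbook idea: take more
# samples per pixel and average them, so edge pixels get in-between shades.

def coverage(x, y):
    """Ideal image: 1.0 above the diagonal line y = x, 0.0 below."""
    return 1.0 if y > x else 0.0

def render(width, height, subsamples):
    """Sample each pixel on a subsamples x subsamples grid and average."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            total = 0.0
            for sy in range(subsamples):
                for sx in range(subsamples):
                    # Sample positions spread evenly inside the pixel.
                    x = px + (sx + 0.5) / subsamples
                    y = py + (sy + 0.5) / subsamples
                    total += coverage(x, y)
            row.append(total / (subsamples * subsamples))
        image.append(row)
    return image

aliased = render(8, 8, 1)   # 1 sample per pixel: hard 0/1 staircase
smoothed = render(8, 8, 2)  # 4 samples per pixel: edge pixels turn grey
for row in smoothed:
    print(" ".join(f"{v:.2f}" for v in row))
```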

One of the best features compared to the competition is ATI's approach towards Antialiasing. In the name of performance ATI introduced Temporal (time) AA.

This feature has been present in the hardware ever since the introduction of the R3xx processors, yet it was never used until now. The idea behind this AA mode is to use different sample patterns over time: in simple wording, you sample the source at several consecutive points in time and then combine the samples for the final output. The method basically renders the image with more sample positions than any single frame pays for, and relies on the quick succession of displayed frames to blend them together. ATI's method could, perhaps, be better described as a multi-pass spatial AA method, jittered AA, or even temporal dithering. The advantage? Get this: 2x Temporal AA offers roughly the same quality as 4x AA while requiring only the performance of 2x AA.
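In a nutshell the trick can be sketched like this - a simplified model of the principle, not ATI's actual driver code or sample grid: the two sample positions are jittered differently on even and odd frames, and because the display shows these frames in quick succession, your eyes average them into something close to a genuine 4-sample result.

```python
# Simplified model of temporal AA: 2 samples per pixel per frame, but the
# sample pattern alternates between frames. Averaging two consecutive
# frames gives the same result a single 4-sample pattern would.
# Illustration of the principle only; the offsets below are made up.

def shade(x, y):
    """Stand-in for the scene: brightness of an ideal diagonal edge."""
    return 1.0 if y > x else 0.0

PATTERN_EVEN = [(0.25, 0.25), (0.75, 0.75)]  # 2 jittered offsets, frames 0, 2, 4...
PATTERN_ODD  = [(0.75, 0.25), (0.25, 0.75)]  # 2 different offsets, frames 1, 3, 5...

def render_pixel(px, py, pattern):
    samples = [shade(px + ox, py + oy) for (ox, oy) in pattern]
    return sum(samples) / len(samples)

px, py = 3, 3  # a pixel that the edge passes through
frame_a = render_pixel(px, py, PATTERN_EVEN)   # what you see on even frames
frame_b = render_pixel(px, py, PATTERN_ODD)    # what you see on odd frames
perceived = (frame_a + frame_b) / 2            # what the eye averages over time

four_tap = render_pixel(px, py, PATTERN_EVEN + PATTERN_ODD)  # ordinary 4x AA
print(perceived, four_tap)  # same value, but each frame only paid for 2 samples
```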

The big downside of Temporal AA is that when framerates drop very low you will notice an irritating flickering effect. That means that new games which demand a lot of the graphics core, or systems that are not powerful enough to cough up high framerates, are not well suited for this AA technology at all. It will never be ATI's primary AA technology.

Unfortunately, due to the required V-Sync, this AA mode cannot be benchmarked correctly.


3Dc - Compression
Almost any... well, any graphics card nowadays makes use of texture compression technology. It has been discussed here on more than one occasion; I'm sure you recognize terms like S3TC and DXTC. Basically you reduce the byte-size of a texture while maintaining the best possible quality. However, compression equals artifacts, and thus image degradation at some point. 3Dc is a compression technology designed to bring out fine details in games while minimizing memory usage. It is the first compression technique optimized to work with normal maps, which allow fine per-pixel control over how light reflects from a textured surface. With up to 4:1 compression possible, game designers can now include up to four times the detail without changing the amount of graphics memory required and without impacting performance.

Let's look at what ATI has to say and analyze this a bit.

3Dc is a new compression technology developed by ATI, and introduced in the new RADEON X800 series of Visual Processing Units. This technology is designed to allow game developers to pack more detail into real time 3D images than ever before, making 3Dc a key enabler of the HD Gaming vision.

A close-up of the Optico character from ATi's Double Cross demo, showing the increase in fine detail made possible by 3Dc compression.

Today's graphics processors rely heavily on data and bandwidth compression techniques to reach ever increasing levels of performance and image quality. Rendering a scene in a modern 3D application requires many different kinds of data, all of which must compete for bandwidth and space in the local memory of the graphics card. For games, texture data tends to be the largest consumer of these precious resources, making it one of the most obvious targets for compression. A set of algorithms known as DXTC (DirectX Texture Compression) has been widely accepted as the industry standard for texture compression techniques.


Introduced in 1999, along with S3TC (its counterpart for the OpenGL API), DXTC has been supported by all new graphics hardware for the past few years, and has seen widespread adoption by game developers. It has proven particularly effective for compressing two-dimensional arrays of color data stored in RGB and RGBA formats. With the appearance of graphics hardware and APIs that support programmable pixel shaders in recent years, researchers and developers have come up with a variety of new uses for textures. A variety of material properties such as surface normals, shininess, roughness and transparency can now be stored in textures along with the traditional color information.
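To give you a feel for the numbers, here is a quick back-of-the-envelope calculation in Python; the block layout described in the comments is that of the basic DXT1 format.

```python
# Back-of-the-envelope numbers for the basic DXT1 format: every 4x4 block
# of pixels is stored as two 16-bit reference colors plus a 2-bit index
# per pixel that selects one of four colors interpolated between them.

BLOCK_PIXELS = 4 * 4
block_bits = 2 * 16 + BLOCK_PIXELS * 2       # 64 bits = 8 bytes per block
bits_per_pixel = block_bits / BLOCK_PIXELS   # 4 bits per pixel

uncompressed_rgba = 32                       # 8 bits per channel, 4 channels
print(f"DXT1: {bits_per_pixel:.0f} bits/pixel, "
      f"{uncompressed_rgba / bits_per_pixel:.0f}:1 vs 32-bit RGBA")

# A 1024x1024 texture, for example:
texels = 1024 * 1024
print(f"uncompressed: {texels * uncompressed_rgba // 8 // 1024} KB, "
      f"DXT1: {texels * int(bits_per_pixel) // 8 // 1024} KB")
```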

Well then, there are some negatives to using normal maps. One that is very easy to explain is that the load on the graphics processor will increase. Another is that a larger amount of data is required: the more detail the developer wishes to include, the higher the resolution of the normal map has to be, and the more memory bandwidth is needed. Therefore ATi developed 3Dc, which compresses normal maps up to 4:1 without any significant loss of quality. The x800 range and upwards incorporates this technology; whether it will be included in DirectX remains a mystery. Developers can work around that by applying some sort of add-on, just like we saw with Unreal when it started to support S3TC.
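The reason a format like 3Dc can be this aggressive on normal maps is that a normal is a unit-length vector: only the X and Y components need to be stored, and the pixel shader reconstructs Z on the fly. A rough sketch of that reconstruction step, written here in plain Python rather than real shader code:

```python
# A normal map stores unit-length vectors, so a 3Dc-style format only needs
# to keep the X and Y components; Z is recomputed in the pixel shader as
# z = sqrt(1 - x^2 - y^2). Plain Python here instead of actual shader code.

from math import sqrt

def decode_channel(byte_value):
    """Map a stored 0..255 channel back to the -1..1 range."""
    return byte_value / 255.0 * 2.0 - 1.0

def reconstruct_normal(stored_x, stored_y):
    x = decode_channel(stored_x)
    y = decode_channel(stored_y)
    # Clamp to guard against rounding pushing x^2 + y^2 slightly above 1.
    z = sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

# A texel that mostly points "up" out of the surface, slightly tilted:
print(reconstruct_normal(160, 128))
```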


Anisotropic Filtering
So ATI has now revised Anisotropic Filtering a bit. The x800 series supports up to 16x Anisotropic Filtering (AF), with settings of 2, 4, 8 or 16 texture samples per pixel. The user can select bilinear or trilinear filtering by choosing either performance or quality mode in the driver properties.

"SMOOTHVISION HD anisotropic filtering supports 2, 4, 8, or 16 texture samples per pixel. Each setting can be used in a performance mode that uses bilinear samples, or a quality mode that uses trilinear samples. There is also a new capability to support intermediate modes, to help strike the ideal balance between performance and quality."

This image shows you how anisotropic filtering works and what good it does for you. I always play my games at the highest possible AF setting, in this case 16x.
