XFX GeForce 7800 GS Extreme Edition
Posted by Hilbert Hagedoorn on: 02/22/2006 08:00 AM
Inside the graphics processor
Allow me to briefly discuss the graphics core of the GeForce 7800 GS.
The codename: we always love them. The 7800 GS series (and at some point I really do expect a PCI-Express version) was developed within the G70 family of GPUs; G70 being the chip we all know and love as the GeForce 7800 GTX.
NVIDIA_BR02.DEV_00F5.1 = "NVIDIA GeForce 7800 GS"
The core: the 7800 GS is in fact simply a G70 core with some of its vertex and pixel units disabled. The core has 302 million transistors and is built on a 0.11-micron (110 nm) fabrication process.
Since we are on the topic of the graphics core: inside it there are precisely six vertex units active, one more than I initially expected. The number of pixel pipelines is identical to the 6800 GT and Ultra models: there are sixteen of them. Let me briefly explain what happens in the pixel pipeline so you understand its importance. Each pixel that is rendered on your screen goes through a pipe where it receives its color, effects and so on. Each time that pixel is altered it passes through the pixel pipeline; one pass is one clock cycle. You can imagine that sixteen pipes is nothing to be ashamed of. The Series 7 7800 GT and GTX actually have 20 and 24 of them, respectively. Last but not least, for the freaks: the 7800 GS AGP has eight ROPs. ROP is short for Raster Operation, the portion of the pipeline responsible for anti-aliasing, blending and Z-buffer compression. Simply stated, a ROP is the output engine of a pixel shader pipeline. The pipeline is scalable; pipes are enabled in sets of four, which we call quads.
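For the number-crunchers, the pipe and ROP counts translate straight into theoretical fillrates once you know the core clock (375 MHz at stock). A quick back-of-the-envelope sketch:

```python
# Theoretical fillrates for the 7800 GS AGP at stock clocks.
# Pixel fillrate is bounded by the ROPs; texture fillrate by the pipes.
core_clock_hz = 375e6   # 375 MHz graphics core
rops = 8                # raster operation units (pixel output)
pixel_pipes = 16        # active pixel pipelines

pixel_fillrate = rops * core_clock_hz          # pixels written per second
texel_fillrate = pixel_pipes * core_clock_hz   # texture samples per second

print(pixel_fillrate / 1e9)  # 3.0 Gpixels/s
print(texel_fillrate / 1e9)  # 6.0 Gtexels/s
```

Real-world numbers land below these peaks, of course; they are upper bounds, not benchmarks.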
Small note here: you cannot enable disabled pipelines or vertex units with RivaTuner! It's simply not possible.
The clockworks: the standard AGP version of the 7800 GS is clocked at 375 MHz on the graphics processor and 2x600 MHz on the memory. The memory is GDDR3 with 38.4 GB/s of theoretical bandwidth. That's great memory bandwidth for a sixteen-pipeline product. XFX, however, as explained on the previous page, clocks the card considerably faster by default.
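That 38.4 GB/s figure follows directly from the memory clock and the card's 256-bit memory bus; GDDR3 transfers data twice per clock. A minimal sketch of the arithmetic:

```python
# Theoretical memory bandwidth = effective data rate x bus width in bytes.
memory_clock_hz = 600e6                 # physical memory clock (the "600" in 2x600)
effective_rate = 2 * memory_clock_hz    # GDDR3 is double data rate: 1200 MT/s
bus_width_bits = 256                    # 256-bit memory interface on the 7800 GS

bandwidth_bytes = effective_rate * bus_width_bits / 8
print(bandwidth_bytes / 1e9)  # 38.4 GB/s, matching the spec sheet
```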
|Specs||GeForce 6600||GeForce 6600 GT||GeForce 6800||GeForce 6800 GS||GeForce 6800 GT||GeForce 6800 Ultra||GeForce 7800 GS||GeForce 7800 GT||GeForce 7800 GTX|
|Transistors||?||?||222 million||222 million||222 million||222 million||302 million||302 million||302 million|
|Process||110nm||110nm||130nm||110nm (PCIe)||130nm||130nm||110nm||110nm||110nm|
|Core clock||300 MHz||500 MHz||Up to 400 MHz||425 MHz (PCIe)||350 MHz||400 MHz||375 MHz||400 MHz||430 MHz|
|Memory||128MB DDR1||128MB GDDR3||128MB DDR1||128/256MB GDDR3||256MB GDDR3||256MB GDDR3||256MB GDDR3||256MB GDDR3||256MB GDDR3|
|Memory clock||Up to manufacturer||2x500 MHz||2x325 MHz||2x500 MHz||2x500 MHz||2x550 MHz||2x600 MHz||2x500 MHz||2x600 MHz|
|FP operations||FP16, FP32 (all models)|
|Pixel shaders||Pixel Shaders 3.0 (all models)|
|Vertex shaders||Vertex Shaders 3.0 (all models)|
What is a shader? What do we need to render a three-dimensional object in 2D on your monitor? We start off by building some sort of structure that has a surface; that surface is built from triangles. Why triangles? Because they are quick to calculate. How is each triangle processed? Each triangle has to be transformed according to its relative position and orientation to the viewer. Each of the three vertices the triangle is made up of is transformed to its proper view-space position. The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. Finally the triangle needs to be projected to the screen in order to rasterize it. During rasterization the triangle will be shaded and textured.
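The transform-and-project steps above can be sketched in a few lines. This toy version hard-codes a camera at the origin looking down -z and skips the full 4x4 matrix math a real pipeline uses; all names here are illustrative, not a real graphics API.

```python
# One triangle's vertices are moved into view space, then perspective-
# projected to 2D screen coordinates, ready for rasterization.

def transform(vertex, camera_pos):
    # View-space position: the vertex relative to the camera.
    return tuple(v - c for v, c in zip(vertex, camera_pos))

def project(view_vertex, focal_length=1.0):
    # Perspective projection: divide x and y by depth (camera looks down -z).
    x, y, z = view_vertex
    return (focal_length * x / -z, focal_length * y / -z)

triangle = [(0.0, 1.0, -2.0), (-1.0, -1.0, -2.0), (1.0, -1.0, -2.0)]
screen = [project(transform(v, (0.0, 0.0, 0.0))) for v in triangle]
print(screen)  # three 2D points: [(0.0, 0.5), (-0.5, -0.5), (0.5, -0.5)]
```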
Graphic processors like the GeForce series are able to perform a certain amount of these tasks. The first generation was able to draw shaded and textured triangles in hardware. The CPU still had the burden to feed the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, etc. Integrating the triangle setup into the chip logic was the next step and finally even transformation and lighting (TnL) was possible in hardware, reducing the CPU load considerably (GeForce 256). The big disadvantage was that a game programmer had no direct (i.e. program driven) control over transformation, lighting and pixel rendering because all the calculation models were fixed on the chip. And now we finally get to the stage where we can explain Shaders. Vertex and Pixel shaders allow developers to code customized transformation and lighting calculations as well as pixel coloring functionality. Each shader is basically nothing more than a relatively small program executed on the graphics processor to control either vertex or pixel processing.
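To make "a small program executed on the graphics processor" concrete, here is what a very simple pixel shader computes for each pixel: basic Lambertian (N dot L) diffuse lighting. This is a CPU-side sketch of the idea, not actual shader code for any GPU.

```python
# Toy "pixel shader": scale the surface color by how directly the
# surface faces the light, clamped so back-facing light contributes zero.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pixel_shader(normal, light_dir, base_color):
    intensity = max(0.0, dot(normal, light_dir))  # N dot L, clamped
    return tuple(c * intensity for c in base_color)

# Surface facing the light head-on -> full brightness.
print(pixel_shader((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.25)))  # (1.0, 0.5, 0.25)
```

On real hardware this little function runs once per pixel, millions of times per frame, which is exactly why pipeline counts matter.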
Now then, our usual blurb: what are the major advantages of the Series 6 and 7 products? Feature-wise we are looking at pretty much the same technology we have known for 14-15 months now. What you need to remember is that any Series 6 or 7 graphics card can achieve what a modern game expects from it. Obviously the keyword over the past couple of years has been "shader technology". It really changed the way we look at games from a graphical point of view, as it allows game programmers to take games to the next level in both visual and performance terms.
As always, this is the point where we quickly discuss Shader Model 3.
Talking about Shader Model 3
If you program or play computer games, or even recently attempted to purchase a video card, then you will no doubt have heard the terms "Vertex Shader" and "Pixel Shader". The step from 2.0 to 3.0 was a small one, and most Shader Model 2.0 games can easily be upgraded to Model 3.0, which can bring more performance to the gaming experience. DirectX 9 was recently updated, and we are going to see more and more support for 3.0 shaders.
Is SM 3.0 technology a huge visual advantage over 2.0? Nope, not even the slightest bit. Yet any technological advantage is always welcome and preferred over a previous generation's development. What you need to remember about Shader Model 3.0 is that it can and will be used in several critical places where it gives a performance boost, and graphics cards are all about performance, my friends. Both ATI and NVIDIA now offer Shader Model 3 support in their recent products: GeForce Series 6 and newer for NVIDIA, and the X1000 series and newer for ATI.
Talking about HDR
Another big, trendy implementation that will bring games closer to a movie-like quality experience is HDR.
Both ATI and NVIDIA have been focusing extremely hard on HDR. They put a lot of money into their technology to support HDR in the best possible way, and they should, as it is a fantastic effect that brings so much more to your gameplay experience. HDR is something you all know from games like Far Cry: extremely bright lighting that brings a really cool cinematic effect to gaming. This effect is becoming extraordinarily popular.
Valve recently released a new HL2 level in the form of Half-Life 2: Lost Coast. Go download it, as it will show you and amaze you with what HDR can do. The difference is obvious. HDR means High Dynamic Range. It facilitates the use of color values way beyond the normal range of the color palette in an effort to produce a more extreme form of lighting. Typically this trick is used to contrast really dark scenery: extreme sunlight, over-saturation or over-exposure are good examples of what is possible. The simplest way to describe it would be controlling the amount of light present at a certain position in a 3D scene.
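Because HDR lighting values can go far above the displayable range, they must be compressed (tone-mapped) back down before reaching your monitor. A minimal sketch using one common, simple operator, L / (1 + L) (often called the Reinhard operator); this is illustrative and not what any specific game engine necessarily does:

```python
# Tone mapping: squeeze unbounded HDR luminance into the 0..1 range
# smoothly, instead of hard-clipping everything above 1.0 to white.
def tone_map(luminance):
    return luminance / (1.0 + luminance)

for hdr in (0.5, 1.0, 4.0, 100.0):
    print(hdr, "->", round(tone_map(hdr), 3))
# Bright values are compressed gently; only extreme values approach pure white.
```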
HDR is already present in Far Cry, 3DMark06, Splinter Cell: Chaos Theory and in Half Life 2: Lost Coast. It will be available in Unreal 3 and likely a large number of other games. Let the screenshots do the talking.