GeForce GTX 280 review
Posted by Hilbert Hagedoorn on: 06/15/2008 01:00 PM
The quick 101 - Shaders explained
Dude .. what is a shader?
To understand what is going on inside that graphics card of yours, please allow me to explain what is actually happening inside that graphics processor and explain shaders in very easy to understand terminology (I hope). That and how it relates to rendering all that gaming goodness on your screen (the short version).
What do we need to render a three-dimensional object in 2D on your monitor? We start off by building some sort of structure that has a surface, and that surface is built from triangles. Triangles are great because they are really quick and easy to compute. Next we need to process each triangle: each one has to be transformed according to its relative position and orientation to the viewer.
The next step is to light the triangle by taking the transformed vertices and applying a lighting calculation for every light defined in the scene. Finally the triangle needs to be projected to the screen in order to rasterize it. During rasterization the triangle is shaded and textured.
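The steps above can be sketched in plain Python as a toy software version (purely illustrative, not how a real GPU implements them; the matrix, light direction and field-of-view values are made up for the example):

```python
import math

def transform(vertex, matrix):
    """Multiply a 3D vertex (as a 4-component row vector) by a 4x4 matrix."""
    x, y, z = vertex
    v = (x, y, z, 1.0)
    return tuple(sum(v[i] * matrix[i][j] for i in range(4)) for j in range(3))

def diffuse_light(normal, light_dir):
    """Simple Lambertian lighting term: brightness = max(0, N . L)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

def project(vertex, screen_w, screen_h, fov=90.0):
    """Perspective-project a camera-space vertex to pixel coordinates."""
    x, y, z = vertex
    f = 1.0 / math.tan(math.radians(fov) / 2.0)
    sx = (x * f / z * 0.5 + 0.5) * screen_w
    sy = (0.5 - y * f / z * 0.5) * screen_h
    return sx, sy

# One triangle vertex, a do-nothing (identity) transform, and a light
# shining straight at the surface.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
v = transform((1.0, 1.0, 5.0), identity)
brightness = diffuse_light((0.0, 0.0, -1.0), (0.0, 0.0, -1.0))
print(project(v, 640, 480), brightness)
```

On early fixed-function hardware, exactly these calculations were wired into the chip; the CPU or the GPU just executed them with no way for the programmer to change the math.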
Graphics processors like the GeForce series are able to perform a large number of these tasks. The first generation (roughly ten years ago) could already draw shaded and textured triangles in hardware, which was a revolution at the time.
The CPU still had the burden of feeding the graphics processor with transformed and lit vertices, triangle gradients for shading and texturing, and so on. Integrating the triangle setup into the chip logic was the next step, and finally even transformation and lighting (TnL) became possible in hardware, reducing the CPU load considerably (surely everyone remembers the GeForce 256, right?).
The big disadvantage at that time was that a game programmer had no direct (i.e. program driven) control over transformation, lighting and pixel rendering because all the calculation models were fixed on the chip. This is the point in time where shader design surfaced.
We now finally get to the stage where we can explain Shaders.
In the year 2000 DirectX 8 was released and vertex and pixel shaders arrived. They allowed software and game developers to program tailored transformation and lighting calculations as well as pixel coloring functionality, which added a whole new dimension to the gaming experience; things started to look much more realistic.
Each shader is basically nothing more than a relatively small program (programming code) executed on the graphics processor to control either vertex, pixel or geometry processing. So a shader unit is in fact a small floating-point processor inside your GPU.
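To make that concrete, here is a toy "pixel shader" written in Python: one small function that is invoked once for every pixel, just as a real pixel shader runs per pixel on the GPU's shader units (purely illustrative; real shaders are written in languages like HLSL or GLSL, and the gradient pattern here is made up for the example):

```python
def pixel_shader(x, y, width, height):
    """Runs once per pixel: returns an (r, g, b) color, here a 2D gradient."""
    r = x / (width - 1)   # red ramps up from left to right
    g = y / (height - 1)  # green ramps up from top to bottom
    return (r, g, 0.0)

# The rasterizer calls the shader program for every covered pixel.
width, height = 4, 2
framebuffer = [[pixel_shader(x, y, width, height) for x in range(width)]
               for y in range(height)]
print(framebuffer[0][0], framebuffer[1][3])  # (0.0, 0.0, 0.0) (1.0, 1.0, 0.0)
```

The point of programmable shading is that the developer writes the body of that function, rather than being stuck with whatever formula is fixed in the chip.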
Advancing to the year 2002, we see the release of DirectX 9. With pixel and vertex shader version 2.0, DX9 allowed much longer shader programs than before.
In the past, graphics processors had dedicated units for the different types of operations in the rendering pipeline, such as vertex processing and pixel shading.
Last year, with the introduction of DirectX 10, it was time to move away from that rather inefficient fixed pipeline and create a new unified architecture.
So each time we mention a shader processor, we mean one of the many such processors inside your GPU. When I mention a shader, that's the program executed on the shader engine (the accumulated shader processor domain).
NVIDIA likes to call these stream processors. Same idea, slightly different context. GPUs are stream processors: processors that can operate in parallel on many independent vertices and fragments at once. A stream is simply a set of records that require similar computation.
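The stream idea fits in a few lines of Python: one kernel function applied independently to every record in the stream. Because no record depends on another, a GPU is free to process them all in parallel; in this sketch we simply map the kernel over the records sequentially (the kernel and data are made up for the example):

```python
def kernel(vertex):
    """The same small computation, applied independently to every record."""
    x, y, z = vertex
    return (x * 2.0, y * 2.0, z * 2.0)  # e.g. a uniform scale

# A "stream" is just a set of records that all need the same computation.
stream = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
results = list(map(kernel, stream))
print(results)  # [(2.0, 0.0, 0.0), (0.0, 2.0, 0.0), (0.0, 0.0, 2.0)]
```

That independence between records is exactly what lets a GPU with hundreds of stream processors chew through millions of vertices and fragments per frame.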
Okay, we now have several types of shaders: pixel, vertex, geometry and, with the coming of DX11, likely a compute shader, which is a pretty cool one, as you can run physics acceleration without needing to go through a graphics API. Fun thing is, this is already possible through the CUDA engine, but we'll talk about that later on.
There. I do hope you now understand the concept of shaders and shader processors. Let's talk a little about the GeForce series 200, shall we?