eVGA GeForce GTX 280 HC16 Hydro Copper review


GeForce GTX series 200

It should not require any introduction, but the product we review today is the GeForce GTX 280. The GTX 200 series GPU packs roughly 1.4 billion transistors, making it the biggest chip that NVIDIA has ever built. The GeForce 8800 series, for example, had roughly 700 million transistors, so NVIDIA doubled up the previous transistor count. Interestingly enough, that also doubles up the die size of the processor, so you'd expect NVIDIA to have moved to a smaller fabrication process for this graphics processor. They did not: the new architecture is still based on a 65nm fabrication process. The chip is being made at TSMC and, according to them, is the biggest one they've ever made, with a huge die measuring 24 x 24 mm, resulting in a die area of 576 mm². Not many chips will actually fit on a 300 mm wafer. We expect NVIDIA to move to a smaller fab process (55nm) pretty soon though.
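To put that die size into perspective, here is a quick back-of-the-envelope sketch (plain Python, using the figures from the paragraph above). It ignores edge loss and scribe lines, so treat it as an upper bound rather than an actual yield figure:

```python
import math

# Rough gross-die estimate for GT200: a 24 x 24 mm die (576 mm^2)
# on a 300 mm wafer. Edge loss and scribe lines are ignored, so
# this is an upper bound, not a real yield number.
die_mm = 24.0
wafer_diameter_mm = 300.0

die_area = die_mm * die_mm                           # 576 mm^2
wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2  # ~70,686 mm^2

print(f"Die area: {die_area:.0f} mm^2")
print(f"Gross dies per wafer (area only): {wafer_area / die_area:.0f}")  # ~123
```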

  • 1.4 billion transistors
  • 933 GFLOPS processing power
  • 240 processing (shader) cores (GTX 280)
  • 192 processing (shader) cores (GTX 260)
  • DirectX 10
  • New power management enhancements
  • CUDA parallel processing
  • GeForce PhysX

Where are all these transistors going? Obviously a big chunk of them is utilized for the shader cores, and shader cores this product surely has: 240 of them on the GeForce GTX 280. The new shader architecture has some cool new features over the last generation of products. Sitting in-between each set of shader processors is an integration of local cache memory (16KB of software-managed cache).

That cache sits in-between a block of 8 shader cores. Simply put, the data no longer has to leave the GPU to be crunched (normally that trip would go through the regular framebuffer memory), which is a very significant improvement in the architecture. Each shader cluster holds three blocks of eight shader processors, and there are ten clusters, totaling 240 shader units for the GeForce GTX 280. And if you do the math with me real quick, the GeForce GTX 260 then has to have 8 shader clusters for a total of 192 shader processors.
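If you want to verify that math, the cluster layout reduces to a couple of multiplications. A minimal sketch, using only the figures from this paragraph:

```python
# Sanity check on the shader math described above: GT200 groups
# 8 shader processors per block, 3 blocks per cluster.
sp_per_block = 8
blocks_per_cluster = 3
sp_per_cluster = sp_per_block * blocks_per_cluster   # 24

print(sp_per_cluster * 10)  # 240 -> GeForce GTX 280 (10 clusters)
print(sp_per_cluster * 8)   # 192 -> GeForce GTX 260 (8 clusters)
```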

  • GeForce GTX 280: 602MHz core clock, 240 shaders, 1296MHz shader clock, 1107MHz memory, 1GB memory, 512-bit memory bus, 141.7GB/s memory bandwidth, 48.2 billion texels/sec texture fill rate
  • GeForce GTX 260: 576MHz core clock, 192 shaders, 1242MHz shader clock, 999MHz memory, 896MB memory, 448-bit memory bus, 111.9GB/s memory bandwidth, 36.9 billion texels/sec texture fill rate

The reference GeForce GTX 280 has a pretty amazing 240 stream processors and runs at a core clock frequency of 602MHz. There are more clock domains inside that GPU though: the shader processors run at 1296MHz and the memory at 1107MHz (2214MHz effective). All-in-all we feel the clocks are a little bit on the conservative side. The GTX 280 has eight 64-bit memory controllers, 8x 64-bit = 512-bit.
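The bandwidth figures in the spec list above follow directly from the bus width and the effective memory clock. A small sketch, assuming the standard double data rate of GDDR3 (everything else is taken from this article):

```python
# bandwidth = (bus width in bytes) x (effective transfer rate)
def bandwidth_gb_s(bus_bits, mem_mhz):
    effective_mt_s = mem_mhz * 2  # GDDR3 transfers twice per clock
    return (bus_bits / 8) * effective_mt_s * 1e6 / 1e9

print(f"GTX 280: {bandwidth_gb_s(512, 1107):.1f} GB/s")  # 141.7 GB/s
print(f"GTX 260: {bandwidth_gb_s(448,  999):.1f} GB/s")  # 111.9 GB/s
```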

This high-end part has 1GB of GDDR3 memory on a 512-bit memory bus that binds to the eight memory controllers inside the GPU. At the end of the pipeline we run into an improved ROP (Raster Operation) engine; the GTX 280 has 32 of these units.

Initially the pricing model was:

  • GeForce GTX 260 449 USD
  • GeForce GTX 280 649 USD

Due to the hefty competition, these prices have since dropped to:

  • GeForce GTX 260 289 USD
  • GeForce GTX 280 449 USD

All that transistor madness results in roughly 933 GFLOPS of performance. A tad unexpected is that this card needs both a 6-pin and an 8-pin power connector to get enough juice. NVIDIA claims a TDP (peak wattage) of roughly 235 Watts, which in all honesty is not even that bad, considering the GeForce 8800 Ultra isn't that far off from that number either. The card features a PCIe 2.0 interface and PhysX acceleration. Compute Unified Device Architecture (CUDA) is also supported, allowing applications to offload general-purpose computation to the GPU and read back the processed data. The multipurpose GPU is a reality; as we previously tested, applications like Badaboom allow you to transcode video using the GPU.
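For the curious, that 933 GFLOPS figure is the theoretical peak. A short sketch of the arithmetic, assuming the commonly cited dual-issue MAD + MUL per shader clock (3 FLOPs per shader per cycle, a detail not spelled out in this article):

```python
# Theoretical single-precision throughput: each of the 240 shader
# processors retires a MAD (2 FLOPs) plus a MUL (1 FLOP) per clock.
shaders = 240
shader_clock_hz = 1296e6
flops_per_clock = 3  # assumed dual-issue MAD + MUL

gflops = shaders * shader_clock_hz * flops_per_clock / 1e9
print(f"{gflops:.0f} GFLOPS")  # ~933
```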
