Overclocking & Tweaking
Before we dive into a wide-ranging series of tests and benchmarks, we need to explain overclocking. With most videocards we can use a few easy tricks to boost overall performance a little. You can do this at two levels: tweaking, by enabling registry or BIOS hacks or even tampering with image quality, and overclocking, which by far will give you the best possible results.
What do we need?
One of the best tools for overclocking NVIDIA and ATI videocards is our own RivaTuner, which you can download here. If you own an NVIDIA graphics card, NVIDIA actually has very nice built-in options that can be found in the display driver properties. They are hidden, though, and you'll need to enable them by installing a small registry hack called CoolBits, which you can download right here (after downloading and unpacking, just double-click the .reg file and confirm the import).
Where should we go?
Overclocking: by increasing the frequency of the videocard's memory and GPU, we make the videocard perform more calculation clock cycles per second. It sounds hard, but it really can be done in less than a few minutes. I always recommend that novice users and beginners not increase the core and memory clocks by more than 5-10%. Example: if your card runs at 300 MHz, I suggest you don't increase the frequency any higher than 330 MHz.
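The 5-10% rule of thumb above is simple arithmetic. As a quick illustration, here is a small Python sketch (the function name is ours, purely for illustration, and is not part of RivaTuner or any driver tool):

```python
# Illustrative sketch of the 5-10% overclocking rule of thumb.
# Clock values are examples; substitute your own card's defaults.

def safe_overclock_range(default_mhz):
    """Return a conservative (5%) and upper (10%) clock target in MHz."""
    return default_mhz * 1.05, default_mhz * 1.10

low, high = safe_overclock_range(300)  # e.g. a card with a 300 MHz core
print(f"Try between {low:.0f} and {high:.0f} MHz")  # Try between 315 and 330 MHz
```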
More advanced users often push the frequency way higher. Once your 3D graphics start to show artifacts such as white dots ("snow"), you should back down 10-15 MHz and leave it at that.
The core is a somewhat different story. When you overclock it too hard, it'll start to show artifacts or empty polygons, or it may even freeze. I recommend that you back down at least 15 MHz from the point where you first notice an artifact. Look carefully and observe well.
All in all... do it at your own risk.
Overclocking your card too far, or running it constantly at its maximum limit, might damage it, and such damage is usually not covered by your warranty.
You will benefit the most from overclocking a product that is limited, or as you may call it, "tuned down." We know that this graphics core is often held back by clock frequency or bandwidth limitations; therefore, by increasing the memory and core frequencies we should be able to witness higher performance. A simple trick to get some more bang for your buck.
The GeForce 6600 GT runs at 500 MHz core at default clock speeds. Its DDR memory runs at (2x) 500, thus 1000 MHz effective. Overclocked, it was capable of running at a 550 MHz core and 1200 MHz memory frequency.
That high 525+ MHz core frequency is something we have observed with all 6600 cards tested to date; it's just amazing. The non-GT models, which run at ~300 MHz by default, remain the best overclockers though, as you can gain a 200+ MHz boost. The GT is already pushed close to its theoretical maximum clock frequency.
But to use that high core clock efficiently you need more memory bandwidth, and memory-wise the overclock was very good as well. Overall, a nice overclock that will push the framerates a little higher. Take a good look at the numbers in the benchmarks, as you'll see a very nice difference once we enable the overclock.
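To put that memory overclock in perspective, peak memory bandwidth scales linearly with the effective clock. A minimal sketch of the arithmetic, assuming the 6600 GT's 128-bit memory bus (a figure not stated in this article):

```python
# Rough peak-bandwidth arithmetic for the memory overclock above.
# Assumes a 128-bit bus; effective DDR clocks are from the article
# (1000 MHz stock, 1200 MHz overclocked).

def bandwidth_gb_s(effective_mhz, bus_bits=128):
    """Peak memory bandwidth in GB/s: MHz * (bits / 8 bytes) / 1000."""
    return effective_mhz * bus_bits / 8 / 1000

print(bandwidth_gb_s(1000))  # stock: 16.0 GB/s
print(bandwidth_gb_s(1200))  # overclocked: 19.2 GB/s
```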
One small reminder though, our overclocking results are never a guarantee for your results. Manufacturers' choices in components differ and so will the end-results. This however is a good indication of what is possible (or not).
We used the new WHQL 66.93 driver for this test. You can alter the default clock settings by using RivaTuner, which you can download here, or CoolBits, which you can download right here.
The Test System
Now we begin the benchmark portion of this article, but first let me show you our test system.
- Albatron PX915P/G Pro (PCI-Express 16x enabled)
- 1024 MB DDR400
- GeForce 6600/6600GT/6800GT/Radeon X600
- Pentium 4 class 3.6 GHz (Socket 775)
- Windows XP Professional
- DirectX 9.0c
- ForceWare 66.93 WHQL
- Radeon Catalyst 4.10 for ATI cards
- Latest reference chipset and AGP/ PCI-Express drivers
- RivaTuner 2.0 (tweak utility)
Benchmark Software Suite:
- Far Cry Guru3D config & timedemo
- Splinter Cell (Guru3D custom timedemo)
- Half-life 2 (Guru3D custom timedemo)
- 3DMark03
Remark
Image quality between ATI and NVIDIA cards really is about equal, yet driver optimizations have made it very hard to do a 100% 1:1 performance comparison. ATI enables trilinear optimizations in its X800 series by default, so we enabled that option for the GeForce Series 6 as well.
The anisotropic filtering optimizations that enable themselves in the ForceWare drivers when you select AF/AA settings have been disabled by us, unless noted otherwise, to make the benchmarks as objective as they can be for future comparisons.
All tests were run in 32-bit color at resolutions ranging from 800x600 up to the Godfather of all gaming resolutions: 1600x1200. We also ran all tests with 4x antialiasing and 8x anisotropic filtering where possible.
The numbers (FPS = Frames Per Second)