GeForce FX 5700 Ultra & 5950 Ultra Review



Page 12 - The Overclocking prelude

Performance & Overclocking
Before we dive into a large series of benchmarks we need to discuss overclocking. With most videocards we can use some easy tricks to boost overall performance a little. It's called overclocking the videocard: by increasing the frequency of the videocard's memory and GPU we make the videocard execute more clock cycles per second. It sounds hard, but it really can be done in less than a few minutes. I always tend to recommend that novice users and beginners not increase the frequency by more than 5-10% over the stock core and memory clocks. Example: if your card runs at 300 MHz, then I suggest you don't increase that frequency any higher than 330 MHz.
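That 5-10% guideline is simple percentage arithmetic. As a quick illustrative sketch (the function name here is ours, purely for illustration):

```python
def safe_overclock_range(stock_mhz, low_pct=5, high_pct=10):
    """Return the suggested (minimum, maximum) target clock in MHz,
    following the conservative 5-10% rule of thumb."""
    return (stock_mhz * (100 + low_pct) / 100.0,
            stock_mhz * (100 + high_pct) / 100.0)

# The 300 MHz example from the text: stay between 315 and 330 MHz.
print(safe_overclock_range(300))  # (315.0, 330.0)
```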

More advanced users often push that frequency way higher. Usually when the memory starts to show white dots ('snow') you should go down 10 MHz and leave it at that. The core can be somewhat different: usually when you are clocking too hard it'll start to show artifacts, empty polygons, or it will even freeze. I recommend that you back down at least 15 MHz from the moment you notice an artifact. Look carefully and observe well.

All in all .. do it at your own risk. Overclocking your card too far, or constantly running it at its limit, might damage your card, and that is not covered by your warranty.

You will benefit the most from overclocking with a product that is limited, or as you might call it, 'tuned down'. We know that this graphics core is often held back by its clock frequency or by bandwidth limitations, so by increasing the memory and core frequency we should be able to witness some higher performance results. A simple trick to get some more bang for your buck.

Both reference cards we tested today overclock pretty well; results will of course differ here and there, and likely a little from website to website. First off, the DDR1-based 256-bit GeForce FX 5950 Ultra. Its default settings are 475 MHz for both the core and memory (memory x2). The card successfully overclocked towards 525 MHz on the core and 1.02 GHz on its memory.

Then there is the GeForce FX 5700 Ultra; with its standard 128-bit, 128 MB DDR2 memory it defaults to, again, 475 MHz for the core and 450 MHz (x2) for its memory. The 5700 was a sweet overclocker, reaching 541 MHz on the core and again 1.02 GHz for the memory.
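The "(x2)" notation above is because DDR memory transfers data twice per clock, so the effective rate is double the physical clock. A tiny sketch of how the review's memory figures line up:

```python
def effective_ddr_mhz(physical_mhz):
    """DDR transfers on both clock edges, so the effective
    data rate is twice the physical memory clock."""
    return physical_mhz * 2

# GeForce FX 5700 Ultra stock memory: 450 MHz physical -> 900 MHz effective.
print(effective_ddr_mhz(450))  # 900
# Overclocked to 1.02 GHz effective implies a 510 MHz physical clock.
print(effective_ddr_mhz(510))  # 1020
```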

These settings have been used throughout our entire benchmark suite. That means that each card has been tested under overclocked conditions in twenty-eight individual tests without corruption or weirdness in the form of system hangs.

Both cards were consistent in all benchmarks, and that's 56 individual test runs.

Test system

All tests were made in 32-bit color at resolutions ranging from 800x600 pixels up to the Godfather of all gaming resolutions, 1600x1200, at several performance/quality settings.

Editorial note on Cheats and Optimizations

A nasty trend we have seen in the past months is that certain graphics chipset designers have been caught optimizing, and even outright cheating, in their drivers for specific benchmarks. We want to explain the difference between the two and what we have been doing to keep this grey area of cheating out of our results.

An optimization by itself is honestly nothing bad. Take a racing car: at default it will race at a certain speed, but if we tune it, revise it and, hey, even hit that Nitro button, the performance of that racing car will increase heaps. Your operating system has been optimized for your Pentium or AMD processor to take full advantage of the CPU, and the result is a faster computer.

Now we look at the graphics chipset; the two examples above apply to graphics processors as well. Games can be optimized for the graphics chipset to gain image quality or, performance-wise, frame rate. We make one very important side note though: an optimization cannot and may not come at the cost of image quality, as that would be cheating.

Cheating, by definition is wrong on any level. NVIDIA has been caught red-handed clipping in specific synthetic benchmarks. A downright shameful act ...

After huge criticism from the public and a lot of mud-throwing between FutureMark and NVIDIA, NVIDIA has taken steps to remove the cheats from their drivers, which we highly recommend they do, as otherwise it would cost them their reputation and good name at the consumer level. If consumers cannot trust a product, they will not buy it. It's that simple.

However, despite the many rumors you've heard and flames you've read, NVIDIA's products are far from bad. In fact I still believe the FX line-up is a very good series, yet being caught cheating cast a very negative spiral/reputation around their product line-up, and I predict that the cheats have inflicted serious damage on sales of the entire GeForce FX series.

That being said, it's reasonably easy to avert specific cheats and unprofessional optimizations. First things first: we stepped away from synthetic benchmarks like 3DMark 03, as it was heavily 'abused'.

In today's review we will use several benchmarks based on games.

For four of these games we are making use of custom timedemos. Neither NVIDIA nor ATI knows which timedemos we are using; these are non-public tests which were recorded for us alone. We are not going to make them public either, as they are and will remain internal material. Therefore the chipset manufacturer does not get the chance to optimize for that specific benchmark timedemo.

The downside: a manufacturer can still make game-engine-specific cheats that, for example, lower image quality, so this approach is not 100% fool-proof. There is one piece of software that we know of with this issue, Unreal Tournament 2003, as it does not allow itself to be rendered at specific filtering settings while ATI's products do, and although with the naked eye you could not tell the difference, it most definitely is a bad trend.

As you can see, we will do our very best, now and in the future, to keep a close eye on optimizations and cheats; we need to be able to show you objective results. In the end, however, this should be the responsibility of the chipset designer: if that entity fails to take it, it will lose consumers' trust and dig its own grave.

That being said, let's get started with the benchmarks.
