Page 8 - Media Playback & Overclocking
Media Playback

A much-forgotten item in reviews nowadays is, of course, multimedia/movie playback. In an old review, I believe it was the GeForce3, I stated that NVIDIA should take a good look at ATI when it came to dual-monitor setups and movie playback through the S-Video connector. Well, NVIDIA has done that and more ever since the GeForce4.
Seriously, movie playback over the video output has improved heaps over previous generations. I could start stories about what changed and why, the new driver 'tweaks', bob-and-weave deinterlacing and so on, but basically that would all be way too much blah blah, and this review is already a tad longer than ours normally are (seriously, you're not even halfway through the review yet; did you grab a cup of coffee already?).
Images taken with the GeForce FX from the movie Red Dragon. JPEG compression makes them a bit fuzzy; these are actually DivX screenshots.
Basically, all you need to know is that playback is now truly at a level that compares equally with ATI's Radeon playback. DVD playback quality was good, and performance-wise it is handled at the videocard level, as CPU utilization during playback is very low.
Performance & Overclocking
Before we dive into an extensive series of tests, we need to discuss overclocking. With most videocards we can do some easy tricks to boost overall performance a little. It's called overclocking the videocard: by increasing the frequency of the videocard's memory and GPU, we make it run more clock cycles per second. It sounds hard, but it really can be done in less than a few minutes. I always tend to recommend that novice users and beginners not increase the core and memory clocks any higher than 5-10% above stock. Example: if your card runs at 300 MHz, then I suggest you don't increase the frequency any higher than 330 MHz.
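That rule of thumb is simple arithmetic; here is a minimal sketch of it (the helper function is purely illustrative, not any real tuning tool):

```python
def safe_overclock_range(base_mhz, pct_min=5, pct_max=10):
    """Suggested frequency window for a cautious overclock: 5-10% above stock."""
    return (base_mhz * (100 + pct_min) // 100,
            base_mhz * (100 + pct_max) // 100)

# A 300 MHz core, as in the example above:
low, high = safe_overclock_range(300)
print(low, high)  # 315 330
```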
More advanced users often push the frequency way higher. Usually when the memory starts to show white dots ('snow'), you should back down 10 MHz and leave it at that. The core can be somewhat different: when you overclock it too hard, it'll start to show artifacts such as empty polygons, or it will even freeze. I recommend backing down at least 15 MHz from the point where you first notice an artifact. Look carefully and observe well.
All in all, do it at your own risk. Overclocking your card too far, or running it constantly at its limit, might damage it, and such damage is not covered by your warranty.
You will benefit from overclocking the most with a product that is limited, or, as you might call it, 'tuned down'. We know that this graphics core is often held back by clock frequency or bandwidth limitations; therefore, by increasing the memory and core frequencies we should be able to witness some higher performance results. A simple trick to get some more bang for your buck.
The GeForce FX 5950 UV from Albatron, with its standard 256-bit, 256 MB DDR memory, runs at a default 475 MHz for the core and 950 MHz (475 MHz x2, DDR) for its memory. The 5950 UV was a reasonable overclocker, reaching 515 MHz on the core and 1.03 GHz on its memory.
Overclocking a card like the 5950 Ultra is a pretty hard task, as NVIDIA has built in a safety feature: if the core reaches a certain temperature threshold, the card automatically lowers its clock speed to prevent the GPU from overheating. It's a nice feature, yet it makes overclocking a tad harder to do.
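The safeguard amounts to a simple feedback rule: too hot means clock down, cooled off means clock back up. A rough sketch of that logic follows; the threshold and step values are made-up illustrations, not NVIDIA's actual numbers:

```python
def throttle_clock(temp_c, default_mhz=475, threshold_c=95, step_mhz=50):
    # Hypothetical values for illustration: if the core runs at or above
    # the temperature threshold, drop the clock by one step; otherwise
    # run at the default (or overclocked) frequency.
    if temp_c >= threshold_c:
        return default_mhz - step_mhz
    return default_mhz

print(throttle_clock(100))  # 425: over threshold, clock lowered
print(throttle_clock(80))   # 475: cool enough, full speed
```

This is also why a heavily overclocked card can silently lose its gains mid-benchmark: the hotter it runs, the more often this rule kicks in.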
These settings were used throughout our entire benchmark suite. That means the card was tested in its overclocked condition across multiple individual tests without corruption or weirdness in the form of system hangs.
Test system
Benchmark Software Suite:
* For four of these games we make use of custom time demos. Neither NVIDIA nor ATI knows which time demos we are using; these are non-public tests that were recorded for us alone. We are not going to make them public either, as they are and will remain internal material. Therefore the chipset manufacturer will not have the chance to optimize for that specific benchmark time demo. We will do our very best, now and in the future, to keep a close eye on optimizations and cheats, as we need to be able to show you objective results. In the end, however, this should be the responsibility of the chipset designer; if that entity fails to live up to it, it'll lose consumers' trust and dig its own grave.