An introduction to FCAT benchmarking



Current Benchmark Methods

So before we begin, we need to walk through some of the current benchmark methods.

The Average Framerate

The traditional measurement for a game running on a graphics card is FPS: the number of frames rendered during each second that passes. This number relates directly to the performance your graphics solution is capable of.
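To make that concrete, here is a minimal sketch (with made-up numbers) of what the calculation boils down to:

# Minimal sketch: average FPS is simply frames rendered divided by elapsed seconds.
# The numbers are hypothetical, purely for illustration.
frames_rendered = 3600      # frames counted during a benchmark run
elapsed_seconds = 60.0      # length of the recording

average_fps = frames_rendered / elapsed_seconds
print(f"Average FPS: {average_fps:.1f}")    # -> Average FPS: 60.0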

Generally speaking, a framerate of 35 or higher is considered acceptable. The measurement works because it boils down to something we can all understand: a single, simple number. The 3DMark series of benchmarks isn't popular because of its immensely cool graphics; on the contrary, its success is based on the fact that it outputs one simple, easy-to-understand number. The same goes for FPS measurements: up to 30 FPS is bad, between 30 and 45 is okay, and 45 FPS and higher is just great. The good thing about this measurement (as opposed to, say, 3DMark) is that the number maps directly onto your sense of how fast a graphics card can render.

But FPS can be interpreted in many ways, and the average FPS doesn't necessarily relate to what you actually see on screen once anomalies and stutters come into play. Here at Guru3D we show you the average FPS, but people are also interested in the minimum and maximum FPS. Again, though, we need the vast majority of people to understand the numbers shown in our benchmark sessions, hence we stuck to average framerates. It remains the leading measurement for performance.
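As a quick illustration of why the average alone can hide things, here is a small sketch with hypothetical per-second FPS samples; the average still looks reasonable while the minimum exposes a dip:

# Hypothetical per-second FPS readings from a benchmark run.
fps_samples = [62, 60, 58, 61, 24, 59, 63, 60]

average_fps = sum(fps_samples) / len(fps_samples)
minimum_fps = min(fps_samples)
maximum_fps = max(fps_samples)

# The single 24 FPS dip still leaves a decent-looking average,
# yet it is exactly the kind of moment you would notice while playing.
print(f"avg {average_fps:.1f} / min {minimum_fps} / max {maximum_fps}")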

Time Based Latency Measurements

Lately a new measurement has been introduced: latency measurement. Basically, it is the inverse of FPS.

  • FPS mostly measures performance, the number of frames rendered per passing second.
  • Frametime mostly measures and exposes anomalies - here we look at how long it takes to render one frame. Measure that chronologically and you can see anomalies like peaks and dips in a plotted chart, indicating something could be off. 

So when you record for a number of seconds while tracking each frame, output that in a graph and then zoom in, you can see the turnaround time it takes to render each individual frame. Basically, the time it takes to render one frame can be monitored, tagged and bagged. It's commonly described as latency; one frame can take, say, 17ms. A colleague website discovered a while ago that there were latency discrepancies between NVIDIA and AMD graphics cards, with the results being worse for AMD, particularly for multi-GPU solutions.
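To put a number on that 17ms example: it works out to roughly 1000 / 17 ≈ 59 FPS, and a single slow frame stands out immediately once you list the times chronologically. A small sketch with hypothetical values:

# Hypothetical per-frame render times in milliseconds, in chronological order.
# The 70 ms entry is the kind of spike a frametime plot exposes, even though
# the average FPS over this window would still look reasonable.
frametimes_ms = [17, 16, 18, 17, 70, 17, 16, 17]

for index, ms in enumerate(frametimes_ms):
    equivalent_fps = 1000.0 / ms
    print(f"frame {index}: {ms} ms (~{equivalent_fps:.0f} FPS)")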

We followed this development closely and made our own measurements, and starting with Catalyst 13.2 Beta we can confirm that AMD has dramatically reduced the latency issues for single-GPU graphics cards. NVIDIA products still look better in this test, but mind my wording... the measurements look better. I find this test to be somewhat subjective, as every now and then there will remain little latency spikes that you can output in a chart but might never actually observe yourself.

Quick example:

Hitman Absolution

[Frametime chart: GeForce GTX 650 Ti Boost]

[Frametime chart: Radeon HD 7790]

So above are two example FRAPS frametime recordings. You'll agree with me that, looking at the Radeon HD 7790, you'd expect a downright horrible game experience. The problem with FRAPS is this: it can show things that are not relevant - most of these weird spikes do not relate to what you see on-screen. The interesting thing is that once you actually play and watch the games being rendered by, say, AMD, you (for the most part) might not notice the latency spikes at all. So the big question with this measurement is: how relevant and objective are the above FRAPS results?

Is Measuring Frametime With FRAPS Wrong?

Yes and no - it will remain an ongoing, difficult and endless discussion. Everything FRAPS records is relevant, and FRAPS cannot record what it cannot measure, but herein lies a deeper problem... what it measures doesn't always show up in what you see on screen. So my safest and most diplomatic answer is this: it depends on what you are specifically measuring.

Traditionally we measure FPS with software tools like FRAPS or AfterBurner, which poll the game's render output and either display the FPS in real-time on your OSD or log the results to a file that we can then pull into a spreadsheet and draw our data from. Interestingly enough, software-based polling of the game engine has traditionally been a good way to measure FPS. But FRAPS measures at the game-engine level and, very simply put, a lot of other stuff happens before the rendered information reaches your graphics card's monitor output.
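As a side note on that logging workflow: if you pull such a log into a script rather than a spreadsheet, the processing step could look roughly like the sketch below. The column layout is assumed here (a frame index plus a cumulative timestamp in milliseconds, which is roughly what a FRAPS-style frametimes file contains); check your own log's header before reusing it, and the "twice the median" spike rule is just an arbitrary illustration, not an established threshold.

import csv
import io
import statistics

# Hypothetical FRAPS-style frametimes log: frame index plus cumulative time in ms.
# Layout is assumed for illustration; real logs may differ.
fake_log = io.StringIO(
    "Frame,Time (ms)\n"
    "1,16.7\n"
    "2,33.4\n"
    "3,50.1\n"
    "4,120.0\n"
    "5,136.7\n"
    "6,153.4\n"
)

rows = list(csv.DictReader(fake_log))
timestamps = [float(row["Time (ms)"]) for row in rows]

# Per-frame latency is the difference between consecutive timestamps.
frametimes = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]

# Crude heuristic: flag frames that take more than twice the median frametime.
median_ms = statistics.median(frametimes)
spikes = [ms for ms in frametimes if ms > 2 * median_ms]
print(f"{len(spikes)} of {len(frametimes)} frames exceed 2x the median ({median_ms:.1f} ms)")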

FRAPS can measure a near-perfect experience plotted in a chart, yet on screen you can still see an anomaly every now and then that cannot be traced back to that chart -- or vice versa, as shown in the Hitman Absolution Radeon HD 7790 chart above. And therein lies the problem we'd like to tackle. The better way to measure frametime (or what I like to call frame experience) would be at the graphics card output. Ask yourself, where do the rendered frames end up? That's right, your monitor. Thus if we sit one step in front of that, measuring at the DVI, HDMI or DP connection would be ideal. And that is what FCAT can help us with: we place a device in-between the graphics card output and the monitor input. Wouldn't that be a better way to measure FPS? Well, yes, I do think this method is a hint more reliable.
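For those wondering what is then done with the captured output, here is a highly simplified conceptual sketch of the idea, not NVIDIA's actual extractor: each rendered frame is tagged with a colored overlay bar before it leaves the card, the DVI output is captured, and counting how many scanlines each color occupies tells you how long each rendered frame was actually on screen. The colors and counts below are hypothetical.

from itertools import groupby

# Conceptual sketch only, not NVIDIA's actual FCAT extractor.
# Each rendered frame is tagged with a colored overlay bar; the captured video
# then shows how many scanlines of each refresh a given frame occupied.
# The scanline colors below are hypothetical.
captured_scanline_colors = (
    ["lime"] * 400 +     # frame N was on screen for 400 scanlines of this refresh
    ["red"] * 650 +      # frame N+1 got 650 scanlines
    ["blue"] * 30        # frame N+2 barely appeared: a candidate "runt" frame
)

for color, scanlines in groupby(captured_scanline_colors):
    count = len(list(scanlines))
    print(f"overlay color {color}: {count} scanlines on screen")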
