Over the past half year a new type of measurement has been introduced: latency, or frametime, measurements. Basically it is the inverse of FPS.
FPS mostly measures performance: the number of frames rendered per second.
Frametime measurements expose anomalies: here we look at how long it takes to render each individual frame. Record that chronologically and plot it in a chart, and you can see anomalies like peaks and dips that indicate something could be off. It is a far more precise indicator.
So when you record a number of seconds of gameplay while tracking each rendered frame, output that in a graph and zoom in, you can see the turnaround time it takes to render each frame. Basically, the time it takes to render one frame can be monitored, tagged and bagged. It is commonly described as latency; one frame can take, say, 17 ms. A colleague website discovered a while ago that there were latency discrepancies between NVIDIA and AMD graphics cards in multi-GPU solutions, with the worse results on AMD's side. We followed this development closely.
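To make the relationship between FPS and frametime concrete, here is a minimal sketch; the numbers are illustrative, not measured:

```python
# Average frametime is simply the inverse of FPS:
# 60 FPS works out to roughly 16.7 ms per frame.
def avg_frametime_ms(fps: float) -> float:
    """Average time to render one frame, in milliseconds."""
    return 1000.0 / fps

print(avg_frametime_ms(60))  # roughly 16.67 ms
print(avg_frametime_ms(25))  # 40 ms
```

Keep in mind that this only holds for averages: a benchmark can report a healthy average FPS while individual frames still take far longer than the average, which is exactly what frametime plots reveal.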
So What Is The Problem?
First off, microstuttering and related anomalies apply only to multi-GPU setups. Single Radeon HD based cards did not show what you are about to see. Let me show you (everything below is based on the OLD drivers, which DO NOT have frame pacing corrected):
Above: Hitman Absolution, 30 seconds measured with FCAT using, I believe, Catalyst 13.4. Notice how incredibly clean that looks? You are looking at latency; lower is better, and anything above roughly 40 ms would be considered slowish or stuttering. This really is the perfect picture you want to see.
FCAT Multi-GPU Results
There was a second reason why FCAT frametime measurements took the technology press by storm: they could actually show significant anomalies. See, FRAPS had shown weird anomalies with AMD Crossfire configurations that were not in line with what you would expect to see. But this is what happened when we fired up FCAT again with a Radeon HD 6990 (dual-GPU).
Uh-oh! Above you see the Radeon HD 6990 (multi-GPU) at 2560x1440. Basically, with each rendered frame the latency alternates: it drops for one frame, then rises for the next. So basically high-low-high-low-high-low-high-low.
Let me show you a few lines of the data:
Data point 0 - 13.063
Data point 1 - 20.372
Data point 2 - 9.369
Data point 3 - 20.372
Data point 4 - 9.809
Data point 5 - 21.273
Data point 6 - 8.750
Data point 7 - 24.234
Data point 8 - 8.896
Data point 9 - 24.640
And so on; the data point is the frame number and the result is displayed in milliseconds.
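Using the ten frametimes listed above, a few lines of Python make the odd/even split explicit (assuming, as AFR would suggest, that consecutive frames come from alternating GPUs):

```python
# Frametimes in ms for data points 0-9 from the HD 6990 run above.
frametimes = [13.063, 20.372, 9.369, 20.372, 9.809,
              21.273, 8.750, 24.234, 8.896, 24.640]

even = frametimes[0::2]  # frames 0, 2, 4, ... (presumably one GPU)
odd = frametimes[1::2]   # frames 1, 3, 5, ... (presumably the other GPU)

print(f"even frames average: {sum(even) / len(even):.2f} ms")  # ~9.98 ms
print(f"odd frames average:  {sum(odd) / len(odd):.2f} ms")    # ~22.18 ms
```

In other words, every other frame completes in under 10 ms on average while its neighbor takes over 22 ms: that is the high-low-high-low pattern you see in the chart.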
The odd/even results can be described as microstutter: one frame is rendered a lot faster than the other. But with the data-set at hand, we can now zoom in as well. Below, 100 rendered frames. The scale to the left: 40,000 microseconds = 40 ms.
Fact remains that the turnaround time for a frame is only a few milliseconds, which makes this very hard to see and observe on the monitor; that is why a lot of people are never bothered by it. If the FPS is high enough, it really isn't an issue. I mean, look at the chart above: not one frame passes 45 ms. But you have to wonder, what the heck is happening there? Well, it goes like this:
Basically, multi-GPU solutions apply AFR (Alternate Frame Rendering): two GPUs alternate in processing (rendering) frames. AMD has each GPU render as fast as possible; NVIDIA keeps things smooth by syncing, and perhaps even delaying, frames. This last technique is called frame pacing.
The image above shows how that would be rendered (and pardon my l33t drawing skillzz). That would explain the phenomenon shown above. Let's quickly look at what NVIDIA does. Have a look below.
By syncing, NVIDIA is likely delaying a frame through a calculated offset to compensate for the effect you are seeing with AMD. As a result, above is a chart made with a GeForce GTX 690: same title, same benchmark and same resolution. Really fine scores with very tiny latency changes, showing very smooth results. But allow me to zoom in at the exact same point as we did with the HD 6990, again 100 frames, this time with the axis scaled and fixed:
Above, the GeForce GTX 690 now shows the very same frame sequence at the same spot where we just showed the AMD result; same resolution, well... same everything. That really is a massive difference. But with data-sets we can take it a step further and overlay AMD's old driver against NVIDIA's frame pacing:
And there it is: 100 frames at the same measuring point. Now you can see microstuttering measured and output in a chart. And remember, FCAT displays in a chart what you actually see on screen; keep that in mind.
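The frame-pacing idea described above can be sketched in a toy simulation. This is an assumption about the general principle (hold back frames that finish early so presentation intervals stay near a recent average), not NVIDIA's actual driver algorithm:

```python
from statistics import pstdev

def pace(frametimes, window=4):
    """Toy frame pacer: never present a frame earlier than the
    average interval of the last few presented frames."""
    paced = []
    for t in frametimes:
        recent = paced[-window:] or [t]
        target = sum(recent) / len(recent)
        paced.append(max(t, target))  # delay fast frames, never speed up slow ones
    return paced

# Alternating fast/slow intervals similar to the HD 6990 data above.
raw = [13.1, 20.4, 9.4, 20.4, 9.8, 21.3, 8.8, 24.2, 8.9, 24.6]
print(f"raw spread:   {pstdev(raw):.2f} ms")
print(f"paced spread: {pstdev(pace(raw)):.2f} ms")
```

The trade-off is visible in the code: smoothness is bought by adding a small delay to the fastest frames, which is presumably why AMD initially favored raw rendering speed instead.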
With this problem exposed, the AMD driver team was forced to work on the issue. Now that it is out in the open, this information would definitely hurt sales, as the competition does not have this issue.
With the release of the Catalyst 13.8 Beta comes the first public implementation of AMD's frame-pacing algorithm. In this article we will show you what tremendous steps AMD has been able to make in order to solve the problem at hand.