NVIDIA GeForce 8800 Ultra review
Now then, the generic "core" clock for the GTX is 575 MHz; the Ultra sits at 612 MHz. Agreed, that seems a little low. But one of the most important parts of the GPU is its array of stream processors, which are clocked independently from the rest of the GPU. On the GTX they were clocked at 1350 MHz; on the Ultra we see a 1500 MHz stream processor clock, or call it the shader domain clock frequency.
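To put those clocks in perspective, here's a back-of-the-envelope sketch. The 128 stream processors and the 3 FLOPs per SP per clock (for the MADD + MUL pair) are the commonly quoted G80 peak figures, not numbers stated above, so treat them as assumptions:

```python
# Rough peak shader throughput sketch for G80-based cards.
# Assumed: 128 stream processors, 3 FLOPs per SP per clock (MADD + MUL).
STREAM_PROCESSORS = 128

def peak_gflops(shader_clock_mhz, flops_per_sp_per_clock=3):
    # peak = SPs x shader clock (MHz) x FLOPs per SP per clock, in GFLOPS
    return STREAM_PROCESSORS * shader_clock_mhz * flops_per_sp_per_clock / 1000

gtx = peak_gflops(1350)    # 8800 GTX shader domain clock
ultra = peak_gflops(1500)  # 8800 Ultra shader domain clock
print(f"GTX: {gtx:.0f} GFLOPS, Ultra: {ultra:.0f} GFLOPS, "
      f"+{(ultra / gtx - 1) * 100:.0f}% shader throughput")
```

Which is exactly why the shader domain clock matters more than that modest-looking core clock bump.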
Size then: just like the GeForce 8800 GTX, the Ultra is 27 cm long; you could say a well hung piece of hardware. It's been said and explained to me by quite a number of female counterparts (render targets, as I like to call them) that size does matter (Ed: So many jokes, so little time...).
Due to the size, note that the power connectors are routed off the top edge of the graphics card instead of the end, so no extra space is required at the end of the card for power cabling. But before purchasing, please check whether you can actually fit a 27 cm piece of hardware in that chassis.
Okay, so this is really all you need to know for now. It's a higher-clocked respin of the same product, with the same power consumption and a new cooler. The result: 10-15% more performance.
Some generic facts:
- All NVIDIA GeForce 8800 GTX / Ultra and GeForce 8800 GTS-based graphics cards are HDCP capable.
- The GeForce 8 Series GPUs are not only the first shipping DirectX 10 GPUs, but they are also the reference GPUs for DirectX 10 API development and certification and are 100% DirectX 9 compatible.
- GeForce 8800 GPUs deliver full support for Shader Model 4.0.
- All graphics cards are being built by NVIDIA's contract manufacturer.
- All GeForce 8800 GPUs support NVIDIA SLI technology.
- The NVIDIA GeForce 8800 GTX can output 24 pixels per clock from its ROPs; the GeForce 8800 GTS can output 20 pixels per clock.
- GeForce 8800 GTX requires a minimum 450W or greater system power supply (with 12V current rating of 30A).
- GeForce 8800 GTS requires a minimum 400W or greater system power supply (with 12V current rating of 26A).
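For the curious, the ROP and power-supply numbers above can be sanity-checked with a little arithmetic. One assumption on my part below: the 8800 GTS stock core clock of 500 MHz, which isn't listed in the facts above:

```python
# Quick sanity checks on the spec-sheet numbers, as a sketch.
def fillrate_gpix(rops_pixels_per_clock, core_mhz):
    # pixel fillrate = pixels per clock x core clock (MHz) -> Gpixels/s
    return rops_pixels_per_clock * core_mhz / 1000

print(f"8800 GTX fillrate: {fillrate_gpix(24, 575):.1f} Gpixel/s")
print(f"8800 GTS fillrate: {fillrate_gpix(20, 500):.1f} Gpixel/s")

# The quoted 12V current ratings translate into rail wattage like so:
print(f"GTX 12V rail budget: {12 * 30} W, GTS 12V rail budget: {12 * 26} W")
```

So of the recommended 450W GTX supply, 360W is earmarked for the 12V rail alone; the rest covers the other rails and headroom.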
In the photo shoot we'll have a closer look at all three products and tell you a little about connectivity and also that memory mystery.
The Unified state of DirectX 10
We just had a brief chat about shader operations and their importance. What you also need to understand is that the microarchitecture of the new DX10 GPUs (Graphics Processing Units) has changed significantly.
Despite the fact that graphics cards are all about programmability, and thus shaders, these days, you'll notice that in today's product we'll not be talking about pixel and vertex shaders much anymore. With the move to DirectX 10 we get a new approach called unified shader technology, and graphics hardware will adapt to that model; it's very promising. DirectX 10 is scheduled to ship at the beginning of next year with the first public release of Windows Vista. It will definitely change the way software developers make games for Windows, and it will very likely benefit us gamers in terms of better gaming visuals and better overall performance.
The thing is, with DirectX 10 Microsoft has removed what we call the fixed-function pipeline completely (what you guys know as, for example, 16 pixel pipelines), making everything programmable. How does that relate to the new architecture? Have a look.
The new architecture is all about programmability, and thus shaders, as we explained on the previous pages.
So DirectX 10 and its related new hardware products offer a good number of improvements. So many, actually, that they would require an article of their own. And since we are here to focus on NVIDIA's two new products, we'll take a shortcut at this stage of the article. In our Guru3D forums I have often seen the presumption that DX10 is only a small improvement over DX9 Shader Model 3.0. Ehm, yes and no. I say it's a huge step, as a lot of constraints are removed for software programmers. The new model is simpler, easier to adapt to and allows heaps of programmability, which in the end means a stack of new features and eye candy in your games.
Whilst I will not go into detail about the big differences, I would simply ask you to look at the chart below and draw your own conclusions. DX10 is definitely a large improvement, yet look at it as a good step up.
Here you can see how DirectX's Shader Models have evolved ever since DX8 Shader Model 1.
So I think what you need to understand is that DirectX 10 doesn't bring a colossal, fundamental change in capabilities; rather, it adds expanded and new features to DirectX that will enable game developers to optimize their games more thoroughly and thus deliver incrementally better visuals and better frame rates, which obviously is great.
How fast will it be adopted? Well, Microsoft is hyping the DX10 API as God's gift to the gaming universe, yet what they forget to mention is that any developer who supports DX10 will have to keep supporting DirectX 9 as well, and thus maintain two versions of the rendering code in their engine, as DirectX 10 is only available on Windows Vista and not XP. Which is such a bitch, as everybody refuses to buy Vista.
So you can understand that, from a game developer's point of view, supporting both standards brings a considerable amount of additional workload.
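To picture that workload, here's a toy sketch (all class and function names are illustrative, not any real engine's API): the engine has to keep two complete render paths alive and pick one at startup, and both have to be written, tested and shipped.

```python
# Illustrative only: a toy engine that maintains both render paths,
# choosing DX10 when the platform (read: Windows Vista) supports it.
class DX9Renderer:
    def draw_frame(self):
        return "rendered via DX9 Shader Model 3.0 path"

class DX10Renderer:
    def draw_frame(self):
        return "rendered via DX10 unified-shader SM4.0 path"

def pick_renderer(vista_with_dx10: bool):
    # Either branch may run on a customer's machine, so neither
    # code path can be dropped -- that's the double maintenance burden.
    return DX10Renderer() if vista_with_dx10 else DX9Renderer()

print(pick_renderer(False).draw_frame())
print(pick_renderer(True).draw_frame())
```

Every engine feature now lands twice: once per backend, with two sets of shaders and two sets of bugs.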
Regardless of the immense marketing hype, DirectX 10 just is not extraordinarily different from DirectX 9. You'll mainly see good performance benefits from more efficiency in the GPU, rather than vastly prominent visual differences, with obviously a good number of exceptions here and there. But hey, DirectX is evolving into something better, more efficient and speedier, which we need to create better visuals.
Last week we arrived in Sin City not only to cover CES, but because there was something else going on as well. In Las Vegas, NVIDIA had organized a briefing for a select group of the press. From Europe perhaps ten to fifteen people were invited for this somewhat privileged preview -- the topic, a technical overview of project Fermi. Fermi is of course the family name of the latest generation of GPUs from NVIDIA. The first chip derived from Fermi will be called the GF100 GPU, which will likely be used on what we think will be called the GeForce 360 and GeForce 380. Join us in a nice technology preview.
NVIDIA GeForce 3D Vision review
In this article we will test and review the NVIDIA GeForce 3D Vision stereo kit. NVIDIA teamed up with Samsung to optionally bundle 120 Hz LCD monitors with their all-new 3D stereo shutter glasses technology. NVIDIA, on their end, got driver support up and going to a state where it's really good. Next to that, they redesigned the approach to the overall gaming experience: a set of shutter glasses that is wireless and rechargeable, games supported in the new drivers that automatically kick into 3D mode and, next to that, NVIDIA really wanted a cool-looking kit.
NVIDIA GF9300 (ECS GF9300TA) mainboard review
A review of the ECS GF9300T-A motherboard. Today NVIDIA is introducing its more budget-conscious mainboard chipsets: the GF9300 and GF9400 integrated-graphics chipset motherboard products.
NVIDIA GeForce 8800 Ultra review
Today is the day that NVIDIA is launching its GeForce 8800 Ultra. Now, NVIDIA tried to keep this product as secret as can be... why? Two reasons: first, to prevent technical specifications leaking onto the web; secondly, obviously, to be able to change specs at the last minute. See, ATI is releasing their R600 graphics card soon, and the Ultra is the product that NVIDIA prepared to counteract it in the market, an allergic reaction to the R600 so to speak.