An interesting story on news.com today. I'll do the blunt copy/paste here, as it's really interesting to see this article in its entire original form. Advanced Micro Devices' ATI graphics chip unit doesn't want to build "huge" chips like rival Nvidia, an executive says.
But an Nvidia exec says smaller isn't always better or more efficient. Such statements will help define how the two chip giants do battle at the high end of the graphics chip market in the coming years.
One of the largest graphics chips yet will be Nvidia's upcoming high-end GTX 280. This is the kind of chip that high-end gaming enthusiasts crave. But great performance often means a large transistor count. And the GTX 280 is expected to have both.
AMD, of course, also intends to deliver extreme graphics technology with its upcoming X2, a follow-on to the current 3870 X2 series. And AMD wants to be clear: its strategy is fundamentally different from Nvidia's.
"We took two chips and put it on one board (X2). By doing that we have a smaller chip that is much more power efficient," said Matt Skynner, vice president of marketing for the graphics products group at AMD.
"We believe this is a much stronger strategy than going for a huge, monolithic chip that is very expensive and eats a lot of power and really can only be used for a small portion of the market," he said. "Scaling that large chip down into the performance segment doesn't make sense--because of the power and because of the size."
Skynner said that AMD tries to design GPUs (graphics processing units) for the mainstream segment of the market, then ratchets up performance by adding GPUs rather than designing one large, very-high-performance chip.
Nvidia's "strategy is to design for the highest performance at all cost. And we believe designing for the sweet spot and then leveraging for the extreme enthusiast market with multiple GPUs is the preferred approach," Skynner said.
This applies to memory too. AMD thinks support for technologies like GDDR5 memory is another way to deliver good performance at a reasonable cost. "You don't need a huge chip with a huge data path to get the bandwidth. You can utilize a technology like GDDR5 to get that bandwidth," Skynner said.
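Skynner's point can be sketched with back-of-the-envelope arithmetic: peak memory bandwidth is roughly the bus width (in bytes) times the effective transfer rate, so a narrower bus running faster GDDR5 can approach the bandwidth of a much wider, slower bus. The bus widths and transfer rates below are illustrative assumptions, not official specifications for any particular card.

```python
# Illustrative sketch: peak bandwidth = bus width (bytes) x effective transfer rate.
# The widths and rates used here are hypothetical examples, not real GPU specs.

def peak_bandwidth_gbps(bus_width_bits: int, transfer_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and transfer rate."""
    return (bus_width_bits / 8) * transfer_rate_gtps

# A wide 512-bit bus paired with slower (GDDR3-class) memory...
wide_slow = peak_bandwidth_gbps(512, 2.2)
# ...versus a narrower 256-bit bus paired with faster (GDDR5-class) memory.
narrow_fast = peak_bandwidth_gbps(256, 3.6)

print(f"512-bit bus @ 2.2 GT/s: {wide_slow:.1f} GB/s")
print(f"256-bit bus @ 3.6 GT/s: {narrow_fast:.1f} GB/s")
```

The narrow-but-fast configuration lands in the same bandwidth neighborhood as the wide-but-slow one, which is the trade Skynner is describing: spend on memory speed instead of die area for a wide data path.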
Nvidia tends to favor very fast single-chip solutions, and it has a different take on why it chooses to develop big, fast chips.
"If you take two chips and put them together, you then have to add a bridge chip that allows the two chips to talk to each other...And you can't gang the memory together," said Ujesh Desai, general manager for GeForce products at Nvidia.
"So when you add it all up, you now have the power of two GPUs, the power of the bridge chip, and the power that all of that additional memory consumes. That's why it's too simplistic of an argument to say that two smaller chips is always more efficient."
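Desai's accounting can be made concrete with a toy power budget: on a dual-chip board the bridge chip and the per-GPU memory pools draw power on top of the two GPUs themselves. The wattages below are purely hypothetical placeholders chosen to show how the components add up, not measured figures for any real product.

```python
# Toy power-budget comparison with purely hypothetical wattages.

single_big_chip = {
    "gpu": 180,      # one large monolithic GPU
    "memory": 30,    # one shared memory pool
}

dual_chip_board = {
    "gpu_a": 110,    # first of two smaller GPUs
    "gpu_b": 110,    # second of two smaller GPUs
    "bridge": 5,     # bridge chip that lets the two GPUs talk to each other
    "memory": 60,    # memory is duplicated per GPU, since it can't be ganged
}

print("single-chip board:", sum(single_big_chip.values()), "W")
print("dual-chip board:  ", sum(dual_chip_board.values()), "W")
```

With these placeholder numbers the dual-chip board draws more in total even though each individual GPU is smaller, which is the point Desai is making: two smaller chips are not automatically more efficient once the bridge and duplicated memory are counted.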
Desai takes this argument a bit further. "They don't have the money to invest in high-end GPUs anymore. At the high end, there is no prize for second place. If you're going to invest a half-billion dollars--which is what it takes to develop a new enthusiast-level GPU--you have to know you're going to win. You either do it to win, or you don't invest the money."