Core i7 975 review
Posted by Hilbert Hagedoorn on: 06/02/2009 01:00 PM
QuickPath Interconnect (QPI)
The first Core i7s sport both the memory controller and system I/O integrated onto the CPU die, which eliminates the Intel Front Side Bus (FSB) altogether. In place of the FSB, one or more high-speed, point-to-point buses called QuickPath Interconnect (QPI) are used, formerly known as the Common System Interface (CSI). QPI features higher bandwidth than the traditional FSB and is better suited to system scaling. We'll quickly close one eye and point you towards AMD: AMD's HyperTransport links are somewhat similar to what Intel is doing today, namely high-speed point-to-point connectivity between processors and other components. Intel was tied to a front side bus that had started to interfere with performance. The QPI architecture allows Intel to connect triple- or even quad-channel memory directly to the processor's integrated memory controller.
Intel has also added PCI Express links directly into the CPU die. This should deliver considerably more bandwidth for high-performance graphics cards and reduce bottleneck issues with other system components.
So you're asking how that works in relation to, say, overclocking. Well, the base clock is in fact still there, yet it functions in a different way. Imagine a single-pumped FSB at 133 MHz with a flexible multiplier applied to it, say with a range of 12 to 25. Depending on the workload, the processor will dynamically alter that multiplier. So in an idle state you'd see an effective 12 x 133 MHz = 1596 MHz processor clock frequency. This is how Core i7 achieves its clock.
At full load, a Core i7 975 processor jumps to 133 MHz x 25 = 3.33 GHz. QPI is one of the new extendable building blocks of the Nehalem CPU architecture.
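The clock arithmetic above can be sketched in a few lines of Python; the function name and structure are illustrative only, but the 133 MHz base clock and the 12-25 multiplier range are the ones used in this article.

```python
# Sketch of how Core i7 derives its clock: effective clock = base clock (BCLK) x multiplier.
# The multiplier range of 12 (idle) to 25 (full load) matches the article's Core i7 975 example.
BCLK_MHZ = 133

def effective_clock_mhz(multiplier: int) -> int:
    """Return the effective core clock in MHz for a given multiplier."""
    return BCLK_MHZ * multiplier

idle = effective_clock_mhz(12)   # idle state, lowest multiplier
load = effective_clock_mhz(25)   # full load on the Core i7 975

print(idle)  # 1596 MHz
print(load)  # 3325 MHz, i.e. roughly 3.33 GHz
```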
QPI uses links of up to 6400 MT/s (million transfers per second) on the top-range products, as demonstrated today on the 975 processor.
Triple channel memory controller
With the arrival of Core i7 we learned about on-CPU integrated memory controllers for DDR3 SDRAM with one to three 64-bit memory channels (physically four, though only three are active): a triple-channel memory controller. As such, the total memory bus width goes up from 128 bits to 192 bits, allowing a massive bandwidth increase as the channels are no longer tied to the FSB.
Intel eliminated those 'FSB brakes' by designing Nehalem's architecture around 64-bit memory controllers connected directly to the processor's silicon. As a result, this new design should bring bandwidth utilization of as much as 90%, a nice jump from today's 50-60% utilization for sure. The new controller of course supports both registered (server market) and unbuffered (consumer) memory DIMMs. The controller is fast... very fast, and supports the DDR3-800, DDR3-1066 and DDR3-1333 JEDEC standards, yet has room for future scalability. The memory controller is able to handle 64 GB/s, while a full tri-channel DDR3-1333 implementation will only amount to 32 GB/s of maximum bandwidth. Do the math and conclude that even DDR3-2000 will not max out the controller.
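The "do the math" above works out as follows: each channel is 64 bits (8 bytes) wide, so peak bandwidth is the transfer rate times 8 bytes times the number of channels. A minimal sketch (function name is ours, figures are the article's):

```python
# Peak theoretical DDR3 bandwidth: transfer rate (MT/s) x 8-byte bus width x channel count.
def peak_bandwidth_gbs(mt_per_s: int, channels: int = 3, bus_bytes: int = 8) -> float:
    """Peak theoretical bandwidth in GB/s for a multi-channel DDR3 configuration."""
    return mt_per_s * bus_bytes * channels / 1000.0

ddr3_1333 = peak_bandwidth_gbs(1333)  # roughly 32 GB/s in triple channel
ddr3_2000 = peak_bandwidth_gbs(2000)  # 48 GB/s, still below the controller's 64 GB/s ceiling

print(round(ddr3_1333), round(ddr3_2000))
```

So even a hypothetical triple-channel DDR3-2000 setup, at 48 GB/s, leaves headroom against the controller's 64 GB/s limit.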
We'll actually try out some DDR3 2133 in triple-channel configuration for this article.
So then, three memory channels per processor, with each channel supporting a maximum of three DIMMs. Again, do the math and a single processor can support a maximum of nine memory slots. You are of course free to use one or two DIMMs, but for optimal performance the minimum would be three, one DIMM per channel. So depending on the motherboard's class and intended use, the board can come configured with three, six or nine memory slots.
Here's a thought: servers are often 2-way SMP systems, and with two Core i7 class Xeon processors the total number of supported memory slots doubles to 18 :) Overall, as our benchmark results will show, the memory bandwidth achieved here with triple channel is nothing short of amazing.
In this review we'll be using 2133 MHz OCZ Blade DDR3 memory. OCZ provided this kit specifically for this review. It is horribly sweet memory, yet it will also be horribly expensive. And it comes at a risk... we did have some issues getting this frequency stable on the X58 motherboards. You might need to go past 1.65V on the DIMM/QPI voltage for it to run properly. So it's definitely not for everyone. But we'll check that out later.