The first Core i7s integrate both the memory controller and system I/O onto the CPU die, eliminating the Intel Front Side Bus (FSB) altogether. In place of the FSB, one or more high-speed, point-to-point links called QuickPath Interconnect (QPI) are used, formerly known as the Common Serial Interconnect Bus (CSI). QPI offers higher bandwidth than the traditional FSB and is better suited to system scaling. We'll quickly close one eye and hint towards AMD: its HyperTransport links are somewhat similar to what Intel is doing today, namely high-speed, point-to-point processor and inter-component communication. Intel was tied to a front-side bus that had started to interfere with performance. The QPI architecture allows Intel to connect tri- or even quad-channel memory directly to the processor's integrated memory controller.
Intel also added PCI Express links directly onto the CPU die. This should deliver much more bandwidth for high-performance graphics cards and remove bottlenecks with other system components.
So you're asking how that works in relation to, say, overclocking. Well, in fact it's still there, just functioning in a different way. Imagine a single-pumped FSB at 133 MHz and apply a flexible multiplier to it, say with a range of 12 to 25. Depending on the workload, the processor will dynamically alter that multiplier. So in an idle state you'd see an effective 12 x 133 MHz = 1596 MHz processor clock frequency. This is how Core i7 derives its clock.
At full load, a Core i7 975 processor would jump to 133 MHz x 25 = 3.3 GHz. QPI is one of the new extendable building blocks of the Nehalem CPU architecture.
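The base-clock-times-multiplier relationship above can be sketched in a few lines of Python; the 133 MHz BCLK and the 12-25 multiplier range come from the article, while the function name is ours:

```python
# Sketch of how Core i7 derives its core clock from the base clock (BCLK)
# and a variable multiplier, per the article's 133 MHz / 12-25x example.

def core_clock_mhz(bclk_mhz: float, multiplier: int) -> float:
    """Effective core frequency = base clock x multiplier."""
    return bclk_mhz * multiplier

print(core_clock_mhz(133, 12))   # idle, lowest multiplier: 1596.0 MHz
print(core_clock_mhz(133, 25))   # Core i7 975 at full load: 3325.0 MHz, ~3.3 GHz
```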
QPI uses links running at up to 6400 MT/s (million transfers per second) on the top-range products and, as shown today, on the 975 processor.
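As a back-of-the-envelope check on what 6400 MT/s means, assuming the commonly quoted 16 data bits (2 bytes) carried per direction per transfer on a full-width QPI link:

```python
# Rough QPI bandwidth estimate; the 2-bytes-per-transfer figure is the
# commonly quoted data width of a full QPI link, not from the article.
transfers_per_sec = 6_400_000_000     # 6400 MT/s
bytes_per_transfer = 2                # 16 data bits per direction
one_way_gbs = transfers_per_sec * bytes_per_transfer / 1e9
print(one_way_gbs)                    # 12.8 GB/s each way, 25.6 GB/s combined
```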
Triple-channel memory controller
With the arrival of Core i7 we learned about on-CPU integrated memory controllers for DDR3 SDRAM with one to three 64-bit memory channels (physically four, though only three are active): a triple-channel memory controller. As such, the total memory bus width goes up from 128 bits to 192 bits, allowing a massive bandwidth increase as memory is no longer tied to the FSB.
Intel eliminated those 'FSB brakes' by designing Nehalem's architecture to use 64-bit memory controllers connected directly to the processor's silicon. As a result, this new design should bring bandwidth utilization of as much as 90%, a nice jump from today's 50-60% utilization for sure. The new controller of course supports both registered (server market) and unregistered (consumer) memory DIMMs. The controller is fast... very fast, and supports the DDR3-800, DDR3-1066 and DDR3-1333 JEDEC standards, yet has room for future scalability. The memory controller is able to handle 64 GB/s, while a full triple-channel DDR3-1333 implementation will only amount to 32 GB/s maximum bandwidth. Do the math and you'll conclude that even DDR3-2000 will not max out the controller.
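"Do the math" works out as follows, assuming each channel is 64 bits (8 bytes) wide; the helper function name is ours:

```python
# Peak bandwidth of a triple-channel DDR3 setup:
# transfers/s x 8 bytes per channel x number of channels.
def peak_bandwidth_gbs(mts: int, channels: int = 3, bus_bytes: int = 8) -> float:
    return mts * bus_bytes * channels / 1000

print(peak_bandwidth_gbs(1333))   # ~32 GB/s for DDR3-1333
print(peak_bandwidth_gbs(2000))   # 48 GB/s for DDR3-2000, still under the 64 GB/s ceiling
```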
We'll actually try out some DDR3-2133 in a triple-channel configuration for this article.
So then: three memory channels per processor, with each channel supporting a maximum of three DIMMs. Again, do the math and a single processor can support a maximum of nine memory slots. You are of course free to use one or two DIMMs, but for optimal performance the minimum is three, one DIMM per channel. So depending on its market segment, a motherboard can come configured with three, six or nine memory slots.
Here's a thought: servers are often 2-way SMP systems, and with two Core i7 class Xeon processors the total number of supported memory slots doubles to 18 :) Overall, as our benchmark results will show, the memory bandwidth created here with triple-channel is nothing short of amazing.
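The slot arithmetic from the last two paragraphs, spelled out:

```python
# Memory slot count: channels per CPU x DIMMs per channel,
# doubled again in a 2-way Xeon system.
channels_per_cpu = 3
dimms_per_channel = 3
slots_per_cpu = channels_per_cpu * dimms_per_channel
print(slots_per_cpu)       # 9 slots per processor
print(slots_per_cpu * 2)   # 18 slots in a 2-way SMP system
```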
In this review we'll be using 2133 MHz OCZ Blade DDR3 memory, a kit OCZ provided specifically for this review. It is horribly sweet memory, yet it will also be horribly expensive. And it comes at a risk... we did have some issues getting this frequency stable on the X58 motherboards. You might need to push past 1.65V DIMM/QPI voltage for it to run properly, so it's definitely not for everyone. But we'll check that out later.