Okay, allow me to explain a little of what you will run into with memory timings. First off: latency. We have used that word numerous times already in this article. Latency is the time between when a request is made and when that request is answered. For example, if you are at a restaurant for a meal, the latency would be the time between when you ordered your meal and when you received it. The faster your order is served, the better, right?
Therefore, in memory terms, it is the total time required before data (your meal) can be written to or read from the memory. Latency - lower is better.
Say the packaging of a memory kit lists this: CL9-11-10-30 1.5V (2T). What do the numbers mean? They refer to CAS-tRCD-tRP-tRAS CMD (respectively), and these values are measured in clock cycles.
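As a minimal sketch (not from the article), such a timing string can be split into its named components like this; the format assumed here matches the example above:

```python
import re

def parse_timings(spec):
    # Assumed format: "CL9-11-10-30 1.5V (2T)"
    m = re.match(r"CL(\d+)-(\d+)-(\d+)-(\d+)\s+([\d.]+)V\s+\((\d)T\)", spec)
    if not m:
        raise ValueError("unrecognized timing string: " + spec)
    cas, trcd, trp, tras = (int(x) for x in m.groups()[:4])
    return {
        "CAS": cas,                       # column access latency, in clock cycles
        "tRCD": trcd,                     # RAS-to-CAS delay
        "tRP": trp,                       # row precharge time
        "tRAS": tras,                     # row active time
        "voltage": float(m.group(5)),     # module voltage
        "command_rate": int(m.group(6)),  # 1T or 2T
    }

print(parse_timings("CL9-11-10-30 1.5V (2T)"))
```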
CAS Latency
Undoubtedly, the CAS Latency is one of the most essential timings, and it is also the one most people actually understand. Since data is often accessed sequentially (from the same row), the CPU only needs to select the next column in the row to get the next piece of data. In other words, CAS Latency is the delay between the CAS signal and the availability of valid data on the data pins (DQ). The latency between column accesses therefore plays an important role in the performance of the memory. The lower the latency, the better the performance. However, the memory modules must be capable of supporting such low latency settings.
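Because these timings are measured in clock cycles, the real-world delay depends on the clock speed. A quick sketch of the conversion, using an illustrative DDR3-1600 kit (not necessarily the kit reviewed here) whose I/O clock runs at half its data rate:

```python
def latency_ns(cycles, data_rate_mt_s):
    # DDR memory transfers data twice per clock, so the I/O clock
    # is half the advertised data rate (e.g. DDR3-1600 -> 800 MHz).
    clock_mhz = data_rate_mt_s / 2.0
    # One cycle lasts (1000 / clock_mhz) nanoseconds.
    return cycles * 1000.0 / clock_mhz

# CL9 on DDR3-1600: 9 cycles at 800 MHz = 11.25 ns
print(latency_ns(9, 1600))
```

This also shows why a higher CAS number is not automatically slower: the same 9 cycles take less wall-clock time on a faster-clocked kit.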
tRCD
There is a delay from when a row is activated to when the cell (or column) is activated via the CAS signal and data can be written to or read from a memory cell. This delay is called tRCD. When memory is accessed sequentially, the row is already active and tRCD has little impact. However, if memory is not accessed in a linear fashion, the currently active row must be deactivated and a new row selected/activated. It is in this case that a low tRCD can improve performance. However, like any other memory timing, setting it too low for the module can result in instability.
tRP
tRP is the time required to terminate one row access and begin the next. Another way to look at this is that tRP is the delay required between deactivating the current row and selecting the next row. Therefore, in conjunction with tRCD, the time (or number of clock cycles) required to switch banks (or rows) and select the next cell for reading, writing or refreshing is a combination of tRP and tRCD.
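The combined penalty described above can be sketched as simple arithmetic. This is an assumed simplification that ignores other timings, but it shows why a read that misses the open row costs far more than one that hits it:

```python
def row_miss_cycles(cas, trcd, trp):
    # Worst case when the wrong row is open: precharge the old row (tRP),
    # activate the new one (tRCD), then wait out the column access (CAS).
    return trp + trcd + cas

def row_hit_cycles(cas):
    # Best case: the row is already active, only the column access remains.
    return cas

# Using the CL9-11-10-30 kit from the packaging example:
print(row_miss_cycles(cas=9, trcd=11, trp=10))  # 30 cycles
print(row_hit_cycles(cas=9))                    # 9 cycles
```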
tRAS
Memory architecture is like a spreadsheet, with row upon row and column upon column, each row being one bank. In order for the CPU to access memory, it must first determine which row (or bank) is to be accessed and activate that row via the RAS signal. Once activated, the row can be accessed over and over until the data is exhausted. tRAS is the minimum number of cycles a row must remain active before it may be closed again. This is why tRAS has little effect on overall system performance, but it can impact system stability if set incorrectly.
Command Rate
The Command Rate is the time needed between the chip select signal and when commands can be issued to the RAM module's ICs. Typically this is either 1 or 2 clock cycles, denoted 1T or 2T.
Memory testing is a process of trial and error: seek out the maximum stable settings. This will pretty much devour your free time.
Traditional system: if you are going to overclock, increase the system bus frequency, change the memory timings, and above all alter the memory dividers until your system won't boot. If you are not comfortable with such a thing, hey, this isn't your game then. I recommend lowering the processor's multiplier, then slightly increasing the FSB with relaxed memory timings, and taking it from there timings-wise. For a Core i5/i7 system: change the memory multipliers/dividers in the BIOS, or overclock the base clock, QPI frequency and memory voltage.
64-bit versus 32-bit OS and memory
Going for 4 GB or more? Then go with a 64-bit operating system, please.
Windows 98, who didn't use that OS? What amount of memory did your PC have back then? Right, likely 128 MB. Today we test a system that has 64 times more memory!
Over the years we progressed, and applications have become more and more memory intensive. With Windows XP we moved towards 512 MB as the standard to prevent the OS from swapping to the HDD, and as explained on the previous page, with the latest games we see that certain titles really like 1 GB. All this has happened in just a couple of years.
When Microsoft launched Windows Vista, the biggest memory hog in the world, 1 GB was the bare minimum recommended specification. For Windows 7 they actually recommend 2 GB at minimum. And then there are 64-bit platforms supporting more than 4 GB of memory.
If you use Vista or Windows 7 32-bit, you'll only see roughly 3 GB of RAM!
Can you use 4, 6 or more GB of memory? Yes and no. As far as 32-bit Windows operating systems are concerned, the world ends at 4,096 megabytes. That's it. As an example, take a 4 GB kit: it will run just fine, yet with, for example, Windows 7/Vista 32-bit your memory size will be limited and you'll only have roughly 2.9~3.2 GB of the 4 GB available to you.
To address 4GB of memory you need all 32 bits of the address bus. There is, however, a problem - actually a problem similar to the one IBM faced when designing the original PC. You tend to want more than just memory in a computer - things like graphics cards and hard disks need to be accessible to the computer in order for it to use them. Microsoft calls this MMIO (Memory-Mapped I/O).
If you have a video card with 256 MB of onboard memory, that memory must be mapped within the first 4 GB of address space. If 4 GB of system memory is already installed, part of that address space must be reserved for the graphics memory mapping. The graphics mapping thus overlaps part of the system memory's address range. These conditions reduce the total amount of system memory that is available to the operating system.
So just as the original PC had to carve up the 8086's 1MB addressing range into memory (640K) and "other" (384K), the same problem exists today if you want to fit memory and devices into a 32-bit address range: not all of the available 4GB of address space can be given over to memory.
For a long time this wasn't a problem, because there was a whole 4GB of address space, so devices typically lurked up in the top 1GB of physical address space, leaving the bottom 3GB for memory. And 3GB should be enough for anyone, right?
So what actually happens if you go out and buy 4GB of memory for your PC? Well, it's just like the DOS days - there's a hole in your memory map for the I/O. (Now it's only 25% of the total address space, but it's still a big hole.)
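The arithmetic behind that hole can be sketched in a few lines. The 1 GiB MMIO reservation below is illustrative (the actual size depends on the installed devices), but the 2**32 ceiling is fixed by the 32-bit address bus:

```python
# A 32-bit address bus can reach 2**32 distinct byte addresses.
total_addressable = 2**32          # 4,294,967,296 bytes = 4 GiB

# Assume 1 GiB of that range is reserved for device mappings (MMIO);
# the real figure varies per system.
mmio_hole = 1 * 1024**3

# Whatever RAM falls under the hole is invisible to a 32-bit OS.
usable_ram = total_addressable - mmio_hole

print(total_addressable // 1024**3)  # 4 (GiB addressable in total)
print(usable_ram // 1024**3)         # 3 (GiB left for system memory)
```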
So the bottom 3GB of your memory will be available, but part of that last GB is lost. If you want it all, go with a 64-bit OS. In 64-bit Windows, the limit is gone.
Anyway, let's throw the modules in some tests.
Conclusion: if you want to utilize more than 3GB of memory, make sure you have a 64-bit edition of Windows installed - which, anno 2012, we believe everybody should use.