LIFTING THE LID ON DDR5
The latest RAM technology has hit the mainstream, but what difference does it make? Darien Graham-Smith decodes the data to find out
IF YOU BUY a new computer today, it may well be using DDR5 memory. As the name suggests, this represents the fifth generation of the DDR (double data rate) RAM standard, originally laid down in the year 2000. Of course, if you want to delve into the history of memory you can go much further back than that—the Joint Electron Device Engineering Council (JEDEC) has been developing standards for electronic storage since 1944.
For now, though, let’s focus on how DDR5 builds on its predecessor. DDR4 is a versatile standard that officially comes in seven varieties, dubbed DDR4-1600, 1866, 2133, 2400, 2666, 2933, and 3200. You might assume that the numbers indicate the speed at which the different variants run, and that’s basically right—although since DDR transfers data at twice the speed of the internal clock, a DDR4-1600 module actually runs at 800MHz, and so forth.
Take a look at DDR5 options and you’ll see that they start at DDR5-4800 and go all the way up to DDR5-7200. That’s an enormous increase, with the top transfer rates more than doubling—but that doesn’t necessarily mean that DDR5 RAM will run twice as quickly.
DDR5 memory is now common in many modern computers.
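To make the relationship between those model numbers and the actual clock speed concrete, here’s a minimal sketch in Python (the function name is ours, purely for illustration) that converts a module’s advertised transfer rate into its internal clock frequency:

# Rough sketch: the number in a DDR module's name is its transfer rate in
# megatransfers per second; the internal clock runs at half that, because
# DDR moves data on both the rising and falling edge of each clock cycle.
def ddr_clock_mhz(transfer_rate: int) -> float:
    return transfer_rate / 2

for name, rate in [("DDR4-1600", 1600), ("DDR4-3200", 3200),
                   ("DDR5-4800", 4800), ("DDR5-7200", 7200)]:
    print(f"{name}: {rate} MT/s -> {ddr_clock_mhz(rate):.0f}MHz internal clock")

Run it and you get 800MHz for DDR4-1600 and 3,600MHz for DDR5-7200—the same doubling-up described above.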
THE SECRET OF LATENCY
The chips in a RAM module are fast, but they aren’t capable of reading or writing a full 64-bit chunk of data every clock cycle. If you look at the specifications of modern DIMMs, you’ll see that each one comes with a set of timings that indicates how many clock cycles it actually takes to store and fetch data. These are normally written as a series of four numbers, say 16-20-20-34.
To make sense of these numbers, you need to know that RAM is arranged as a virtual grid: the latter three numbers tell you how many clock cycles it takes to select a specific ‘row’ of memory for reading, and the first one tells you how many cycles it will then take to retrieve data from the desired ‘column’.
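To keep those two groups of figures straight, here’s a minimal sketch (the function name and dictionary keys are ours) that splits an example timing string into the first figure and the three row-related ones described above:

# Split a DIMM timing string, e.g. "16-20-20-34", into the first figure
# (the CAS latency) and the remaining three row-related figures.
def split_timings(timing_string: str) -> dict:
    numbers = [int(n) for n in timing_string.split("-")]
    return {"cas_latency": numbers[0], "row_timings": numbers[1:]}

print(split_timings("16-20-20-34"))
# {'cas_latency': 16, 'row_timings': [20, 20, 34]}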
This initial value is called the CAS latency (CAS stands for ‘column address strobe’), and, along with the clock frequency, it’s normally the most significant factor in determining the relative speed of a RAM module. That’s because most RAM operations involve reading a series of values from consecutive columns within a row, so the overhead of row selection comes up relatively infrequently, whereas CAS latency applies to every value that’s accessed. Therefore, a DDR4-3200 module with the timings above takes 16 clock cycles to serve up a requested chunk of data; since the clock runs at 1.6GHz (double data rate, remember), that means it takes 10ns for the RAM module to complete a request for data from a specified location in an active row.
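To make that arithmetic concrete, here’s a minimal sketch (again, the function name is ours) that reproduces the 10ns figure from a module’s transfer rate and CAS latency:

# Latency in nanoseconds = CAS latency (in cycles) divided by the clock
# frequency (in GHz), where the clock runs at half the quoted transfer rate.
def cas_latency_ns(transfer_rate: int, cas_cycles: int) -> float:
    clock_ghz = (transfer_rate / 2) / 1000   # e.g. 3200 MT/s -> 1.6GHz
    return cas_cycles / clock_ghz

print(cas_latency_ns(3200, 16))   # DDR4-3200 with CL16 -> 10.0ns

The same sum works for any module: divide the CAS latency by the internal clock frequency and you get the real-world delay in nanoseconds.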