RAM (pronounced ramm) is an acronym for random access memory, a type of computer memory that can be accessed randomly; that is, any byte of memory can be accessed without touching the preceding bytes. RAM is the most common type of memory found in computers and other devices, such as printers.

Types of RAM

There are two main types of RAM: DRAM (dynamic random access memory) and SRAM (static random access memory).

The two types differ in the technology they use to hold data, with DRAM being by far the more common type. In terms of speed, SRAM is faster: DRAM needs to be refreshed thousands of times per second, while SRAM needs no refreshing at all, which is what makes it faster than DRAM.

DRAM supports access times of about 60 nanoseconds, while SRAM can give access times as low as 10 nanoseconds. Despite SRAM being faster, it's not as commonly used as DRAM because it's more expensive. Both types of RAM are volatile, meaning that they lose their contents when the power is turned off.

Speed Ratings

The next issue to consider is speed. RAM speeds can be quite confusing, as they can be expressed in several ways. Starting with the oldest DDR modules, the basic models run at an internal frequency of 100MHz, while more advanced modules increase the internal clock speed to 133MHz, 166MHz and up to 200MHz.

It might seem logical to refer to these different modules by their internal speeds but, thanks to the double data rate that gives DDR its name, a 100MHz module can carry out a theoretical maximum of 200 million transfers per second, while the 200MHz module can carry out 400 million transfers per second. For this reason, 100MHz DDR is known as DDR-200, 133MHz modules are labelled DDR-266 and so forth.

This is a fairly obvious system, but RAM transfers aren’t very convenient units to work in. It’s much more common to talk about data in terms of bytes. So to make DIMM speeds more easily understandable, they’re also given a “PC-rating”, which expresses their bandwidth in megabytes per second.

PC ratings can be calculated very simply. Each RAM transfer consists of a 64-bit word, or eight bytes. So to convert transfers-per-second into bytes-per-second, you simply multiply by eight. DDR-200 is thus equivalent to PC-1600.
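The arithmetic above can be sketched in a few lines of Python. The function names here are illustrative, not part of any standard:

```python
# Sketch of the DDR naming arithmetic described above (hypothetical names).

def transfers_per_second(internal_mhz: int) -> int:
    """DDR performs two transfers per clock, so a 100 MHz module
    makes 200 million transfers per second."""
    return internal_mhz * 2  # millions of transfers per second

def pc_rating(transfers_millions: int) -> int:
    """Each transfer moves a 64-bit (8-byte) word, so multiplying
    by 8 gives peak bandwidth in MB/sec -- the module's PC rating."""
    return transfers_millions * 8

# 100 MHz DDR -> DDR-200 -> PC-1600
print(pc_rating(transfers_per_second(100)))
```

So a basic 100 MHz DDR module is DDR-200 when named by transfer rate, and PC-1600 when named by bandwidth.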

DDR2 uses almost the same naming conventions, but the chips communicate with the CPU at twice the speed of DDR. The slowest DDR2 is therefore capable of 400 million transfers per second, and is designated DDR2-400, or PC2-3200. As you’d expect, DDR2 goes up to DDR2-800, also known as PC2-6400, and above this there’s a high-end part, based on 266MHz chips, to give DDR2-1066. Its PC-rating is rounded down to PC2-8500 for convenience – its peak bandwidth is more like 8,533MB/sec.

DDR3 extends this process, running the I/O bus at four times the speed of DDR – so the basic part can handle 800 million transfers per second, earning the labels DDR3-800 and PC3-6400, with faster chips being named accordingly.

The maximum standard RAM speeds approved by JEDEC – the body behind the three DDR standards – are DDR-400, DDR2-1066 and DDR3-1600. You may also hear of modules with higher speed ratings, such as DDR2-1250 and DDR3-2000, designed to run at overclocked speeds in enthusiast motherboards.
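As an illustrative check, the JEDEC maxima listed above can be converted to their PC ratings with the same transfers-times-eight rule (the dictionary and loop here are just a sketch):

```python
# Derive each JEDEC maximum's PC rating from its transfer rate
# (millions of transfers/sec * 8 bytes = MB/sec). Illustrative only.
jedec_max = {"DDR": 400, "DDR2": 1066, "DDR3": 1600}

for gen, mtps in jedec_max.items():
    prefix = "PC" + gen[3:]  # "PC", "PC2", "PC3"
    print(f"{gen}-{mtps} = {prefix}-{mtps * 8}")

# DDR-400 comes out as PC-3200 and DDR3-1600 as PC3-12800.
# DDR2-1066 actually runs at 1066 2/3 MT/s (~8,533 MB/sec), so it is
# marketed as PC2-8500 rather than the 8,528 computed here.
```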

RAM cache


A RAM cache is a portion of RAM used to speed up access to data on a disk. The RAM can be part of the disk drive itself (sometimes called a hard disk cache or buffer) or it can be general-purpose RAM in the computer that is reserved for use by the disk drive (sometimes called a soft disk cache). Hard disk caches are more effective, but they are also much more expensive, and therefore smaller. Nearly all modern disk drives include a small amount of internal cache.

A soft disk cache works by storing the most recently accessed data in the RAM cache. When a program needs to access new data, the operating system first checks to see if the data is in the cache before reading it from the disk. Because computers can access data from RAM much faster than from a disk, disk caching can significantly increase performance. Many cache systems also attempt to predict what data will be requested next so they can place that data in the cache ahead of time.
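The check-cache-first behaviour can be sketched as a small read cache. The class and names here are hypothetical; a real OS cache works on fixed-size disk blocks and uses more sophisticated eviction:

```python
# Minimal sketch of a soft disk read cache (illustrative names).
from collections import OrderedDict

class SoftDiskCache:
    def __init__(self, disk: dict, capacity: int = 4):
        self.disk = disk            # a dict stands in for the slow disk
        self.cache = OrderedDict()  # most recently used entries in "RAM"
        self.capacity = capacity

    def read(self, key):
        if key in self.cache:             # fast path: data already in RAM
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.disk[key]            # slow path: go to the disk
        self.cache[key] = value           # keep it for next time
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
        return value
```

Repeated reads of the same key are then served from the in-memory dictionary rather than the simulated disk.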

Although caching improves performance, there is some risk involved. If the computer crashes (due to a power failure, for example), the system may not have time to copy the cache back to the disk. In this case, whatever changes you made to the data will be lost. Usually, however, the cache system updates the disk frequently so that even if you lose some data, it will not be much. Caches that work in this manner are called write-back caches. Another type of disk cache, called a write-through cache, removes the risk of losing data because it only caches data for read operations; write operations are always sent directly to the disk.
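The two write policies can be contrasted in a short sketch. These classes are illustrative, not a real filesystem API:

```python
# Sketch of write-through vs write-back caching (illustrative names).

class WriteThroughCache:
    """Writes always hit the disk immediately, so a crash
    never loses buffered writes."""
    def __init__(self, disk: dict):
        self.disk, self.cache = disk, {}

    def write(self, key, value):
        self.disk[key] = value   # straight to disk
        self.cache[key] = value  # also cached for later reads

class WriteBackCache:
    """Writes land in RAM and reach the disk only on flush();
    faster, but dirty data is lost if power fails first."""
    def __init__(self, disk: dict):
        self.disk, self.dirty = disk, {}

    def write(self, key, value):
        self.dirty[key] = value  # deferred until flush()

    def flush(self):
        self.disk.update(self.dirty)
        self.dirty.clear()
```

With the write-back version, any entries still in `dirty` when the power fails never reach the disk; the write-through version has no such window.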

