Friday, August 3, 2007

DIFFERENT TYPES OF RAM

Different RAM Types and Their Uses
Intro
The type of RAM doesn't matter nearly as much as how much of it you've got, but using plain old SDRAM memory today will slow you down. There are three main types of RAM: SDRAM, DDR and Rambus DRAM.
SDRAM (Synchronous DRAM)
Almost all systems used to ship with 3.3 volt, 168-pin SDRAM DIMMs. SDRAM is not an extension of older EDO DRAM but a new type of DRAM altogether. SDRAM started out running at 66 MHz, while older fast page mode DRAM and EDO max out at 50 MHz. SDRAM is able to scale to 133 MHz (PC133) officially, and unofficially up to 180MHz or higher. As processors get faster, new generations of memory such as DDR and RDRAM are required to get proper performance.
DDR (Double Data Rate SDRAM)
DDR basically doubles the data transfer rate of standard SDRAM by transferring data on both the rising and falling edges of each clock cycle. DDR rated at 333MHz actually runs a 166MHz clock with two transfers per cycle (DDR333, sold as PC2700 DIMMs), just as 266MHz DDR runs at 133MHz * 2 (DDR266 / PC2100). DDR is a 2.5 volt technology that uses 184 pins in its DIMMs. It is physically incompatible with SDRAM, but uses a similar parallel bus, making it easier to implement than RDRAM, which is a different technology.
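The doubling described above can be checked with simple arithmetic: peak bandwidth in MB/s is the bus clock, times two transfers per cycle, times the 8 bytes (64 bits) a DIMM moves per transfer. A minimal sketch, using the speed grades this article mentions:

```python
def ddr_peak_bandwidth_mb_s(bus_clock_mhz):
    """Peak DDR bandwidth: clock x 2 transfers/cycle x 8 bytes per transfer."""
    return bus_clock_mhz * 2 * 8

# PC1600 DIMMs use a 100 MHz bus, PC2100 a 133 MHz bus, PC3200 a 200 MHz bus.
print(ddr_peak_bandwidth_mb_s(100))  # 1600 MB/s -> "PC1600"
print(ddr_peak_bandwidth_mb_s(133))  # 2128 MB/s, marketed (rounded) as "PC2100"
print(ddr_peak_bandwidth_mb_s(200))  # 3200 MB/s -> "PC3200"
```

The "PC" number on a DDR DIMM is just this peak figure, which is why PC1600 memory pairs with 100MHz-bus (PC200 chip) parts.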
Rambus DRAM (RDRAM)
Despite its higher price, Intel has given RDRAM its blessing for the consumer market, and it will be the sole memory choice for Intel's Pentium 4. RDRAM is a serial memory technology that arrived in three flavors: PC600, PC700, and PC800. PC800 RDRAM has double the maximum throughput of old PC100 SDRAM, but a higher latency. RDRAM designs with multiple channels, such as those in Pentium 4 motherboards, are currently at the top of the heap in memory throughput, especially when paired with PC1066 RDRAM memory.

DIMMs vs. RIMMs
DRAM comes in two major form factors: DIMMs and RIMMs.
DIMMs are 64-bit components, but if used in a motherboard with a dual-channel configuration (like with an Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot. DIMMs for SDRAM and DDR are different and not physically compatible: SDRAM DIMMs have 168 pins and run at 3.3 volts, while DDR DIMMs have 184 pins and run at 2.5 volts.
RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maximum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-channel 32-bit interface. You have to plan more when upgrading and purchasing RDRAM.

From the top: SIMM, DIMM and SODIMM memory modules
Memory Speed
SDRAM initially shipped at a speed of 66MHz. As memory buses got faster, it was pumped up to 100MHz, and then 133MHz. The speed grades are referred to as PC66 (unofficially), PC100 and PC133 SDRAM respectively. Some manufacturers are shipping a PC150 speed grade. However, this is an unofficial speed rating, and of little use unless you plan to overclock your system.
DDR comes in PC1600, PC2100, PC2700 and PC3200 DIMMs. A PC1600 DIMM is made up of PC200 DDR chips, while a PC2100 DIMM is made up of PC266 chips. PC2700 uses PC333 DDR chips, and PC3200 uses PC400 chips that haven't gained widespread support. Go for PC2700 DDR: it costs about the same as PC2100 memory and will give you better performance.
RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Go for PC1066 RDRAM if you can find it. If you can't, PC800 RDRAM is widely available.
CAS Latency
SDRAM comes with latency ratings or "CAS (Column Address Strobe) latency" ratings. Standard PC100 / PC133 SDRAM comes in CAS 2 or CAS 3 speed ratings. The lower latency of CAS 2 memory will give you more performance. It also costs a bit more, but it's worth it.
DDR memory comes in CAS 2 and CAS 2.5 ratings, with CAS 2 costing more and performing better.
RDRAM has no CAS latency ratings. It may eventually come in 32-bank and 4-bank forms, with 32-bank RDRAM costing more and performing better; for now, it's all 32-bank RDRAM.
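CAS ratings are counted in bus clock cycles, so the real delay in nanoseconds depends on the clock speed. This little sketch shows why CAS 2 beats CAS 3 at the same clock:

```python
def cas_delay_ns(cas_cycles, bus_clock_mhz):
    """Convert a CAS rating (in clock cycles) to nanoseconds at a given bus clock."""
    cycle_time_ns = 1000.0 / bus_clock_mhz   # length of one clock cycle in ns
    return cas_cycles * cycle_time_ns

# PC133 SDRAM: CAS 2 vs CAS 3
print(cas_delay_ns(2, 133))  # ~15 ns
print(cas_delay_ns(3, 133))  # ~22.6 ns
```

The same arithmetic explains why DDR's CAS 2 vs CAS 2.5 gap matters less at higher clocks: each cycle is shorter, so half a cycle costs fewer nanoseconds.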
Understanding Cache
Cache Memory is fast memory that serves as a buffer between the processor and main memory. The cache holds data that was recently used by the processor and saves a trip all the way back to slower main memory. The memory structure of PCs is often thought of as just main memory, but it's really a five or six level structure:
The first two levels of memory are contained in the processor itself, consisting of the processor's small internal memory, or registers, and L1 cache, which is the first level of cache, usually contained in the processor.
The third level of memory is the L2 cache, usually contained on the motherboard. However, the Celeron chip from Intel actually contains 128K of L2 cache within the form factor of the chip. More and more chip makers are planning to put this cache on board the processor itself. The benefit is that it will then run at the same speed as the processor, and cost less to put on the chip than to set up a bus and logic externally from the processor.
The fourth level is referred to as L3 cache. This cache used to be the L2 cache on the motherboard, but now that some processors include L1 and L2 cache on the chip, it becomes L3 cache. Usually, it runs slower than the processor, but faster than main memory.
The fifth level (or fourth if you have no "L3 cache") of memory is the main memory itself.
The sixth level is a piece of the hard disk used by the Operating System, usually called virtual memory. Most operating systems use this when they run out of main memory, but some use it in other ways as well.
This six-tiered structure is designed to efficiently speed data to the processor when it needs it, and also to allow the operating system to function when levels of main memory are low. You might ask, "Why is all this necessary?" The answer is cost. If there were one type of super-fast, super-cheap memory, it could theoretically satisfy the needs of this entire memory architecture. This will probably never happen since you don't need very much cache memory to drastically improve performance, and there will always be a faster, more expensive alternative to the current form of main memory.
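The buffering idea behind each cache level can be sketched in a few lines of code: look in the small, fast store first, and fall back to the slower level only on a miss. The latency figures below are illustrative placeholders, not measurements:

```python
# Illustrative latencies in ns (hypothetical figures, for flavor only).
LATENCY = {"cache": 2, "main": 60}

cache = {}                                               # toy cache: address -> value
main_memory = {addr: addr * 2 for addr in range(1024)}   # toy main memory

def read(addr):
    """Return (value, cost_ns): hit the cache if possible, else go to main memory."""
    if addr in cache:
        return cache[addr], LATENCY["cache"]
    value = cache[addr] = main_memory[addr]   # fill the cache on a miss
    return value, LATENCY["main"]

v1, cold = read(42)   # first access: miss, pays main-memory latency (60 ns)
v2, warm = read(42)   # second access: hit, pays only cache latency (2 ns)
```

This is why "you don't need very much cache memory to drastically improve performance": recently used data is read again and again, and every repeat hit skips the trip to main memory.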
Memory Redundancy
One important aspect to consider in memory is what level of redundancy you want. There are a few different levels of redundancy available in memory. Depending on your motherboard, it may support all or some of these types of memory:
The cheapest and most prevalent level of redundancy is non-parity memory. When non-parity memory encounters a memory error, the operating system has no way of knowing: the machine will most likely crash, but it could also silently corrupt data. This is the most common type of memory, and unless specified otherwise, it's what you're getting. It works fine for most applications, but I wouldn't run life support systems on it.
The second level of redundancy is parity memory (also called true parity). Parity memory has extra chips that act as parity chips. Thus, the chip will be able to detect when a memory error has occurred and signal the operating system. You'll probably still crash, but at least you'll know why.
The third level of redundancy is ECC (Error Checking and Correcting). This requires even more logic and is usually more expensive. Not only does it detect memory errors, it also corrects 1-bit errors; a 2-bit error will still cause problems. Some motherboards support ECC memory.
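The detection logic behind parity memory is simple: one extra bit per byte records whether the count of 1 bits is even or odd, so any single flipped bit is noticed (though, as with true parity, not corrected). A minimal sketch:

```python
def parity_bit(byte):
    """Even-parity bit for one byte: 1 if the count of 1 bits is odd, else 0."""
    return bin(byte).count("1") % 2

def check(byte, stored_parity):
    """True if the byte still matches the parity bit stored alongside it."""
    return parity_bit(byte) == stored_parity

data = 0b10110010                # four 1 bits -> parity bit 0
p = parity_bit(data)
flipped = data ^ 0b00000100      # a single bit flips in memory
print(check(data, p))            # True: no error
print(check(flipped, p))         # False: error detected, OS can be signaled
```

ECC extends this idea with more check bits per word, which is what lets it not just detect a 1-bit error but pinpoint and correct it.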
Older memory types
Fast Page Mode DRAM
Fast Page Mode DRAM is plain old DRAM as we once knew it. The problem with standard DRAM is that it maxes out at about 50 MHz.
EDO DRAM
EDO DRAM gave people up to 5% system performance increase over DRAM. EDO DRAM is like FPM DRAM with some cache built into the chip. Like FPM DRAM, EDO DRAM maxes out at about 50 MHz. Early on, some system makers claimed that if you used EDO DRAM you didn't need L2 cache in your computer to get decent performance. They were wrong. It turns out that EDO DRAM works along with L2 cache to make things even faster, but if you lose the L2 cache, you lose a lot of speed.

RDRAM - RIMM
Rambus, Inc., in conjunction with Intel, created a new technology, Direct RDRAM, to increase memory access speed. The in-line memory modules, called RIMMs, appeared on motherboards sometime during 1999. They have 184 pins and provide 1.6 GB per second of peak bandwidth in 16-bit chunks. As chip speeds get faster, memory access speeds up, and so does the amount of heat produced. An aluminum sheath, called a heat spreader, covers the module to protect the chips from overheating.

SO RIMM
A SO RIMM is similar in appearance to a SODIMM but uses Rambus technology.

Technology

DRAM (Dynamic Random Access Memory)
One of the most common types of computer memory (RAM). It can only hold data for a short period of time and must be refreshed periodically. DRAMs are measured by storage capability and access time.

Storage is rated in megabytes (8 MB, 16 MB, etc).

Access time is rated in nanoseconds (60ns, 70ns, 80ns, etc) and represents the time needed to save or return information. With 60ns DRAM, saving or returning information takes 60 billionths of a second. The lower the access time, the faster the memory operates.
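Those nanosecond ratings translate directly into a ceiling on how often a chip can respond, which is a quick way to compare speed grades:

```python
def max_accesses_per_second(access_time_ns):
    """Upper bound on memory accesses per second for a given access time."""
    return 1e9 / access_time_ns   # 1 second = 1e9 nanoseconds

print(max_accesses_per_second(60))  # ~16.7 million accesses/s for 60ns DRAM
print(max_accesses_per_second(50))  # 20 million accesses/s for 50ns DRAM
```

This is why a drop from 70ns to 60ns is a meaningful upgrade even though the numbers look close.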

DRAM chips require two CPU wait states for each execution.

Can only execute either a read or write operation at one time.

FPM (Fast Page Mode)
At one time, this was the most common and was often just referred to as DRAM. It offered faster access to data located within the same row.

EDO (Extended Data Out)
Newer than DRAM (1995) and requires only one CPU wait state. You can gain a 10 to 15% improvement in performance with EDO memory.

BEDO (Burst Extended Data Out)
A step up from the EDO chips. It requires zero wait states and provides at least another 13 percent increase in performance.

SDRAM (Synchronous DRAM)
Introduced in late 1996, SDRAM synchronizes itself with the timing of the CPU instead of running asynchronously. It also takes advantage of interleaving and burst mode functions. SDRAM is faster and more expensive than conventional DRAM, and comes in speeds of 66, 100, 133, 200, and 266MHz.

DDR SDRAM (Double Data Rate Synchronous DRAM)
Allows transactions on both the rising and falling edges of the clock cycle. It has a bus clock speed of 100MHz and will yield an effective data transfer rate of 200MHz.

Direct Rambus
Extraordinarily fast. By transferring data on both clock edges, it provides a transfer rate of up to 1.6 GB/s (an effective 800MHz) over a narrow 16-bit bus.
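The 1.6 GB/s figure follows directly from the narrow-but-fast design: 16 bits (2 bytes) per transfer at an effective 800MHz:

```python
def rdram_bandwidth_mb_s(effective_mhz, bus_width_bits=16):
    """Peak bandwidth: effective transfer rate x bus width in bytes."""
    return effective_mhz * (bus_width_bits // 8)

print(rdram_bandwidth_mb_s(800))   # 1600 MB/s = 1.6 GB/s, matching PC800 RDRAM
print(rdram_bandwidth_mb_s(1066))  # 2132 MB/s per channel for PC1066
```

Contrast this with SDRAM's approach of a wide (64-bit) but slower bus; Rambus trades width for clock speed.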

Cache RAM
This is where SRAM is used for storing information required by the CPU. It comes in kilobyte sizes of 128KB, 256KB, etc.

Other Memory Types
VRAM (Video RAM)
VRAM is a video version of FPM and is most often used in video accelerator cards. Because it has two ports, it offers a benefit over DRAM: it can execute simultaneous read and write operations. One channel is used to refresh the screen while the other manages image changes. VRAM tends to be more expensive.

Flash Memory
This is a solid-state, nonvolatile, rewritable memory that functions like RAM and a hard disk combined. If power is lost, all data remains in memory. Because of its high speed, durability, and low voltage requirements, it is ideal for digital cameras, cell phones, printers, handheld computers, pagers and audio recorders.

Shadow RAM
When your computer starts up (boots), minimal instructions for performing the startup procedures and video controls are stored in ROM (Read Only Memory) in what is commonly called BIOS. ROM executes slowly. Shadow RAM allows for the capability of moving selected parts of the BIOS code from ROM to the faster RAM memory.

Memory (RAM) and its influence on performance
It's been proven that adding more memory to a computer system increases its performance. If there isn't enough room in memory for all the information the CPU needs, the computer has to set up what's known as a virtual memory file. In so doing, the CPU reserves space on the hard disk to simulate additional RAM. This process, referred to as "swapping", slows the system down. In an average computer, it takes the CPU approximately 200ns (nanoseconds) to access RAM compared to 12,000,000ns to access the hard drive. To put this into perspective, this is equivalent to what's normally a 3 1/2 minute task taking 4 1/2 months to complete!
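The "3 1/2 minutes versus 4 1/2 months" comparison checks out with the numbers quoted above: the disk is 60,000 times slower, so a 3.5-minute task scales to roughly 146 days:

```python
ram_ns, disk_ns = 200, 12_000_000      # access times quoted in the article
slowdown = disk_ns / ram_ns            # how many times slower the disk is
task_minutes = 3.5
disk_days = task_minutes * slowdown / 60 / 24   # minutes -> hours -> days
print(slowdown)    # 60000.0
print(disk_days)   # ~145.8 days, i.e. roughly 4 1/2 months
```

The point of the exercise: every access that spills out of RAM onto the swap file pays this penalty, which is why swapping dominates perceived performance.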
Why does the RAM memory influence the computer performance?
Strictly speaking, RAM has no direct influence on processor performance: adding RAM does not make the processor work faster, that is, it does not increase the processor's raw processing power.
So what is the relationship between RAM and performance? The story is not as simple as it seems, and we need to explain a little more about how the computer works for you to understand it.
The processor fetches the instructions it executes from RAM. If those instructions are not in RAM, they must first be transferred from the hard disk (or from another storage device, such as a floppy disk, CD-ROM or Zip disk) to RAM - the well-known process of "loading" a program.
A greater amount of RAM therefore means that more instructions fit into memory, so bigger programs can be loaded at once. All current operating systems are multitasking, meaning we can run more than one program at a time: you can, for example, have a word processor and a spreadsheet open ("loaded") in RAM simultaneously. Depending on how much RAM your computer has, however, those programs may contain too many instructions to fit in RAM at the same time (or even alone, for a large enough program).
In principle, if you ask the computer to load a program that does not fit in RAM - because little is installed or because RAM is already full - the operating system would have to show a message like "Insufficient Memory".
That does not happen, thanks to a feature that every processor since the 386 supports: virtual memory. With this feature, the operating system creates a file on the hard disk, called the swap file, that is used to hold RAM contents. If you attempt to load a program that does not fit in RAM, the operating system moves to the swap file parts of programs that are currently in RAM but not being accessed, freeing space so the new program can be loaded. When you need to access a part of a program that was moved to disk, the opposite happens: the system pages out memory that is not in use at the time and transfers the original contents back.

The problem is that the hard disk is a mechanical system, not an electronic one, so data transfer between the hard disk and RAM is much slower than data transfer between the processor and RAM. To get an idea of the magnitudes involved: the processor typically communicates with RAM at a transfer rate of 800 MB/s (100 MHz bus), while hard disks transfer data at rates such as 33 MB/s, 66 MB/s and 100 MB/s, depending on their technology (DMA/33, DMA/66 and DMA/100, respectively).
So every time the computer has to exchange data between memory and the swap file on the hard disk, you notice a slowdown, since the exchange is not immediate.
When we install more RAM, the probability of running out of memory and needing to exchange data with the swap file is smaller, and the computer therefore feels faster than before.
To make this clearer, suppose your computer has 64 MB of RAM and all the programs loaded (open) at the same time use 100 MB. The system is then using virtual memory, swapping with the hard disk. If that same computer had 128 MB, no swapping would be needed (with the same programs loaded), making the computer faster.
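The 64 MB versus 128 MB example boils down to a simple shortfall calculation. This is a toy model (real operating systems page selectively and reserve memory for themselves), but it captures the idea:

```python
def swap_needed_mb(installed_ram_mb, working_set_mb):
    """Toy model: how much of the working set spills into the swap file (0 if it fits)."""
    return max(0, working_set_mb - installed_ram_mb)

print(swap_needed_mb(64, 100))   # 36 MB must live in the slow swap file
print(swap_needed_mb(128, 100))  # 0 -> no swapping, so the machine feels faster
```

Any nonzero result means the slow disk path from the previous section gets exercised, which is the whole performance story.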
The more peripherals you add to a computer, or the more advanced applications you ask it to perform, the more RAM it needs to operate smoothly.
Virtual Memory and its influences on performance

While virtual memory makes it possible for computers to more easily handle larger and more complex applications, as with any powerful tool, it comes at a price. The price in this case is performance: a virtual memory operating system has a lot more to do than an operating system that is not capable of virtual memory. This means that performance will never be as good with virtual memory as with the same application running 100% memory-resident.
However, this is no reason to throw up one's hands and give up; the benefits of virtual memory are too great for that. And, with a bit of effort, good performance is possible. The key is to look at the system resources that are impacted by heavy use of the virtual memory subsystem.
Worst Case Performance Scenario
For a moment, take what you have read earlier, and consider what system resources are used by extremely heavy page fault and swapping activity:
• RAM -- It stands to reason that available RAM will be low (otherwise there would be no need to page fault or swap).
• Disk -- While disk space would not be impacted, I/O bandwidth would be.
• CPU -- The CPU will be expending cycles doing the necessary processing to support memory management and setting up the necessary I/O operations for paging and swapping.
The interrelated nature of these loads makes it easy to see how resource shortages can lead to severe performance problems. All it takes is:
• A system with too little RAM
• Heavy page fault activity
• A system running near its limit in terms of CPU or disk I/O
At this point, the system will be thrashing, with performance rapidly decreasing.

Best Case Performance Scenario
At best, the virtual memory subsystem presents only a minimal additional load to a well-configured system:
• RAM -- Sufficient RAM for all working sets with enough left over to handle any page faults
• Disk -- Because of the limited page fault activity, disk I/O bandwidth would be minimally impacted
• CPU -- The majority of CPU cycles will be dedicated to actually running applications, instead of memory management
From this, the overall point to keep in mind is that the performance impact of virtual memory is minimal when it is used as little as possible. This means that the primary determinant of good virtual memory subsystem performance is having enough RAM.
