Friday, August 3, 2007

SIMM AND DIMM MODULES

What is a SIMM?
A SIMM is a Single In-line Memory Module.
A SIMM is a small circuit board designed to hold a set of RAM chips.
Two types of SIMMs have been in general use: 30-pin SIMMs and 72-pin SIMMs.
30-pin SIMMs have 8-bit data buses; 72-pin SIMMs have 32-bit data buses.
The pinouts for SIMMs are not completely standard. Some manufacturers, such as HP, Compaq, and IBM, used non-standard pinouts in their proprietary SIMMs. If you are upgrading SIMMs, be sure to use compatible SIMMs.
SIMMs have largely been replaced by DIMMs.









What is a DIMM?
A DIMM is a Dual Inline Memory Module.
A DIMM is a small circuit board designed to hold a set of RAM chips.
A DIMM has a 64-bit data bus, which allows DIMM's to be connected one at a time to a CPU with a 64-bit data bus.
DIMM Sizes
DIMM's are available in several different physical sizes.
Contact Count   Usage
72              SODIMM
144             SODIMM
168             SDRAM
184             DDR SDRAM
240             DDR2 SDRAM
DIMM contacts are counted across both sides of the module, so a DIMM with 72 contacts has 36 contacts on each side. Unlike SIMM contacts, the contacts on opposite sides of a DIMM are electrically independent.
DIMM Types
DIMM modules are sold according to clock speed (MHz), bus speed (megatransfers per second), and transfer rate (megabytes per second).
Standard SDR SDRAM DIMMs
DIMM Module   Chip Type   Clock Speed (MHz)   Bus Speed (MT/s)   Transfer Rate (MB/s)
PC66          10ns        66                  66                 533
PC100         8ns         100                 100                800
PC133         7.5/7ns     133                 133                1,066
Standard DDR SDRAM DIMMs
DIMM Module   Chip Type   Clock Speed (MHz)   Bus Speed (MT/s)   Transfer Rate (MB/s)
PC1600        DDR200      100                 200                1,600
PC2100        DDR266      133                 266                2,133
PC2400        DDR300      150                 300                2,400
PC2700        DDR333      166                 333                2,667
PC3000        DDR366      183                 366                2,933
PC3200        DDR400      200                 400                3,200
PC3500        DDR433      216                 433                3,466
PC3700        DDR466      233                 466                3,733
PC4000        DDR500      250                 500                4,000
PC4300        DDR533      266                 533                4,266
Standard DDR2 SDRAM DIMMs
DIMM Module   Chip Type   Clock Speed (MHz)   Bus Speed (MT/s)   Transfer Rate (MB/s)
PC2-3200      DDR2-400    200                 400                3,200
PC2-4300      DDR2-533    266                 533                4,266
PC2-5400      DDR2-667    333                 667                5,333
PC2-6400      DDR2-800    400                 800                6,400
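These figures are related by simple arithmetic: a DIMM has a 64-bit (8-byte) data bus, and DDR/DDR2 modules perform two transfers per clock listed in the tables. A minimal Python sketch that reproduces a few of the rows above:

    # A quick check of the tables above: a DIMM moves 8 bytes (64 bits)
    # per transfer, and DDR/DDR2 perform two transfers per listed clock.
    def transfer_rate_mb_s(clock_mhz, transfers_per_clock=1):
        bus_mt_s = clock_mhz * transfers_per_clock  # megatransfers/second
        return bus_mt_s * 8                         # 8 bytes per transfer

    print(transfer_rate_mb_s(100))      # PC100 SDR       -> 800
    print(transfer_rate_mb_s(200, 2))   # PC3200 DDR400   -> 3200
    print(transfer_rate_mb_s(400, 2))   # PC2-6400 DDR2   -> 6400
    # PC66 computes to 528 rather than the listed 533 because the real
    # clock is 66.6 MHz, not a flat 66.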
168 Contact DIMMs
The earliest DIMMs were of the 168 contact variety. These DIMMs have come in a variety of configurations. It is possible to determine the configuration of the DIMM by examining the placement of notches on the bottom of the DIMM.

DIMM Notch Key One           DIMM Notch Key Two
Position   Meaning           Position   Meaning
1          Reserved          1          5.0V
2          Buffered          2          3.3V
3          Unbuffered        3          Reserved

ROM

Read-Only Memory (ROM)
One major type of memory that is used in PCs is called read-only memory, or ROM for short. ROM is a type of memory that normally can only be read, as opposed to RAM which can be both read and written. There are two main reasons that read-only memory is used for certain functions within the PC:
• Permanence: The values stored in ROM are always there, whether the power is on or not. A ROM can be removed from the PC, stored for an indefinite period of time, and then replaced, and the data it contains will still be there. For this reason, it is called non-volatile storage. A hard disk is also non-volatile, for the same reason, but regular RAM is not.
• Security: The fact that ROM cannot easily be modified provides a measure of security against accidental (or malicious) changes to its contents. You are not going to find viruses infecting true ROMs, for example; it's just not possible. (It's technically possible with erasable EPROMs, though in practice never seen.)
Read-only memory is most commonly used to store system-level programs that we want to have available to the PC at all times. The most common example is the system BIOS program, which is stored in a ROM called (amazingly enough) the system BIOS ROM. Having this in a permanent ROM means it is available when the power is turned on so that the PC can use it to boot up the system. Remember that when you first turn on the PC the system memory is empty, so there has to be something for the PC to use when it starts up.
While the whole point of a ROM is supposed to be that the contents cannot be changed, there are times when being able to change the contents of a ROM can be very useful. There are several ROM variants that can be changed under certain circumstances; these can be thought of as "mostly read-only memory". :^) The following are the different types of ROMs with a description of their relative modifiability:
• ROM: A regular ROM is constructed from hard-wired logic, encoded in the silicon itself, much the way that a processor is. It is designed to perform a specific function and cannot be changed. This is inflexible, so regular ROMs are generally used only for programs that are static (not changing often) and mass-produced. This product is analogous to a commercial software CD-ROM that you purchase in a store.
• Programmable ROM (PROM): This is a type of ROM that can be programmed using special equipment; it can be written to, but only once. This is useful for companies that make their own ROMs from software they write, because when they change their code they can create new PROMs without requiring expensive equipment. This is similar to the way a CD-ROM recorder works by letting you "burn" programs onto blanks once and then letting you read from them many times. In fact, programming a PROM is also called burning, just like burning a CD-R, and it is comparable in terms of its flexibility.
• Erasable Programmable ROM (EPROM): An EPROM is a ROM that can be erased and reprogrammed. A little glass window is installed in the top of the ROM package, through which you can actually see the chip that holds the memory. Ultraviolet light of a specific frequency can be shined through this window for a specified period of time, which will erase the EPROM and allow it to be reprogrammed again. Obviously this is much more useful than a regular PROM, but it does require the erasing light. Continuing the "CD" analogy, this technology is analogous to a reusable CD-RW.
• Electrically Erasable Programmable ROM (EEPROM): The next level of erasability is the EEPROM, which can be erased under software control. This is the most flexible type of ROM, and is now commonly used for holding BIOS programs. When you hear reference to a "flash BIOS" or doing a BIOS upgrade by "flashing", this refers to reprogramming the BIOS EEPROM with a special software program. Here we are blurring the line a bit between what "read-only" really means, but remember that this rewriting is done maybe once a year or so, compared to real read-write memory (RAM) where rewriting is done often many times per second!
Note: One thing that sometimes confuses people is that since RAM is the "opposite" of ROM (since RAM is read-write and ROM is read-only), and since RAM stands for "random access memory", they think that ROM is not random access. This is not true; any location can be read from ROM in any order, so it is random access as well, just not writeable. RAM gets its name because earlier read-write memories were sequential, and did not allow random access.

Finally, one other characteristic of ROM, compared to RAM, is that it is much slower, typically having double the access time of RAM or more. This is one reason why the code in the BIOS ROM is often shadowed to improve performance.

RAM IN DETAIL

What is RAM?
RAM is Random Access Memory. RAM is the area where your computer stores programs that you are currently running and data that you are currently working on.
RAM can be contrasted with disk storage. Disk storage holds all of your programs and all of your data -- whether you are working with them or not. When you turn off your computer, the contents of RAM instantly disappear, but the contents of your disk storage remain unharmed.
RAM is also sometimes contrasted with ROM. ROM (Read-Only Memory) consists of memory chips that store information which cannot be changed. Your motherboard may contain some ROM chips.
Types of RAM
The two main types of RAM are:
● Dynamic RAM (DRAM)
● Static RAM (SRAM)

What is DRAM?
DRAM is Dynamic Random Access Memory.
DRAM is the most common form of RAM.
When someone says that a computer has "one gigabyte of RAM", what they really mean is that the computer has one gigabyte of DRAM.
DRAM is called dynamic because it must constantly be refreshed or it will lose the data which it is supposed to be storing.
Refreshing DRAM consists of reading the contents from the DRAM and immediately writing them back to the DRAM.
DRAM is made up of large arrays of very small capacitors. Each of these capacitors slowly leaks charge, and if the DRAM is not refreshed, eventually one or more of the capacitors will lose enough charge that a 1 becomes a 0 and data corruption occurs.
DRAM is often contrasted with SRAM (Static RAM). SRAM is able to store data as long as power is applied to it, without needing to be refreshed. SRAM is also faster than DRAM. The drawback, of course, is that SRAM is much more expensive than DRAM.
Both DRAM and SRAM lose their contents when the power to them is turned off.
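To make the DRAM refresh idea concrete, here is a toy Python simulation; the leak rate, read threshold, and refresh interval are invented numbers, not real DRAM timings:

    # Toy model of DRAM refresh: each bit is a leaking capacitor.
    # Charge decays every tick; a refresh reads each bit and rewrites
    # it at full charge before any 1 can decay into a 0.
    cells = [1.0, 1.0, 0.0, 1.0]      # stored bits as charge levels
    LEAK = 0.97                        # fraction of charge kept per tick
    THRESHOLD = 0.5                    # below this, a 1 reads back as 0

    def read(cells):
        return [1 if c >= THRESHOLD else 0 for c in cells]

    def refresh(cells):
        return [float(b) for b in read(cells)]   # rewrite at full charge

    for tick in range(40):
        cells = [c * LEAK for c in cells]
        if tick % 8 == 7:              # periodic refresh keeps data intact
            cells = refresh(cells)

    print(read(cells))                 # -> [1, 1, 0, 1]; comment out the
                                       # refresh and the 1s decay to 0s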
What is SRAM?
SRAM is Static RAM.
SRAM is used in small amounts in computers where very fast RAM is required, such as in the L2 cache of many CPUs.
SRAM is often contrasted with DRAM (Dynamic RAM). DRAM is much less expensive than SRAM, but it is usually slower and must constantly be refreshed in order to preserve its contents.
Types of SRAM include:
● Asynchronous Static RAM
● Synchronous Burst Static RAM
● Pipeline Burst Static RAM

What is NVRAM?
NVRAM is Non-Volatile Random Access Memory.
NVRAM differs from DRAM and SRAM in that NVRAM retains the contents of its memory when the power is turned off.
NVRAM is typically implemented using flash memory, although pseudo-NVRAM designs consisting of battery-backed static RAM have also been used.

What is SDRAM?
SDRAM is Synchronous Dynamic RAM.
SDRAM is a variant of DRAM in which the memory speed is synchronized with the clock pulse from the CPU.
This synchronization enables the SDRAM to pipeline read and write requests. Pipelining enables the SDRAM to accept commands at the same time as it is processing other commands.
There are three variants of SDRAM:
● Single Data Rate SDRAM - SDR SDRAM
● Double Data Rate SDRAM - DDR SDRAM
● Double Data Rate 2 SDRAM - DDR2 SDRAM

What is SDR SDRAM?
SDR SDRAM is Single Data Rate SDRAM.
SDR SDRAM is the original SDRAM standard, which has since been replaced by DDR SDRAM.
DDR SDRAM doubles the bandwidth of SDR SDRAM by transferring data twice per cycle, on both edges of the clock signal, implementing burst mode data transfer.

Standard SDR SDRAM DIMMs
SDR SDRAM is normally packaged in DIMM modules.
SDRAM DIMM modules are sold according to clock speed (MHz), bus speed (megatransfers per second), and transfer rate (megabytes per second).
DIMM Module   Chip Type   Clock Speed (MHz)   Bus Speed (MT/s)   Transfer Rate (MB/s)
PC66          10ns        66                  66                 533
PC100         8ns         100                 100                800
PC133         7.5/7ns     133                 133                1,066

What is DDR SDRAM?
DDR SDRAM is Double Data Rate SDRAM.
DDR SDRAM is an improvement over regular SDRAM, also known as SDR SDRAM (Single Data Rate SDRAM).
DDR SDRAM doubles the bandwidth of SDR SDRAM by transferring data twice per cycle, on both edges of the clock signal, implementing burst mode data transfer.
DDR SDRAM is being supplanted by DDR2 SDRAM.
Standard DDR SDRAM DIMMs
DDR SDRAM is normally packaged in DIMM modules.
DIMM Module   Chip Type   Clock Speed (MHz)   Bus Speed (MT/s)   Transfer Rate (MB/s)
PC1600        DDR200      100                 200                1,600
PC2100        DDR266      133                 266                2,133
PC2400        DDR300      150                 300                2,400
PC2700        DDR333      166                 333                2,667
PC3000        DDR366      183                 366                2,933
PC3200        DDR400      200                 400                3,200
PC3500        DDR433      216                 433                3,466
PC3700        DDR466      233                 466                3,733
PC4000        DDR500      250                 500                4,000
PC4300        DDR533      266                 533                4,266

Standards for DDR SDRAM
DDR SDRAM standards are still being developed and improved.
DDR SDRAM Standard   Frequency (MHz)   Voltage
DDR                  400-533           2.5
DDR2                 667-800           1.8
DDR3                 1066 and up       1.5
Higher frequencies enable higher rates of data transfer.
Lower voltages result in less heat radiation and longer battery life for portable computing devices. They also allow greater component density, which allows higher capacity in the same package size.
What is RDRAM?
RDRAM is Rambus DRAM.
RDRAM is a proprietary variant of DRAM which was developed by Rambus, Inc.
RDRAM incorporates technical advantages such as:
● Packet-based command protocol
● Command pipelining
● Data pipelining
● Low-voltage signaling
● Precise clocking
Manufacturers who wish to utilize RDRAM technology must pay royalties to Rambus Inc.

What is video RAM?
Video RAM is specialized RAM which is used on video cards.
Video RAM is dual-ported, which means it can be accessed by two different devices simultaneously. This enables data to be read from video RAM (i.e., sent to the computer monitor) at the same time data is written to video RAM.
What is flash memory?
Flash memory is memory which retains its contents even after power is removed.
Flash memory is a form of EAPROM (Electrically Alterable Programmable Read-Only Memory).
Each bit of data in a flash memory device is stored in a transistor called a floating gate. The floating gate can only be accessed through another transistor, the control gate.
The process the control gate uses to access the floating gate is a field emission phenomenon known as Fowler-Nordheim tunneling. Tunneling allows voltage to flow from the control gate to the floating gate through the dielectric layer of oxide which separates them.
Popular flash memory devices include:
● Sony's Memory Stick
● Compact Flash
● SD Card
● MultiMediaCard (MMC)








What is a Memory Stick?
A Memory Stick is an IC (Integrated Circuit) which is stored in a compact and rugged plastic enclosure. Memory Sticks are designed to store data and to enable the transfer of data between devices equipped with Memory Stick slots.
The Memory Stick standard was introduced by Sony in October of 1998.
Current Memory Stick capacities range up to 512MB.
A Memory Stick is 50mm long, 21.5mm wide, and 2.8mm thick.
An even more compact format, Memory Stick Duo, is 32mm long, 20mm wide, and 1.6mm thick.
The theoretical transfer speed of Memory Stick is 160Mbps.
What is Compact Flash?
A Compact Flash card is an IC (Integrated Circuit) which is stored in a compact and rugged plastic enclosure. Compact Flash cards are designed to store data and to enable the transfer of data between devices equipped with Compact Flash slots.
Current Compact Flash capacities range up to 4GB.
Compact Flash Type I cards are 43mm long, 36mm wide, and 3.3mm thick.
Compact Flash Type II cards are 43mm long, 36mm wide, and 5mm thick.
The theoretical transfer speed of Compact Flash 2.0 is 16MB/sec.
The Compact Flash standard was introduced by SanDisk Corporation in 1994.
Compact Flash Plus (CF+)
Compact Flash Plus (CF+) extends Compact Flash to provide functionality such as micro hard drives, modems, Ethernet cards, 802.11 Wi-Fi cards, serial cards, Bluetooth cards, and more.
This makes Compact Flash the most versatile of the flash media formats.
For more information on Compact Flash, refer to the Compact Flash Specification.

What is a SD Card?
A SD Card (Secure Digital Card) is an IC (Integrated Circuit) which is stored in a compact and rugged plastic enclosure. SD Cards are designed to store data and to enable the transfer of data between devices equipped with SD Card slots.
Current SD Card capacities range up to 1GB.
A SD Card is 32mm long, 24mm wide, and 2.1mm thick.
An even more compact format, the miniSD Card, is 20mm long, 21.5mm wide, and 1.4mm thick.
The theoretical transfer speed of a SD 1.0 Card is 12.5MB/s. SD 1.1 is expected to raise this to 50MB/s.
The SD Card standard was introduced by Toshiba, Matsushita Electric, and SanDisk in 1999.
SDIO
SDIO extends the SD Card standard to include 802.11b WiFi cards, Bluetooth cards, modems, GPS receivers, TV tuners, cameras, digital recorders, scanners, fingerprint scanners and more.
For more information on SD Card, visit the SD Card Association.

What is a MultiMediaCard (MMC)?
A MultiMediaCard (MMC) is an IC (Integrated Circuit) which is stored in a compact and rugged plastic enclosure. MultiMediaCards (MMCs) are designed to store data and to enable the transfer of data between devices equipped with MultiMediaCard (MMC) slots.
The MultiMediaCard (MMC) standard was introduced in November of 1997 by SanDisk and Siemens AG/Infineon Technologies AG.
Current MultiMediaCard (MMC) capacities range up to 2GB.
A MultiMediaCard (MMC) is 32mm long, 24mm wide, and 1.4mm thick.
MultiMediaCards can be used in SD Card readers and writers.
The theoretical transfer speed of a MultiMediaCard is 2.5MB/s.

DIFFERENT TYPES OF RAM

Different RAM Types and its uses
Intro
The type of RAM doesn't matter nearly as much as how much of it you've got, but using plain old SDRAM memory today will slow you down. There are three main types of RAM: SDRAM, DDR and Rambus DRAM.
SDRAM (Synchronous DRAM)
Almost all systems used to ship with 3.3 volt, 168-pin SDRAM DIMMs. SDRAM is not an extension of older EDO DRAM but a new type of DRAM altogether. SDRAM started out running at 66 MHz, while older fast page mode DRAM and EDO max out at 50 MHz. SDRAM is able to scale to 133 MHz (PC133) officially, and unofficially up to 180MHz or higher. As processors get faster, new generations of memory such as DDR and RDRAM are required to get proper performance.
DDR (Double Data Rate SDRAM)
DDR basically doubles the rate of data transfer of standard SDRAM by transferring data on the up and down tick of a clock cycle. DDR memory operating at 333MHz actually operates at 166MHz * 2 (aka PC333 / PC2700) or 133MHz*2 (PC266 / PC2100). DDR is a 2.5 volt technology that uses 184 pins in its DIMMs. It is incompatible with SDRAM physically, but uses a similar parallel bus, making it easier to implement than RDRAM, which is a different technology.
Rambus DRAM (RDRAM)
Despite its higher price, Intel has given RDRAM its blessing for the consumer market, and it will be the sole choice of memory for Intel's Pentium 4. RDRAM is a serial memory technology that arrived in three flavors: PC600, PC700, and PC800. PC800 RDRAM has double the maximum throughput of old PC100 SDRAM, but a higher latency. RDRAM designs with multiple channels, such as those in Pentium 4 motherboards, are currently at the top of the heap in memory throughput, especially when paired with PC1066 RDRAM memory.

DIMMs vs. RIMMs
DRAM comes in two major form factors: DIMMs and RIMMs.
DIMMs are 64-bit components, but if used in a motherboard with a dual-channel configuration (like with an Nvidia nForce chipset) you must pair them to get maximum performance. So far there aren't many DDR chipsets that use dual channels. Typically, if you want to add 512 MB of DIMM memory to your machine, you just pop in a 512 MB DIMM if you've got an available slot. DIMMs for SDRAM and DDR are different, and not physically compatible. SDRAM DIMMs have 168 pins and run at 3.3 volts, while DDR DIMMs have 184 pins and run at 2.5 volts.
RIMMs use only a 16-bit interface but run at higher speeds than DDR. To get maximum performance, Intel RDRAM chipsets require the use of RIMMs in pairs over a dual-channel 32-bit interface. You have to plan more when upgrading and purchasing RDRAM.






From the top: SIMM, DIMM and SODIMM memory modules
Memory Speed
SDRAM initially shipped at a speed of 66MHz. As memory buses got faster, it was pumped up to 100MHz, and then 133MHz. The speed grades are referred to as PC66 (unofficially), PC100 and PC133 SDRAM respectively. Some manufacturers are shipping a PC150 speed grade. However, this is an unofficial speed rating, and of little use unless you plan to overclock your system.
DDR comes in PC1600, PC2100, PC2700 and PC3200 DIMMs. A PC1600 DIMM is made up of PC200 DDR chips, while a PC2100 DIMM is made up of PC266 chips. PC2700 uses PC333 DDR chips and PC3200 uses PC400 chips that haven't gained widespread support. Go for PC2700 DDR. It is about the cost of PC2100 memory and will give you better performance.
RDRAM comes in PC600, PC700, PC800 and PC1066 speeds. Go for PC1066 RDRAM if you can find it. If you can't, PC800 RDRAM is widely available.
CAS Latency
SDRAM comes with latency ratings or "CAS (Column Address Strobe) latency" ratings. Standard PC100 / PC133 SDRAM comes in CAS 2 or CAS 3 speed ratings. The lower latency of CAS 2 memory will give you more performance. It also costs a bit more, but it's worth it.
DDR memory comes in CAS 2 and CAS 2.5 ratings, with CAS 2 costing more and performing better.
RDRAM has no CAS latency ratings, but may eventually come in 32-bank and 4-bank forms, with 32-bank RDRAM costing more and performing better. For now, it's all 32-bank RDRAM.
Understanding Cache
Cache Memory is fast memory that serves as a buffer between the processor and main memory. The cache holds data that was recently used by the processor and saves a trip all the way back to slower main memory. The memory structure of PCs is often thought of as just main memory, but it's really a five or six level structure:
The first two levels of memory are contained in the processor itself, consisting of the processor's small internal memory, or registers, and L1 cache, which is the first level of cache, usually contained in the processor.
The third level of memory is the L2 cache, usually contained on the motherboard. However, the Celeron chip from Intel actually contains 128K of L2 cache within the form factor of the chip. More and more chip makers are planning to put this cache on board the processor itself. The benefit is that it will then run at the same speed as the processor, and cost less to put on the chip than to set up a bus and logic externally from the processor.
The fourth level is referred to as L3 cache. This cache used to be the L2 cache on the motherboard, but now that some processors include L1 and L2 cache on the chip, it becomes L3 cache. Usually, it runs slower than the processor, but faster than main memory.
The fifth level (or fourth if you have no "L3 cache") of memory is the main memory itself.
The sixth level is a piece of the hard disk used by the Operating System, usually called virtual memory. Most operating systems use this when they run out of main memory, but some use it in other ways as well.
This six-tiered structure is designed to efficiently speed data to the processor when it needs it, and also to allow the operating system to function when levels of main memory are low. You might ask, "Why is all this necessary?" The answer is cost. If there were one type of super-fast, super-cheap memory, it could theoretically satisfy the needs of this entire memory architecture. This will probably never happen since you don't need very much cache memory to drastically improve performance, and there will always be a faster, more expensive alternative to the current form of main memory.
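To see why even a small, fast cache level pays off, here is a rough Python sketch of the standard average-access-time calculation; the 5ns and 60ns latencies are invented for illustration:

    # Average access time for a cache in front of main memory:
    #   AMAT = hit_time + miss_rate * miss_penalty
    cache_ns, memory_ns = 5.0, 60.0    # hypothetical latencies
    for hit_rate in (0.0, 0.80, 0.95, 0.99):
        amat = cache_ns + (1.0 - hit_rate) * memory_ns
        print(f"hit rate {hit_rate:.0%}: average access {amat:.1f} ns")
    # Even a modest hit rate pulls the average close to cache speed,
    # which is why a small amount of cache drastically improves performance.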
Memory Redundancy
One important aspect to consider in memory is what level of redundancy you want. There are a few different levels of redundancy available in memory. Depending on your motherboard, it may support all or some of these types of memory:
The cheapest and most prevalent level of redundancy is non-parity memory. When you have non-parity memory in your machine and it encounters a memory error, the operating system will have no way of knowing and will most likely crash; it could corrupt data as well, with no way of telling the OS. This is the most common type of memory, and unless specified otherwise, that's what you're getting. It works fine for most applications, but I wouldn't run life support systems on it.
The second level of redundancy is parity memory (also called true parity). Parity memory has extra chips that act as parity chips. Thus, the chip will be able to detect when a memory error has occurred and signal the operating system. You'll probably still crash, but at least you'll know why.
The third level of redundancy is ECC (Error Checking and Correcting). This requires even more logic and is usually more expensive. Not only does it detect memory errors, but it also corrects single-bit errors. If you have a 2-bit error, you will still have some problems. Some motherboards enable you to use ECC memory.
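A minimal Python sketch of the parity idea described above (even parity over one byte; real parity memory does this in hardware, with one extra bit stored per byte):

    # Even parity: the extra bit makes the total number of 1s even.
    def parity_bit(byte):
        return bin(byte).count("1") % 2

    stored = 0b10110010
    stored_parity = parity_bit(stored)

    corrupted = stored ^ 0b00000100    # a single bit flips in memory
    if parity_bit(corrupted) != stored_parity:
        print("memory error detected")  # parity detects, cannot correct
    # ECC memory stores more check bits (e.g. a Hamming code), enough to
    # locate and correct a single-bit error rather than just detect it.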
Older memory types
Fast Page Mode DRAM
Fast Page Mode DRAM is plain old DRAM as we once knew it. The problem with standard DRAM was that it maxes out at about 50 MHz.
EDO DRAM
EDO DRAM gave people up to 5% system performance increase over DRAM. EDO DRAM is like FPM DRAM with some cache built into the chip. Like FPM DRAM, EDO DRAM maxes out at about 50 MHz. Early on, some system makers claimed that if you used EDO DRAM you didn't need L2 cache in your computer to get decent performance. They were wrong. It turns out that EDO DRAM works along with L2 cache to make things even faster, but if you lose the L2 cache, you lose a lot of speed.

RDRAM - RIMM
Rambus, Inc., in conjunction with Intel, created a new technology, Direct RDRAM, to increase the access speed for memory. RIMMs appeared on motherboards sometime during 1999. The in-line memory modules are called RIMMs. They have 184 pins and provide 1.6 GB per second of peak bandwidth in 16-bit chunks. As chip speed gets faster, so does access to memory and the amount of heat produced. An aluminum sheath, called a heat spreader, covers the module to protect the chips from overheating.

SO RIMM
A SO RIMM is similar in appearance to a SODIMM but uses Rambus technology.

Technology

DRAM (Dynamic Random Access Memory)
One of the most common types of computer memory (RAM). It can only hold data for a short period of time and must be refreshed periodically. DRAMs are measured by storage capability and access time.

Storage is rated in megabytes (8 MB, 16 MB, etc).

Access time is rated in nanoseconds (60ns, 70ns, 80ns, etc.) and represents the amount of time it takes to save or return information. With a 60ns DRAM, it requires 60 billionths of a second to save or return information. The lower the access time, the faster the memory operates.

DRAM chips require two CPU wait states for each execution.

Can only execute either a read or write operation at one time.

FPM (Fast Page Mode)
At one time, this was the most common and was often just referred to as DRAM. It offered faster access to data located within the same row.

EDO (Extended Data Out)
Newer than DRAM (1995) and requires only one CPU wait state. You can gain a 10 to 15% improvement in performance with EDO memory.

BEDO (Burst Extended Data Out)
A step up from the EDO chips. It requires zero wait states and provides at least another 13 percent increase in performance.

SDRAM (Synchronous DRAM)
Introduced in late 1996, SDRAM synchronizes itself with the timing of the CPU (it is still dynamic RAM and must be refreshed). It also takes advantage of interleaving and burst mode functions. SDRAM is faster and more expensive than DRAM. It comes in speeds of 66, 100, 133, 200, and 266MHz.

DDR SDRAM (Double Data Rate Synchronous DRAM)
Allows transactions on both the rising and falling edges of the clock cycle. It has a bus clock speed of 100MHz and will yield an effective data transfer rate of 200MHz.

Direct Rambus
Extraordinarily fast. By using a doubled clock, it provides a transfer rate of up to 1.6GB/s, yielding an effective 800MHz speed over a narrow 16-bit bus.

Cache RAM
This is where SRAM is used for storing information required by the CPU. It is in kilobyte sizes of 128KB, 256KB, etc.

Other Memory Types
VRAM (Video RAM)
VRAM is a video version of FPM and is most often used in video accelerator cards. Because it has two ports, it provides the extra benefit over DRAM of being able to execute simultaneous read and write operations. One channel is used to refresh the screen and the other manages image changes. VRAM tends to be more expensive.

Flash Memory
This is a solid-state, nonvolatile, rewritable memory that functions like RAM and a hard disk combined. If power is lost, all data remains in memory. Because of its high speed, durability, and low voltage requirements, it is ideal for digital cameras, cell phones, printers, handheld computers, pagers and audio recorders.

Shadow RAM
When your computer starts up (boots), minimal instructions for performing the startup procedures and video controls are stored in ROM (Read Only Memory) in what is commonly called BIOS. ROM executes slowly. Shadow RAM allows for the capability of moving selected parts of the BIOS code from ROM to the faster RAM memory.

Memory (RAM) and its influence on performance
It's been proven that adding more memory to a computer system increases its performance. If there isn't enough room in memory for all the information the CPU needs, the computer has to set up what's known as a virtual memory file. In so doing, the CPU reserves space on the hard disk to simulate additional RAM. This process, referred to as "swapping", slows the system down. In an average computer, it takes the CPU approximately 200ns (nanoseconds) to access RAM compared to 12,000,000ns to access the hard drive. To put this into perspective, this is equivalent to what's normally a 3 1/2 minute task taking 4 1/2 months to complete!
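A quick back-of-the-envelope check of those figures in Python:

    # RAM vs. hard disk access time, using the numbers quoted above.
    ram_ns, disk_ns = 200, 12_000_000
    ratio = disk_ns / ram_ns
    print(ratio)                        # -> 60000x slower

    task_minutes = 3.5
    slowed = task_minutes * ratio       # scale a 3.5-minute task by 60000x
    print(slowed / (60 * 24 * 30))      # -> roughly 4.9 months (30-day months)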
Why does the RAM memory influence the computer performance?
At first, technically speaking, RAM does not have any direct influence on the performance of the computer's processor: RAM cannot make the processor work faster, that is, RAM does not increase the processing performance of the processor.
So, what is the relationship between RAM and performance? The story is not as simple as it seems, and we will need to explain a little more about how the computer works for you to understand the relationship between RAM and the performance of the computer.
The computer's processor fetches the instructions it executes from RAM. If those instructions are not stored in RAM, they have to be transferred from the hard disk (or from any other storage system, such as floppy disks, CD-ROMs, or Zip disks) to RAM - the well-known process of "loading" a program.
So, a greater amount of RAM means that more instructions fit into that memory and, therefore, bigger programs can be loaded at once. All present operating systems work with the multitasking concept, where we can run more than one program at once. You can, for example, have a word processor and a spreadsheet open ("loaded") at the same time in RAM. However, depending on the amount of RAM your computer has, it is possible that those programs have too many instructions and, consequently, do not "fit" at the same time (or even alone, depending on the program) in RAM.
In principle, if you want the computer to load a program and it does not "fit" in RAM because there is too little of it installed or because RAM is already full, the operating system would have to show a message like "Insufficient Memory".
But that does not happen, because of a feature that all processors since the 386 have had, called virtual memory. With this feature, the operating system creates a file on the hard disk called the swap file, which is used to store RAM data. So, if you attempt to load a program that does not fit in RAM, the operating system sends to the swap file parts of programs that are presently stored in RAM but are not being accessed, freeing space in RAM and allowing the program to be loaded. When you need to access a part of a program that the system has stored on the hard disk, the opposite process happens: the system stores on the disk parts of memory that are not in use at the time and transfers the original memory content back.

The problem is that the hard disk is a mechanical system, not an electronic one. This means that the data transfer between the hard disk and RAM is much slower than the data transfer between the processor and RAM. To give an idea of the magnitude, the processor communicates with RAM typically at a transfer rate of 800 MB/s (100 MHz bus), while hard disks transfer data at rates such as 33 MB/s, 66 MB/s, and 100 MB/s, depending on their technology (DMA/33, DMA/66, and DMA/100, respectively).
So, every time the computer exchanges data between memory and the swap file on the hard disk, you notice a slowdown, since this exchange is not immediate.
When we install more RAM in the computer, the probability of "running out" of RAM and needing to exchange data with the hard disk swap file is smaller and, therefore, you notice that the computer is faster than before.
To make this clearer, suppose your computer has 64 MB of RAM and all the programs that are loaded (open) at the same time use 100 MB. This means that the system is using the virtual memory feature, making exchanges with the hard disk. However, if that same computer had 128 MB, it would not be necessary to make any exchanges with the hard disk (with the same programs loaded), making the computer faster.
The more peripherals you add to a computer, or the more advanced applications you ask it to perform, the more RAM it needs to operate smoothly.
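A toy Python sketch of the swapping mechanism just described, using a least-recently-used policy and a three-page RAM; real operating systems are far more sophisticated, and every number here is invented:

    # Toy demand paging: RAM holds a fixed number of pages; loading a
    # page into full RAM evicts the least recently used page to "swap".
    from collections import OrderedDict

    RAM_PAGES = 3
    ram = OrderedDict()                # page -> contents, in LRU order
    swap_file = {}

    def touch(page):
        if page in ram:
            ram.move_to_end(page)      # fast path: already in RAM
            return "hit"
        if len(ram) >= RAM_PAGES:      # RAM full: swap out the LRU page
            victim, data = ram.popitem(last=False)
            swap_file[victim] = data
        ram[page] = swap_file.pop(page, f"<{page}>")  # swap in / load
        return "page fault (slow: the disk is involved)"

    for p in ["A", "B", "C", "A", "D", "B"]:
        print(p, touch(p))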
Virtual Memory and its influences on performance

While virtual memory makes it possible for computers to more easily handle larger and more complex applications, as with any powerful tool, it comes at a price. The price in this case is one of performance — a virtual memory operating system has a lot more to do than an operating system that is not capable of virtual memory. This means that performance will never be as good with virtual memory as with the same application that is 100% memory-resident.
However, this is no reason to throw up one's hands and give up. The benefits of virtual memory are too great to do that. And, with a bit of effort, good performance is possible. The thing that must be done is to look at the system resources that are impacted by heavy use of the virtual memory subsystem.
Worst Case Performance Scenario
For a moment, take what you have read earlier, and consider what system resources are used by extremely heavy page fault and swapping activity:
• RAM -- It stands to reason that available RAM will be low (otherwise there would be no need to page fault or swap).
• Disk -- While disk space would not be impacted, I/O bandwidth would be.
• CPU -- The CPU will be expending cycles doing the necessary processing to support memory management and setting up the necessary I/O operations for paging and swapping.
The interrelated nature of these loads makes it easy to see how resource shortages can lead to severe performance problems. All it takes is:
• A system with too little RAM
• Heavy page fault activity
• A system running near its limit in terms of CPU or disk I/O
At this point, the system will be thrashing, with performance rapidly decreasing.

Best Case Performance Scenario
At best, the virtual memory subsystem will present a minimal additional load to a well-configured system:
• RAM -- Sufficient RAM for all working sets with enough left over to handle any page faults
• Disk -- Because of the limited page fault activity, disk I/O bandwidth would be minimally impacted
• CPU -- The majority of CPU cycles will be dedicated to actually running applications, instead of memory management
From this, the overall point to keep in mind is that the performance impact of virtual memory is minimal when it is used as little as possible. This means that the primary determinant of good virtual memory subsystem performance is having enough RAM.

KEYBOARD CONSTRUCTION AND OPERATION

Keyboard Construction and Operation
In general terms, the operation of a PC keyboard is fairly simple: you press keys on the keyboard, causing an electrical connection to be made. This causes the keyboard to send a signal to the PC, telling it what key or keys were pressed. Fairly simple stuff, at least in theory.
You might be surprised, however, just how much complexity is involved in allowing these signals to be sent to the PC--there is a lot going on "behind the scenes". You also might not realize just how many different ways there are to make the central components in the keyboard, or how many design issues must be taken into account in making a good keyboard. These design characteristics are what determine the critical comfort and feel factors that make you prefer one keyboard over another. They also dictate the durability and hence longevity of the keyboard.
In this section I describe the construction and operation of the keyboard. I start with sections covering the two most important sets of parts in the keyboard: the keycaps and the keyswitches. I then describe the other physical components that make up the keyboard. Finally, I talk about the actual operation of the keyboard, including the internal circuitry of the keyboard and how it interfaces to the rest of the PC.


When I disassembled the original PC/XT keyboard I got from a friend,
upon removing the bottom circuit board, all the little spring-loaded clips
that are used to provide tactile feedback for each key sprang loose. This
is one reason why I don't recommend disassembling your keyboard. :^)
Warning: As you read this description of keyboard technology, you might be tempted to disassemble your own keyboard to see the parts and discover how they work. If you are very careful and proceed slowly, you will probably be OK. Be forewarned, however, that keyboards in general are the "Humpty Dumpty" of PC components: some types, if you disassemble them too far, can come apart into hundreds of small pieces, and well, "all the king's horses and all the king's men…" You know the drill. :^) Even something as simple as removing a special keycap can lead to great frustration when trying to replace it.

HOW DOES AN OPTICAL MOUSE WORK

How do Optical Mice Work?
We have all seen the fantastic progression in mouse technology: the introduction of optical technology and the elimination of the mouse ball. You may, however, be wondering how exactly an optical mouse works. There are variations in the technologies from different manufacturers, but the principles are all the same.
The "Eye"
The main component of the optical mouse is the optical "eye". Microsoft were the first to come up with this technology and named it the IntelliEye. What the IntelliEye does is scan the surface under the mouse. The IntelliEye itself is a single LED (Light Emitting Diode) whose light it bounces off the surface. It also has a very tiny camera which takes pictures of the surface. The original mouse by Microsoft scanned the surface 1,500 times a second; they have progressed since then.
The DSP (Digital Signal Processor)
The digital signal processor receives the images that have been taken by the camera and analyses them for differences. It can pick up very fine differences in the pictures, and from this it can determine how far the mouse has moved across your desk and at what speed. Coordinates are then sent to the computer, which moves the cursor on the screen. The DSP can detect patterns and analyse them at a very high speed. The original IntelliMouse Explorer from Microsoft had a DSP running at 18 MIPS (Million Instructions Per Second). This kind of speed is needed because if the mouse did not react as quickly as you move, the cursor would make very bad, jerky movements on the screen.
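The pattern-matching step the DSP performs can be sketched in Python: compare the new frame against shifted copies of the previous frame and keep the shift that matches best. Real mouse DSPs do this in dedicated hardware at thousands of frames per second; the tiny 3x3 "frames" below are invented:

    # Estimate mouse movement by finding the (dx, dy) shift that best
    # aligns two successive sensor frames (sum of absolute differences).
    def best_shift(prev, curr, max_shift=1):
        h, w = len(prev), len(prev[0])
        best = (0, 0, float("inf"))
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                sad = 0
                for y in range(h):
                    for x in range(w):
                        py, px = y + dy, x + dx
                        if 0 <= py < h and 0 <= px < w:
                            sad += abs(prev[py][px] - curr[y][x])
                if sad < best[2]:
                    best = (dx, dy, sad)
        return best[:2]                # (dx, dy) the surface moved

    prev = [[0, 9, 0],
            [0, 9, 9],
            [0, 0, 0]]
    curr = [[9, 0, 0],                 # same pattern, shifted left by one
            [9, 9, 0],
            [0, 0, 0]]
    print(best_shift(prev, curr))      # -> (1, 0)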
Problematic Surfaces
The camera incorporated in these mice does have certain limitations, however, which are always being worked on. The first is the type of surface that you use the mouse on. Surfaces that can cause problems are glass, mirrors, and some 3-D mouse pads. The reason the mouse has problems with mirrors is that, because a camera is used, the image that is sent to the DSP is always a reflection and so is rendered useless.
Glass is another problematic surface, but for a different reason. Whereas a mirror will reflect the image, glass is a near-perfect transparent material, so it doesn't have enough imperfections or patterns for the DSP to pick up on. Obviously with glass there would be an image below it to analyse, but if your table were made of glass then the surface below it would be far too distant to analyse. If you are using a glass table you will simply need a mouse mat.
Dual Sensors
With higher and higher expectations of technology, the introduction of dual sensors has arrived. Working together on the same surface, two sensors can be even more accurate when it comes to analysing patterns and movement. These sensors are positioned at an angle from each other to give two completely different views of the surface you are working on. This technology has been used effectively to allow faster movements across the desktop with your mouse, as well as increased precision for very slow and precise movements like those used in drawing programs or graphic design.
Sensor Size
The size of the sensor, or to put it more accurately, the size of the area which is scanned, can make a difference in the accuracy of the tracked movements. As the Logitech website states: if you are looking at an image through a window, the bigger the window, the more detail you can extract from the image. It's the same with the optical mouse sensor. If you are scanning twice the area of the desktop compared with another mouse, the larger image will produce greater accuracy of movement, as it has more patterns to pick up on and track.

KEYBOARDS AND MICE

Mice and Keyboards

Troubleshooting Mice

-Troubleshooting the mouse comes down to replacement in most cases. The mouse can and does go bad. This is usually due to the insides of the mouse getting gunked up. Most of your problems here are due to poor care of the mouse. It's always a good idea to just clean the insides with a Q-tip and some alcohol.

-Also note that if your mouse is functioning incorrectly, it may be the cable itself, the connector, the keypads, or just worn parts that are causing you trouble. Sometimes just cleaning it can renew the life of the mouse. Before jumping to conclusions about your mouse, go ahead and clean it first.

Clean Rodent

-Shut system down and unplug mouse from port.

-While unplugging, look and make sure no pins are broken or missing on your connections.

-Turn mouse over and twist ball plate. You may need to push down on the plate while turning.

-Remove ball and clean with alcohol.

-With a Q-tip dipped in alcohol, clean the roller bars. Clean the interior of the mouse and allow it to dry.

-Place ball back in and put the ball plate back together securely.

Clean Trackball

-Cleaning a trackball is the same as above. Simply twist the ring on the base of the trackball and remove the ball itself.

Preventive Maintenance

-Try to keep your work space clean. Clean your mouse pad and replace it as needed. Mouse pads have a tendency to grow fur after a while and to shed nylon fragments into the mouse. Make a habit of cleaning the mouse regularly and it will last for years.

Conclusion

-Hopefully this helps your mouse operation. If not, replacing the mouse is recommended. They are cheap these days, so don't worry.

-Another common source of aggravation is the keyboard. Keys tend to stick or not function at all. What can you do to fix this? You can clean the keyboard and pray. Or for $10.00 you can replace it.

Keyboard

-The keyboard seems to be one of those parts that goes bad real quick when you spill soda on it. The best thing to do is keep all liquids off the desktop. Keyboards in general work well and take a good beating. Sometimes keys and keypad sensors will go bad. This is just normal wear and tear on the keyboard.

-Now if you spilled a liquid on the keyboard, you may want to remove one key at a time and clean underneath them. This will take a while. If this doesn't sound like a good idea, then you can clean it with air: use some compressed air and blow under the keys. This may not do much good if the liquid is soda or something else with sugar in it. Yes, it will dry, but the keyboard will have a tendency to stick. If a key is sticking, you can pry the key off with some finesse and clean it with alcohol.

-Another common problem is the keyboard not sending a signal to the computer. Check the connection first. If this seems to be okay, then the keyboard itself may be faulty.

-If your keyboard is a constant problem, simply replace it and forget about it. A typical Windows keyboard costs $10-20. Make sure to get one with a PS/2 connector when possible.

SATA HDD NOT DETECTED

I'm using a Gigabyte GA-7VT600-PL motherboard (Rev 1.0), Award BIOS version F7 (one short of the newest version F8, which caused some CD-ROM loading problems for my Acronis True Image, so I reverted to F7), 1GB OCZ DDR RAM (1x256MB PC3200, 1x256MB PC3200, 1x512MB PC3200), Enermax EG-365P-VE 350W PSU, Lite-On SOHW 832S DVDRW, HP 8200+ CD Writer, 160GB Seagate SATA-II.5 7200.9 HDD, ATI Radeon 9800 Pro AGP, Soundblaster Live! 5.1 PCI. The system has been rock stable for over a year with 2 PATA HDDs.

The problem started as soon as I installed the 160GB SATA HDD, replacing my 2 PATA drives (30GB Maxtor, 40GB Maxtor). The setup right now has the single SATA drive attached to the SATA0 socket on the motherboard (using the Gigabyte-supplied SATA cable that has been sitting in my closet for about 2 years now) and the two optical drives attached to the IDE1 channel (as always). I went into the BIOS and set INTEGRATED PERIPHERALS - OnChip IDE Channel0 to DISABLED (since nothing was now attached to the channel), set OnChip IDE Channel1 to ENABLED, OnChip Serial ATA = ENABLED, SATA Mode = IDE. Under ADVANCED BIOS FEATURES there is a Hard Disk Boot Priority, however only "Bootable Add-In Cards" was mentioned there, so I skipped it. Back under STANDARD CMOS Features there were now IDE Channel 2 and 3 Master entries, however their setting was NONE. This is how the IDE channels read:

IDE Channel 0 Master: None
IDE Channel 0 Slave: None
IDE Channel 1 Master: Lite-On DVDRW SOHW 832S
IDE Channel 1 Slave: HP CDRW 8200+
IDE Channel 2 Master: None
IDE Channel 3 Master: None

I even tried IDE Auto-Detection for IDE Channels 2 and 3. It came back as NONE before, but now each time I use it the BIOS appears to crash -- it says DETECTING HARD DRIVES, and after 15 to 20 secs the red information window disappears, leaving a blank blue gap where the words "IDE Auto-Detection", "Extended IDE Drive - Auto", and "Access Mode - Auto" used to be. I'm unable to use the BIOS any further and must reset.

Man what is going on here?!

While the PC was on I touched the 160GB Seagate SATA drive, and yup, it's running and warm, so power is going to it. The SATA cable looks fine from motherboard to drive. Again, I'm using the SATA0 socket on the mobo. The SATA cables were not forced in improperly (I did see the keying and matched it correctly). I also moved on and tried using Seagate's SeaTools drive utility, but it too is unable to recognize that a SATA drive even exists.

Please, anyone have any ideas? In the meantime I will try upgrading to the final BIOS update for my mobo to see if this helps. I'll even reset my BIOS settings to Optimal/Default.

Thanks.

*UPDATE*

OK, I tried upgrading the BIOS to version F8 (latest - Oct 2004) and even tried loading the Optimized/Fail-Safe BIOS settings, and still everything mentioned above remained the same. Can someone please tell me how to resolve this?

In the meantime I will try reattaching the SATA connectors and switching from the SATA0 to the SATA1 socket on the mobo.

HDD RAID LEVELS

What is RAID?
RAID (Redundant Array of Independent Disks) is a set of technology standards for teaming disk drives to improve fault tolerance and performance.
RAID Levels
Level   Name
0       Striping
1       Mirroring
2       Parallel Access with Specialized Disks
3       Synchronous Access with Dedicated Parity Disk
4       Independent Access with Dedicated Parity Disk
5       Independent Access with Distributed Parity
6       Independent Access with Double Parity
Choosing a RAID Level
Each RAID level represents a set of trade-offs between performance, redundancy, and cost.
RAID 0 -- Optimized for Performance
RAID 0 uses striping to write data across multiple drives simultaneously. This means that when you write a 5GB file across 5 drives, 1GB of data is written to each drive. Parallel reading of data from multiple drives can have a significant positive impact on performance.
The trade-off with RAID 0 is that if one of those drives fails, all of your data is lost and you must restore from backup.
RAID 0 is an excellent choice for cache servers, where the actual data being stored is of little value, but performance is very important.
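A minimal Python sketch of how striping distributes a write (the chunk size and drive count here are arbitrary):

    # RAID 0 striping: consecutive chunks go to consecutive drives.
    def stripe(data, n_drives, chunk=4):
        drives = [bytearray() for _ in range(n_drives)]
        for i in range(0, len(data), chunk):
            drives[(i // chunk) % n_drives] += data[i:i + chunk]
        return drives

    drives = stripe(b"ABCDEFGHIJKLMNOPQRST", 5)
    for n, d in enumerate(drives):
        print(f"drive {n}: {d}")       # each drive holds 1/5 of the data
    # Reads can pull from all five drives in parallel, but losing any
    # one drive destroys the whole file.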
RAID 1 -- Optimized for Redundancy
RAID 1 uses mirroring to write data to multiple drives. This means that when you write a file, the file is actually written to two disks. If one of the disks fails, you simply replace it and rebuild the mirror.
The tradeoff with RAID 1 is cost. With RAID 1, you must purchase double the amount of storage space that your data requires.
RAID 5 -- A Good Compromise
RAID 5 stripes data across multiple disks. RAID 5, however, adds a parity check bit to the data. This slightly reduces available disk capacity, but it also means that the RAID array continues to function if a single disk fails. In the event of a disk failure, you simply replace the failed disk and keep going.
The tradeoffs with RAID 5 are a small performance penalty in write operations and a slight decrease in usable storage space.
RAID 0+1 -- Optimized for Performance and Redundancy
RAID 0+1 combines the performance of RAID 0 with the redundancy of RAID 1.
To build a RAID 0+1 array, you first build a set of RAID 1 mirrored disks and you then combine these disk sets in a RAID 0 striped array.
A RAID 0+1 array can survive the loss of one disk from each mirrored pair. RAID 0+1 cannot survive the loss of two disks in the same mirrored pair.
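That survival rule is easy to express in code. A small Python sketch, assuming the array is described as a list of mirrored pairs (the disk names are hypothetical):

    # RAID 0+1 survives as long as every mirrored pair still has at
    # least one working disk; the stripe needs all pairs present.
    def array_survives(mirrored_pairs, failed):
        return all(any(disk not in failed for disk in pair)
                   for pair in mirrored_pairs)

    pairs = [("disk0", "disk1"), ("disk2", "disk3")]
    print(array_survives(pairs, {"disk0", "disk2"}))  # True: one per pair
    print(array_survives(pairs, {"disk0", "disk1"}))  # False: a whole pair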













What is striping?
Striping is the automated process of writing data across multiple drives simultaneously. Striping is used to increase the performance of disk reads.
When using striping, if you write a 5GB file across 5 drives, 1GB of data is written to each drive. Parallel reading of data from multiple disks can have a significant positive impact on performance, because the physical disk drive is most often the performance bottleneck.
Striping is used in RAID Level 0.
If one drive in a striped set fails, all of the data in the stripe set is lost. The data must be restored from backup. Because of this, striping is often combined with the use of parity (RAID 5) or mirroring (RAID 0+1).
Performance Problems Caused by Striping
Striping, when combined with parity, can have a negative performance impact on write operations, as the sketch after this list illustrates. This is because some of the data used to calculate parity may already be stored on the disk. This means that the process to write to the array is:
1. Read the existing data
2. Calculate the parity
3. Write the new parity
4. Write the new data
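A minimal Python sketch of the parity arithmetic behind these steps (two data blocks plus one parity block; real RAID 5 rotates the parity across all the disks):

    # RAID 5 parity is a bytewise XOR across the data disks; updating
    # one block means recomputing parity (the read-modify-write above).
    def xor_blocks(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    d0, d1 = b"\x0f\x0f", b"\xf0\x01"
    parity = xor_blocks(d0, d1)             # written to the parity disk

    # Disk holding d1 fails: XOR the survivors to rebuild it.
    print(xor_blocks(d0, parity) == d1)     # -> True

    # Read-modify-write of d0: new_parity = parity ^ old_d0 ^ new_d0
    new_d0 = b"\xff\x00"
    parity = xor_blocks(xor_blocks(parity, d0), new_d0)
    print(xor_blocks(new_d0, parity) == d1) # d1 is still recoverable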

What is mirroring?
Mirroring is the automated process of writing data to two drives simultaneously. Mirroring is used to provide redundancy.
If one drive fails, the redundant drive will continue to store the data and provide access to it. The failed drive can then be replaced and the drive set can be re-mirrored.
Mirroring is used in RAID Level 1.
Software Mirroring vs. Hardware Mirroring
Disk mirroring can be implemented entirely in software. Software mirroring can be less expensive, but it is also slower. Software mirroring requires the host computer to write the mirrored data twice.
Disk mirroring can be implemented in hardware on the host I/O controller. The burden of writing each bit of data twice is placed upon the I/O controller, which is specifically designed for it.
Disk mirroring can also be implemented in hardware on an external storage device, such as a RAID array. In this case, mirroring is completely removed from the host's responsibility.
Hot Swappable Hardware
If the hardware is hot swappable, it is possible to replace a failed disk without powering off the computer. You take out the old drive and put in the new drive with no service outage.
If the hardware does not support hot-swap, you must schedule a service outage, shut down and power-off the system, and then replace the drive.
Mirroring vs. Duplexing
Mirroring is the technique of using redundant disks. Duplexing is mirroring, with the addition of redundant host I/O controllers.
If you are using mirroring and your host I/O controller fails, you will not be able to access your data until you replace the host I/O controller. With duplexing, your data will still be available through the redundant controller.

HDD GUIDE

HARD DISK DRIVE GUIDE
How a Hard Disk Drive Works
Hard Disk Assembly
The purpose of this article is to provide just the right balance of technical detail to convey a good insight into the innards of a hard disk drive and how it basically works, without burdening the reader with excessive technical detail.
HARD DISK ASSEMBLY. A hard disk drive consists of a motor, spindle, platters, read/write heads, actuator, frame, air filter, and electronics. The frame mounts the mechanical parts of the drive and is sealed with a cover. The sealed part of the drive is known as the Hard Disk Assembly or HDA. The drive electronics usually consists of one or more printed circuit boards mounted on the bottom of the HDA.
A head and platter can be visualized as being similar to the record and playback head on an old phonograph, except that the data structure of a hard disk is arranged into concentric circles instead of in a spiral as it is on a phonograph record (and CD-ROM). A hard disk has one or more platters and each platter usually has a head on each of its sides. The platters in modern drives are made from glass or ceramic to avoid the unfavorable thermal characteristics of the aluminum platters found in older drives. A layer of magnetic material is deposited/sputtered on the surface of the platters, and those in most of the drives I've dissected have shiny, chrome-like surfaces. The platters are mounted on the spindle, which is turned by the drive motor. Most current IDE hard disk drives spin at 5,400, 7,200, or 10,000 RPM, and 15,000 RPM drives are emerging.
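Spindle speed translates directly into rotational latency: on average, the drive waits half a revolution for the right sector to come around under the head. A quick Python check for the speeds just mentioned:

    # Average rotational latency = half a revolution, in milliseconds.
    for rpm in (5400, 7200, 10000, 15000):
        ms_per_rev = 60_000 / rpm
        print(f"{rpm:>6} RPM: {ms_per_rev / 2:.2f} ms average latency")
    # 5400 -> 5.56 ms, 7200 -> 4.17 ms, 10000 -> 3.00 ms, 15000 -> 2.00 ms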
Heads
HEADS. The heads (or Winchester sliders) are spring-loaded airfoils and actually fly like an airplane above (or below) the surface of the platters at a distance measured in micro-inches. The air stream through which a head "flies" is caused by the motion of the platters spinning through the air inside the HDA. The platters drag the air along by friction. The higher-pressure air between the heads and the platters is known as an air bearing. The effect is somewhat like a puck in an air hockey game. The bottom of a head is called an air bearing surface. This sort of mechanism was introduced in the Winchester hard disk drive invented by IBM in 1973.
The heads are extremely small electromagnets (about 1 mm square). Information is stored on the platters by sending pulses of current from the drive electronics to the head. The direction of the current, and thus the direction of the diverging magnetic field across the gap in the head, determines the direction of the magnetic domains (little bitty, molecular magnets) on a particular spot on the platter's magnetic coating, and, thus, whether the spot represents a binary one or zero. The domains essentially retain their directional bent (whether the computer is on or off) until "told" to do otherwise by the drive electronics, which take their orders from the rest of the computer and ultimately from software. The complexity of the mechanisms and methods associated with doing all of this will be omitted here.
The heads are bonded to a metal suspension (or head arm), which is a small arm that holds the head in position above or beneath a disk. A head and suspension is called a head-gimbal assembly or HGA. The HGAs are stacked together into a head-stack assembly, which is propelled across the disk surface by the actuator. The actuator on most recent hard disks employs a voice coil mechanism. It functions much like the voice coil in a loudspeaker, thus its name. It consists of a curved magnet (or magnets--very strong ones) and a spring-loaded coil of fine wire which is attached to the read/write heads by head arms. The head arms are attached to, and pivot about, an actuator shaft. When the drive electronics apply an electric current to the actuator coil, it interacts with the magnet and swings against the actuator spring. The heads rotate around the actuator shaft in the opposite direction of the coil movement, inward and outward from the center to the edges of the platters. If there is a power outage (e.g., you turn off the computer), the spring, which counterbalances the electromagnetic force between the coil and magnet, takes over and automatically parks the heads (lands them on skids or nanosliders--like pontoons on a sea plane) and locks them on a part of the platters called a landing zone (like an airport runway, only curved) before they can crash (like an airplane) on, and mar, that part of the surface of the platters where data is stored. When power is restored, the platters speed up and the heads take off (like a tethered model airplane, except the ground moves--and those on the bottoms of the platters can fly upside-down) and start flying again--an extraordinary mechanism...
One no longer has to park a hard disk before moving the computer, as was the case in times of old when actuators were moved by devices known as stepper motors. However, if the power jitters repeatedly or the drive is subjected to a whack from a frustrated user, a crash can still occur.

HARD DRIVE GUIDE

Hard Disk Drive (HDD) - Guide
Describing the Hard Disk
Hard disks are magnetic media - magnetic domains are arranged on rigid circular plates called platters. As technology has grown more and more complex, the areal density level increases, allowing for larger amounts of storage. There is an end to conventional storage in sight, though - the Super Paramagnetic Limit. Once this limit is passed, the magnetic domains begin to disperse and interact with each other in ways that corrupt all data on the disk.
Form Factors
A single platter is not enough for the storage needs of today, and it never really was. The super-paramagnetic limit restricts the amount of data each platter holds, so hard disk companies need to use common tricks to get around it: using more than one platter, or increasing the diameter of a platter. Because of these tricks, there are two distinctly different hard drive form factors:
3.5" - The same shape as a 3.5" floppy drive.
5.25"- The same shape as a CD-ROM Drive.
For Notebooks, the sizes need to be even smaller. The most common size is 2.5 inches, but some are 1 inch in size! IBM's new matchbox drive is literally the same size as a match box, with a single platter supporting in excess of 300 megabytes.
The problems with increasing platter size are threefold. Not only are there PC case size limitations, but larger platters need more powerful motors and are more susceptible to vibration damage. With the effects of gravity, uniform flatness is also a serious issue.
Capacity
The largest consumer-level drive today has 19.2 Gigabytes of space. These totals can be deceptive, though - the actual formatted and usable storage area is often less than what is advertised on the boxes of today's hard disks. It's not that the manufacturers are outright lying; instead they are taking advantage of the fact that there's no standard set for how to describe a drive's storage capacity. Here's an explanation snipped from a hard drive review at ZDNet:
This results from a definitional difference among the terms kilobyte (K), megabyte (MB), and gigabyte (GB). In short, here we use the base-two definition favored by most of the computer industry and used within Windows itself, whereas hard drive vendors favor the base-10 definitions. With the base-two definition, a kilobyte equals 1,024 (2^10) bytes; a megabyte totals 1,048,576 (2^20) bytes, or 1,024 kilobytes; and a gigabyte equals 1,073,741,824 (2^30) bytes, or 1,024 megabytes. With the base-10 definition used by storage companies, a kilobyte equals 1,000 bytes, a megabyte equals 1,000,000 bytes, and a gigabyte equals 1,000,000,000 bytes.
Put another way, to a hard drive manufacturer, a drive that holds 6,400,000,000 bytes of data holds 6.4GB; to software that uses the base-two definition--including CHKDSK, and portions of Windows 95 and Windows 98--the same drive holds 6GB of data, or 6,104MB.
So, be prepared when you format that new 6.4GB drive and find only 6GB of usable storage space. Isn't marketing wonderful?
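If you want to double-check that math, a quick sketch in Python (purely illustrative) reproduces the numbers from the quote above:

# Base-10 (vendor) vs. base-two (CHKDSK/Windows) views of the same drive.
advertised_bytes = 6_400_000_000           # a "6.4GB" drive by the vendor's math

MB, GB = 2**20, 2**30                      # base-two megabyte and gigabyte

print(advertised_bytes / 1_000_000_000)    # 6.4  -- vendor gigabytes
print(round(advertised_bytes / MB))        # 6104 -- binary megabytes, as quoted
print(round(advertised_bytes / GB, 2))     # 5.96 -- reported as roughly "6GB"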
Rotations per Minute (RPM)
The platters in a hard disk are connected in the middle by a spindle and motor. The motor spins the platters at a specific rate, measured in revolutions per minute (RPM). Higher speeds allow data to be read/written much faster and reduce rotational latency, but they also bring a proportional increase in heat. The following list shows the typical RPM of today's hard disks:
3600 RPM (Pre-IDE)
5200 RPM (IDE)
5400 RPM (IDE/SCSI)
7200 RPM (IDE/SCSI)
10000 RPM (SCSI)
At higher speeds there is more stress and heat on the platters and electronics, and 10000 RPM seems to be the current maximum. With the advent of Ultra-ATA 66, manufacturers are working hard to release 10000 RPM IDE drives. They promise even lower access times and higher transfer rates, but are still not nearly as effective as their SCSI counterparts in terms of CPU utilization and throughput.
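The RPM figure feeds directly into rotational latency - on average the platter must turn half a revolution before the right sector arrives under the head. A quick illustrative calculation:

# Average rotational latency = time for half a revolution.
for rpm in (3600, 5400, 7200, 10000):
    latency_ms = (60.0 / rpm) / 2 * 1000
    print(rpm, "RPM ->", round(latency_ms, 2), "ms average rotational latency")

At 3600 RPM that works out to 8.33 ms; at 10000 RPM it drops to 3.0 ms.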
The Hard Disk Cache
A small amount of memory incorporated into the hard disk electronics to accelerate read/write times. When the computer requests data from the hard disk and that data is already in the cache, there is a performance boost directly related to the speed of the cache.
A visual representation: imagine you are assembling something and have a box of different-size screws, and you need eight identical screws for this step in the project. When you look into the box (hard disk) for the first screw, you happen to see five of the eight that you need, so you grab the five that you see and put them onto the table (cache). Now when you need the next screw, you won't have to dig into the box; instead you grab one from the table (cache). Much faster than digging into the box each time.
The hard disk cache controller works in a similar manner, except that instead of seeing the data needed, the cache controller guesses and reads a small amount of data just before and just after the data that was requested. When the program requests more data, the hard drive first looks into the cache to see if the data is there.
The Hard Disk Cache is also used as a queue - if there is more than one operation to carry out, the instructions can be left in the Cache.
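The read-ahead idea is easy to sketch in code. The snippet below is a toy model only - the class and names are invented for illustration, and real controller firmware is far more involved:

# Toy read-ahead cache: on a miss, fetch the requested sector plus its
# neighbors, betting that the next request will land nearby.
class ReadAheadCache:
    def __init__(self, read_ahead=4):
        self.read_ahead = read_ahead
        self.cache = {}                      # sector number -> data

    def read(self, disk, sector):
        if sector in self.cache:             # hit: no platter access needed
            return self.cache[sector]
        lo = max(0, sector - self.read_ahead)
        for s in range(lo, sector + self.read_ahead + 1):
            self.cache[s] = disk[s]          # miss: pull in the neighborhood too
        return self.cache[sector]

disk = {s: "data-%d" % s for s in range(1000)}   # stand-in for the platters
cache = ReadAheadCache()
cache.read(disk, 100)    # miss: sectors 96..104 land in the cache
cache.read(disk, 101)    # hit: served from the cache, no "seek"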
Technology and Specifications
Technical Settings
Although you may never need to know the specific settings for your hard disk, your BIOS does. The settings are essentially a map - detailing how much room (in bytes) there is, and how many tracks, sectors, heads and cylinders are on your hard disk.
Tracks: Hard disk platters arrange data into concentric circles, rather than one large spiral as some other media use. Each circle is called a track.
Sectors: The smallest addressable unit on a Track. Sectors are normally 512 bytes in size, and there can be hundreds of sectors per track, depending on location.
Heads: The devices used to write and read data on each platter.
Cylinders: The number of cylinders equals the number of tracks on a platter. This one is a bit hard to explain: platters on a hard disk are stacked up, and so are the heads. Because of this, all of the heads move simultaneously, so at any moment they are reading separate tracks at the same physical position (only on a different platter). If you combine the matching concentric circles on each platter being accessed by the drive heads, you get a cylinder.
Cylinder numbers run from the inside out, and since each cylinder is one track per platter, the cylinder count means the same thing as the number of tracks on one platter.
Interrupt 13h Problem
Early on in the evolution of PCs, the standard was for the BIOS to use Interrupt 13h for addressing the hard disk. However, the IDE interface also needs to express the same geometry, but allocates a different number of bits to each value! Because of this, each number is reduced to the lower of the two limits, as shown in the table below.
                        BIOS (Int13h)   IDE/ATA    Result
Maximum Sectors/Track   63              255        63
Number of Heads         255             16         16
Number of Cylinders     1024            65536      1024
Maximum Capacity        8.4GB           136.9GB    528MB
Interrupt 13h Workaround
Breaks through the 528 MB barrier through the use of Logical Block Addressing (LBA). By modifying the BIOS to translate the geometry values it receives into a 28-bit LBA (or by loading an LBA driver from the hard disk), the hard disk is given enough room (bitspace) to address all of its sectors regardless of the old geometry limits - 2^28 sectors of 512 bytes each, or roughly 137 GB.
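The numbers in the table, and the ceiling the LBA workaround buys, are easy to verify (Python sketch; the function name is just for illustration):

# Capacity = cylinders x heads x sectors/track x 512 bytes per sector.
def chs_capacity(cylinders, heads, sectors, bytes_per_sector=512):
    return cylinders * heads * sectors * bytes_per_sector

GB = 1_000_000_000
print(chs_capacity(1024, 255, 63) / GB)        # BIOS Int13h alone: ~8.4 GB
print(chs_capacity(65536, 16, 255) / GB)       # IDE/ATA alone: ~136.9 GB
print(chs_capacity(1024, 16, 63) / 1_000_000)  # lower of each field: ~528 MB
print(2**28 * 512 / GB)                        # 28-bit LBA: ~137.4 GB addressable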
Maximum Cylinder Problem
PCs ran into problems again with the maximum cylinder limit. Although most systems by then had employed translation to get past the first Int13h problem, the amount of bitspace allocated was not enough to get past 4,096 cylinders, which was quickly being surpassed. This limited hard disk space to 1.97 or 2.1 Gigabytes (base-two or base-10). The problem was only solved through a new BIOS translation mode, or a new BIOS altogether.
The 8GB BIOS problem
The final and most pervasive limit to hard disks. The problem is no longer truncated numbers, but the largest values the BIOS can represent at all. The only way to get past this problem was through changes to the BIOS to enable Int13h extensions. It's problematic for some disk utilities, but newer operating systems such as Windows 95 and Windows 98 are already set up to recognize it.
Super Paramagnetic Limit
The density at which opposite magnetic charges begin to degrade each other's signatures, resulting in data loss. The limit is at roughly 20 Gigabits per square inch, which is 4 times greater than today's popular 5 Gigabit per square inch densities.
Read/Write Heads
Essentially electronically controlled magnets. The heads are responsible for converting electrical signals into magnetic data streams on the hard disk and vice versa. Despite their importance, the R/W heads are also the most fragile part of the hard disk assembly.
Because each head is literally a microscopic distance from touching the platters, there is a danger of collision between the two. This is called a head crash, and it can have catastrophic effects on your disk, including data loss or worse - physical platter damage. Although today's disks use heads that are even closer to the platters, superior shock suppression and disk enclosure technologies keep problems to a minimum.
There have been several different technologies for read/write heads, and each of them has brought dramatic increases in storage size. The most recent is IBM's Giant Magnetoresistive (GMR) head, which has provided the latest 14-19 gigabyte densities. Newer GMR models have the ability to handle 10 gigabits (not gigabytes) of data per square inch.
The latest and most powerful technology, however, comes from Seagate's subsidiary, Quinta. Using a new 'Optically Assisted Winchester Technology' (OAW), they have managed to squeeze an incredible 40 gigabits of data into a square inch, well beyond the super paramagnetic limit. This breakthrough is obtained through the combination of fiber-optics, MEMS mirrors and specific RE-TM platter media. Quinta estimates the ability to hold at least 100,000 sectors per track while lowering part costs and increasing interface speed.
File Allocation Table Systems - FAT
Every computer needs a system to keep track of files on the hard disk - otherwise there are just random sectors on the disk with no way to interpret them. The system used is called the File Allocation Table.
FAT16: The file system used for MS-DOS. 16-bit numbers are used to represent cluster numbers, which allows for partitions of up to 2 Gigabytes. It was efficient for its time, but its clusters are far too large for today's large hard disks.
FAT32 / VFAT: The preferred file system for Microsoft Windows 95 and Windows 98. It is an extension to FAT16, providing 32-bit numbers for clusters. Like FAT16, there are cluster size restrictions.
NTFS: The file system for Microsoft Windows NT. NTFS is considerably better than FAT32 and has no cluster size restrictions, although at a certain point the slack space consumes so much of the hard disk that alternate systems are needed. NTFS also provides permission controls and RAID support.
These are the mainstream PC file systems - other notable systems include HPFS - IBM's OS/2 File System, the Unix File System, and the 64-bit BeOS file system.
Slack Space - The Cluster Size Problem
With any file system, each file is allocated 'clusters' to be placed into. Since each cluster is a fixed size, not every one can be filled - if a system uses 16 KB clusters but the file is only 2 KB, the remaining 14 KB is automatically wasted and unusable for other files. This can result in hundreds of megabytes in lost space.
The only solution is to partition your hard disks or use the FAT32 file system. By creating multiple partitions you lower the potential amount of lost space - you still lose some, but since the cluster size on each drive is lower you can greatly reduce the loss.
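A small illustrative calculation (Python) shows how quickly slack adds up, and how much a smaller cluster size helps:

import math

# Slack = space left over in the last (partially filled) cluster of a file.
def slack(file_size, cluster_size):
    clusters = math.ceil(file_size / cluster_size)
    return clusters * cluster_size - file_size

kb = 1024
print(slack(2 * kb, 16 * kb))                     # 14336 bytes: the 2 KB file above
print(10_000 * slack(2 * kb, 16 * kb) // 2**20)   # ~136 MB lost on 10,000 such files
print(10_000 * slack(2 * kb, 4 * kb) // 2**20)    # ~19 MB lost with 4 KB clusters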
FAT16 Cluster Size    Maximum Partition Size
2 KB                  128 MB
4 KB                  256 MB
8 KB                  512 MB
16 KB                 1 GB
32 KB                 2 GB
The limits of FAT16 are obvious. Partitions of 512 MB or less are the most attractive. However, trying to manage 512MB partitions on multi-gigabyte drives is a real pain; not many people want to deal with 12 logical drives on a 6.5GB hard disk. On top of that, with several programs installed, the Windows 9x folder itself can easily grow to over 500MB.
FAT32 Cluster Size    Maximum Partition Size
4 KB                  8 GB
8 KB                  16 GB
16 KB                 32 GB
32 KB                 64 GB
FAT32 is the solution for large disk drives. There is a small performance loss in using FAT32 because of the increased number of clusters; however, the benefits far outweigh the performance loss. Managing one or two partitions is much easier than trying to figure out which drive your information is on, or asking yourself "Did I back up that seventh partition or not?".
Hard Disk Interfaces
Integrated Drive Electronics (IDE)
The de facto consumer standard for drive interfaces. It's beaten by SCSI in almost every way, but it wins because of the price.
Today's IDE interface has two channels which allow for two devices each, whether they be Hard Disks, CD-ROM's, or other storage drives. Transfer speeds automatically drop to the speed and capabilities of the slowest drive on a channel for compatibility reasons.
The original form of IDE is simply called IDE. It only allows hard disks on the channel, while offering a measly 2-3 MB/s of average transfer rate. Most IDE boards have only one channel, allowing only two drives (CD-ROM drives of the time used a proprietary interface, often provided by a sound card).
EIDE
A substantial improvement of IDE, made in order to keep SCSI out of the mainstream. It provides improvements to drive throughput and capacity, as well as integrating dual channels for up to 4 devices combined. Non-hard-disk support was also added by the AT Attachment Packet Interface (ATAPI), which added support for devices like CD-ROM and tape drives.
The throughput problem was solved basically by moving the IDE interface from the ISA bus to the PCI/VLB bus. EIDE also added support for Direct Memory Access (DMA) modes, where the hard disk can transfer to RAM directly without the CPU being involved. With the PCI bus, EIDE allows throughput of 6.66 MB/s, 8.33 MB/s, 13 MB/s, and 16 MB/s.
Ultra DMA (AKA DMA-33, Ultra ATA-33, Fast ATA-2)
The current step in the evolution of IDE. Ultra DMA doubles the burst transfer rate to 33.3 MB/s while integrating Cyclical Redundancy Check (CRC) support. In order for this mode to operate, however, it requires the drive, BIOS, motherboard chipset, and software drivers to support it. In DOS, it simply reverts to EIDE. There is also an 18-inch cable limit.
Ultra DMA-66 (Ultra ATA-66)
Is the next step in IDE evolution, invented by Quantum Corporation. The maximum theoretical transfer rate rises to 66.6 MB/s. Again, revised BIOSes, chipsets, drivers, and DMA-66 hard drives are needed to support this new mode. Its future viability seems up in the air right now, because the real-world performance gains do not seem to be as extreme as the theoretical throughput suggests.
Small Computer System Interface (SCSI)
SCSI (pronounced 'Scuzzy') is the do-everything high speed bus interface. It provides support for literally dozens of devices simultaneously along with high speed transfer rates, multithreading, parity checking, and bus mastering. For the cost of an expansion slot and SCSI hard disk, CPU utilization can be dramatically reduced, especially in Windows NT.
The key to allowing so many devices is termination - the host adapter (beginning of the chain) and last device (end of the chain) must be terminated in order to keep the connection intact. The general difficulty involved in properly terminating devices (as well as configuring) has kept SCSI as a workstation/server solution. For most consumers, IDE provides a much cheaper, easier to maintain solution.
Specifications
Level                      Speed       Width      Devices   SE        HVD      LVD
SCSI-1                     5 MB/s      8 bits     8         6m        25m      N/A
SCSI-2 (Fast SCSI)         10 MB/s     8 bits     8         6m        25m      N/A
SCSI-3 (Ultra SCSI)        20 MB/s     8 bits     8/4       1.5/3m    25/0m    N/A
SCSI-3 (Fast Wide SCSI)    20 MB/s     16 bits    16        6m        25m      N/A
SCSI-3 (Wide Ultra SCSI)   40 MB/s     16 bits    16/8/4    0/1.5/3m  25/0/0m  N/A
SCSI-3 (Ultra 2 SCSI)      40/80 MB/s  8/16 bits  8/2       N/A       12/25m   12/25m
SCSI-3 (Wide Ultra 2)      80 MB/s     16 bits    16/2      N/A       12/25m   12/25m
SCSI-3 (Ultra 3)           160 MB/s    16 bits    ??        N/A       ??       ??
SCSI-1
The original SCSI specification. SCSI-1 is rarely if ever used now, because of the low transfer rate, narrow bus width, and terrible maximum cable length. At the time, however, it was enough to unseat the powerhouse 'Enhanced Small Device Interface' (ESDI) spec.
SCSI-2
The current 'bottom level' of the SCSI specification - it's generally used for scanners and CD-R drives. The new spec added support for tagged command queuing, which allowed commands to be issued regardless of whether data was currently travelling through the bus. The instructions could also be prioritized for out-of-order execution.
Fast SCSI-2 allowed for a doubled transfer rate of 10 MB/s through a 50-pin connector, but was meant for differential SCSI in order to combat noise on the bus.
SCSI-3
The current standard is actually a family of different commands:
Fast/Wide SCSI: The Bus width, transfer rate (with differential support), and device support doubled over SCSI-2.
Ultra SCSI: With a doubled clock speed over SCSI-2 and backwards compatibility, it allowed for double the transfer rate over the stale 8-bit-wide bus. This was the basis for further Ultra SCSI improvements.
Ultra Wide SCSI: Simply put, this is the combination of Ultra SCSI with the 16-bit data path of Fast/Wide.
Ultra 2 SCSI: The most currently used high speed SCSI technology. It is the first to implement LVD signals for less noise and higher transfer rates - as high as 80 MB/s!
Ultra 3 SCSI: Just recently ratified by the SCSI Trade Association. It doubles the transfer rate yet again to an astounding 160 MB/s while adding advanced CRC support and easy hot-plugging technology (installing devices without rebooting).
According to Quantum, the Ultra 3 SCSI interface gets its speed from basically the same improvement that's behind DDR SDRAM: data is clocked on both edges of the signal.
Transceivers
The transceiver determines how the data will travel electrically between adapter and drive. There are currently 3 levels: Single Ended (SE), High Voltage Differential (HVD), and, recently, Low Voltage Differential (LVD). The most popular is also the cheapest - Single Ended. SE signal transmission references all devices to a common ground, which limits the possible speed because of "line noise".
HVD provides improved termination abilities, less "noise", and more cable length to work with, but at the cost of increased power usage and a lack of available part types (like CD-ROM). The power levels also required higher cost parts to cope with the temperatures involved, as most of the voltage is supplied through the cable, rather than from a separate power connector.
The LVD transceiver is the latest electrical signal based technology, using both the powerful differential technology and the low power consumption of SE. The Ultra-2 specification also allows a bus speed of 80 MB/s and the ability to support SE devices simultaneously (at the cost of speed). Most companies get around this by bundling multiple buses to handle SE and LVD separately.
Incidentally, there is a 4th standard - but it's not as clear-cut as the other technologies. The Fibre Channel Physical and Signaling Interface (FC-PH) uses fiber optics to transfer signals (electrical signals are still supported with hybrid cables). FC-PH offers high bandwidth with almost no problems in signal collision - heavy-duty performance for the network crowd. The performance does come at substantial cost, though, and isn't really suited for the consumer crowd.
SCAM Technology
Stands for SCSI Configured AutoMatically. When SCAM-compliant devices are attached, software can automatically allocate SCSI IDs for each device.
Redundant Array of Independent Disks (RAID)
A subset of SCSI/IDE technology that allows the combining of two or more hard disks in various fashions to provide redundancy, and additional speed.
Almost all of the levels work off the theory of 'striping', where blocks of data are written across drives. A typical single hard disk must read/write data sequentially - one block after another to the same disk. RAID avoids this by writing concurrently to separate disks - in a 4-disk array, that would allow 4 blocks to be written or read at once!
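The placement rule behind striping is simple round-robin. Here's a toy sketch (Python, names invented) of how consecutive blocks map onto a 4-disk stripe set:

# Blocks rotate across the drives; consecutive blocks land on different disks.
def stripe_location(block, disks):
    return block % disks, block // disks     # (which disk, which stripe/row)

for block in range(8):
    disk, stripe = stripe_location(block, 4)
    print("block", block, "-> disk", disk, ", stripe", stripe)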
There are several levels of RAID, each with their own number. Those numbers are merely for identification - RAID 4 is not necessarily better than RAID 1, or vice versa.
0 - Striped Disk Array without Fault Tolerance - Breaks up files into blocks that run across the drives; the hard disks are combined to act like one. Because information is written and read alternately to each drive, speed is increased. There is no fault tolerance, however, so if a drive were to fail, all data on the array would be lost.
1 - Mirroring and Duplexing - The RAID controller essentially writes to two or more disks simultaneously. Each drive contains the same information at all times, which provides you with a backup in case a drive fails.
2 - Hamming Code ECC - Data is striped across an array of disks by bits rather than blocks. For each word (2 bytes) written, a connected array with an equal number of disks simultaneously writes a "Hamming Code ECC word". This provides absolute fault tolerance with on-the-fly correction, but at the cost of a large number of drives.
3 - Parallel transfer with parity - Data is striped across the disks at the byte level, and a separate, dedicated drive holds checksums (parity information).
4 - Independent data disks with shared parity disk - Entire data blocks are striped across the disks, and, as in RAID 3, a separate drive holds the checksums (parity information).
5 - Independent data disks with distributed parity blocks - Much like RAID 0, except that parity is added to protect the data blocks written to the drives. The parity is spread across the drives along with the data blocks, unlike RAID 4 (see the parity sketch after this list).
6 - Independent Data disks with two independent distributed parity schemes - This is basically an extension of RAID 5. Data Blocks + Parity Blocks are written across the drives, but there is also a second set of parity blocks to cross check the disks for errors. A rarely used solution.
7 - Optimized Asynchrony - A proprietary RAID setup involving the use of a hard coded real time operating system.
10 - Striping + Mirroring - Two or more paired RAID 1 arrays are combined using RAID 0 to provide a single array that is redundant against failure. Even if a drive in each mirrored pair fails, there is a complete backup.
53 - Striped Array of Parallel Disks with Parity - RAID 0 is used to stripe data across RAID 3 arrays, which means that the Parity Drives are also striped.
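As promised in the RAID 5 entry, the parity trick used by RAID 3/4/5 boils down to XOR: the parity block is the XOR of the data blocks, so any one lost block can be rebuilt from the survivors. A minimal sketch (toy model, not a real controller):

from functools import reduce

data_blocks = [b"disk0", b"disk1", b"disk2"]     # a toy 3-disk stripe

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

parity = reduce(xor_blocks, data_blocks)         # stored on the parity drive/blocks

# Pretend disk 1 died; rebuild its block from the other disks plus parity.
rebuilt = reduce(xor_blocks, [data_blocks[0], data_blocks[2], parity])
assert rebuilt == data_blocks[1]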
IEEE 1394 - FireWire
The next 'consumer-level' bus that is bound for motherboard integration and replacement of IDE. IEEE 1394 is a serial bus that promises transfer rates of up to 50 MB/s with isochronous (guaranteed-bandwidth) or asynchronous transfers. It also supports up to 63 devices per bus, hot-swapping, and automatic termination/ID assignment.
IEEE 1394 is geared to support all media drives, digital cameras/video cameras, and laser printers. It is currently available as a PCI card for digital video camera users, but is not expected to go mainstream for another year or two so the technology can mature. Although it offers small-footprint ports (sort of like stunted serial ports) and fairly stable transfers, the silicon needs to be shrunk further, and another internal channel needs to be added (or so it looks) before Intel plans on integrating it into their core logic chipsets.
Why not just call it 'FireWire'? The term is actually exclusive to Apple and their PowerPC enabled FireWire computers. The name is great slang, but for legal reasons don't expect to see the name on store shelves.
Links to more information
SCSI Trade Association
http://www.scsita.org/
A good site for SCSI resources and information.
SCSI T10 Committee
http://www.symbios.com/t10/
The group responsible for setting all SCSI standards.
Quantum
www.quantum.com
Hard Disk manufacturer - site also contains good information on Ultra 3 SCSI.
Maxtor
www.maxtor.com
Hard Disk Manufacturer
Seagate
www.seagate.com
Hard Disk Manufacturer
Questions and Answers
What does MTBF stand for?
MTBF stands for Mean Time Between Failures. MTBF numbers represent the average number of working hours (when the disk is being used) before the device is expected to fail. Usually the number is in the 100,000s, but it can go 6 or 7 times higher.
Don't misunderstand the number, though - it's only an average. Your drive could stop functioning after only days, or it could surpass the average by a large amount. The only real thing that is going to help you is the warranty.
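To put an MTBF figure in perspective, here's a rough back-of-the-envelope calculation (it assumes the simple averaged failure model, which real drives only approximate):

mtbf_hours = 300_000                 # a typical spec-sheet figure
hours_per_year = 24 * 365

annual_failure_rate = hours_per_year / mtbf_hours
print(round(annual_failure_rate * 100, 1), "% per drive-year of 24/7 use")
# ~2.9% -- so in a 100-drive array, expect roughly 3 failures a year on average.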
What does SMART stand for?
SMART stands for Self-Monitoring, Analysis and Reporting Technology. It is an additional silicon feature on some hard disks, providing a way to learn beforehand if a drive is malfunctioning. SMART uses techniques to detect bad sector formation, error correction levels, and more. The actual techniques used are up to the manufacturer, though, so effects may differ.
What is Thermal Recalibration?
Some parts of the hard disk are vulnerable to heat - enough to affect their size or shape temporarily. Because of this, a common hard disk has built-in thermal recalibration - the drive checks itself to confirm and readjust measurements between sectors and tracks. This method takes time, though - enough to cause slight pauses in operation. For data-intensive tasks such as burning CD-Rs or playing back video, thermal recalibration is not good, causing dropped frames or buffer underruns.
The only way to really get around the problem is to use large data buffers (common for A/V drives), or bypass it and use completely different methods altogether.
What are Computer Viruses (virii)?
Computer viruses are malignant code and can come in thousands of forms. PCGuide (www.pcguide.com) sums it up best, though: "... in order to be a virus, the program must have the ability to do all of the following:
Run without the user wanting it to and/or create effects that the programmer wants but that the user did not want or request.
Have the ability to "infect" or modify other files or disk structures.
Replicate itself so it can spread to other files or systems."
Most of the time, viruses are destructive - they attempt to damage or replace data on your hard disk. However, that is not the only example of a virus - they can also do something as simple as playing music through your speakers or writing a message on screen.
More to the point, computer viruses are always 'executables' - files that do something only when the computer follows their instructions. Executables are among the most commonly used files on any computer, but their only security measure is that *you* have to choose to run them, or choose to let the computer run them automatically, and so on. You cannot be infected by a virus simply by reading text, because nothing is executed when doing so.
There are several major types of viruses:
Trojan Horses: Aptly named because these are executables which do something the user does not want when used. They typically look like normal executables and when run have the ability to turn other files into Trojan horses.
Worms: Self contained executables. When run, they multiply and spread through networks independently, usually without affecting current data.
Droppers: Self contained Virus delivery systems. They are typically executables that contain data encrypted (scrambled) so that virus scanners cannot tell what is inside. Once they are used, droppers can install and run virus files.
'Macro' viruses are related to word processing programs. 'Macros' are shortcuts - key combinations or scripts designed to do something for you. However, just like normal computer executables, macros are run as code and can be just as dangerous.
Viruses are usually structured to attack other executables or your disk's Boot Record. The Boot Record is code used every time the computer is started up, in order to relay disk information. Because it has to be loaded every time the computer is on, it makes for a very likely target. Bootable external disks are also vulnerable for the same reason.
The most recent form of virus attacks your computer's BIOS. Because most BIOSes are flash-ready (which means they can be overwritten with new information), a successful strike will erase the BIOS and leave your computer unbootable.

DUAL BOOT ON 2 HARD DRIVES

Dual booting XP on 2 hard drives
________________________________________
Thought I'd post this in case someone else is trying to dual boot XP from a main and backup drive. Here's the scenario and what worked for me: When I upgraded my homemade computer a good while back, I installed a larger hard drive and a CD burner. Wanted to keep my existing older drive for backup purposes and the existing CD-ROM drive to save wear and tear on the burner drive. Since I only had 2 IDE drive connections on my mainboard, I installed a Promise Technologies add-in SCSI controller card with 2 more drive connections. I'm running my newer main hard drive on the first mainboard IDE connection and the CD-ROM drive on the second, with the backup hard drive on the first SCSI card connection and the CD burner drive on the second one. (Arrived at this after having all kinds of XP boot problems with both hard drives on the mainboard IDE slots and the 2 CD drives on the add-in SCSI card slots.)

I used the hard drive utilities that came with the new drive to mirror copy the XP Home installation and everything else from the newer drive to the older one. Complete bootable backup of my new drive for ultimate crash protection. (Note that I had initially upgraded from Win98 to XP at the same time as adding the new hardware, so for activation purposes it "knew" I wasn't trying to steal it. No major hardware changes detected.) Only dilemma I've had all this time was that if I wanted to test boot from the backup drive, I had to disconnect the spare drive from the SCSI card, disconnect the main drive from my primary IDE connection on the mainboard, and connect the spare directly to the IDE connection.

After revisiting the issue recently, and wading through all the confusing info out there on the XP boot.ini lines, I found the solution. Please keep in mind that this works for MY configuration of drives as listed. In my case, there is only 1 partition on both the main and backup drives, and that is a major deal if you're trying this and have either drive with more than 1 partition.

After much twiddling with the boot.ini lines, and way too many reboots, I ended up with this:

[boot loader]
timeout=5
default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP on primary drive" /fastdetect
multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="XP on backup drive" /fastdetect
C:\CMDCONS\BOOTSECT.DAT="Microsoft Windows Recovery Console" /cmdcons

Explanation:

"timeout" is for wait time in seconds during boot screen to select the Windows version to boot to. Make it whatever you want.

"default" line is XP generic boot-to info, and points to my main C: drive installation. Nothing changed from original. LEAVE IT ALONE! (Overlook the fact that the "S" in "WINDOWS" is on a second line. Should not be. Some weird thing to do with this posting that I can't seem to edit differently.)

First line under "operating systems" beginning with "multi" points to the main C: drive XP installation (changed the wording to display on the boot screen to "XP on primary drive" for ease of use). DON'T CHANGE ANYTHING ELSE IN THIS LINE!

Second line under "operating systems" (THAT I HAD TO ADD) beginning with "multi" points to the XP installation on the D: backup drive connected to the SCSI add-in card. Note the "1" instead of "0" for the "rdisk" value. Made the wording for boot screen display "XP on backup drive". (Note that this applies in my case where the Promise Technologies SCSI add-in card has it's own BIOS that displays after the main initial BIOS, and detects the drives connected to it. I've read that if this is not the case, the "multi" might need to read "scsi". You're on your own with that.)

Note that if you have more than 1 partition on either drive in this scenario, you'd have to tell the "partition" value where to look for XP in the appropriate line corresponding to the hard drive.
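For reference, the pieces of those "multi(...)" ARC paths break down roughly as follows (a general guide, not gospel for every controller):

multi(0)     - controller/adapter number (multi = a BIOS-visible controller; scsi = one needing NTBOOTDD.SYS)
disk(0)      - SCSI bus/target number; always 0 with the multi() syntax
rdisk(1)     - physical disk on that controller (0 = first disk, 1 = second)
partition(1) - partition on that disk, counted starting from 1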

Note also that the last line under "operating systems" is for the MS Recovery Console normally available for Windows repair when booting from the XP install disk. I have Recovery Console installed on my hard drive so I don't have to use the CD if needed. Link included at the end of this for how to do. Leave it out altogether if you have not done this.

When first trying to boot to the backup drive with these boot.ini lines, it wouldn't boot when choosing the backup drive; I had to reboot and choose the primary drive to start. The final step was to change the boot device sequence in the BIOS. Ended up with the first device as floppy, second device as IDE0 (main hard drive connected to the mainboard), third device as SCSI (spare drive connected to the add-in SCSI card). Voila!

Since I had just recently mirror copied the new drive to the old drive, everything on the desktop looked identical. One indication that I had indeed booted to the spare drive was a lack of hard drive indicator light activity on the front of my machine (the light is connected to the mainboard and not to the SCSI card). Another indication was one of the XP services, Generic Host..., asking for permission to go out through my firewall, when permission to do so had already been given from the C: drive. It worked! No more fiddling with the drive cables if spare drive booting is needed!

IMPORTANT: All the desktop and start menu shortcuts still pointed to the C: drive, so even though I had booted to and was "running" on the spare D: drive, the programs still referenced and launched from the C: drive. Meaning that in a catastrophic failure of the C: drive, they wouldn't launch. It's merely a matter of changing where each shortcut points by clicking "Properties" for the shortcut and changing the target from C: to D:. Cool.

For a test I made a Notepad document, made sure I saved it to the D: drive under My Documents, made a desktop shortcut to it, and made sure it pointed to it on the D: drive. Now when I boot from the spare drive, I can tell instantly that it's the one I'm running on. It's not there when booting to the C: drive. Way cool.

Hard Disk

Disk Management
Disk Management is a snap-in that's part of the Microsoft Management Console supplied with Windows XP. If you're not familiar with Microsoft Management Console, you may want to get an overview of how it functions first. Just as the name Disk Management implies, it's a tool used to manage system disks, both local and remote. If you've been around personal computers for a number of years, you're familiar with Fdisk, the utility that was used in conjunction with the Format command to set up hard disks from the command prompt. Disk Management, with its graphical user interface, goes a long way toward eliminating the need for the command prompt utilities and makes it easy to obtain a quick overview of the system and the relationships between installed disks.
Accessing Disk Management
There are a few different ways to access Disk Management. I'll list three different methods, so choose whichever is most convenient.
● Method 1 - Start > Control Panel > Performance and Maintenance > Administrative Tools. Double click Computer Management and then click Disk Management in the left hand column.
● Method 2 - By default, Administrative Tools is not shown on the Start Menu but if you have modified the Start Menu (by right clicking the Start button and selecting Properties > Customize) so it is shown then just select Start > Administrative Tools > Computer Management and then click Disk Management in the left hand column.
● Method 3 - Click Start > Run and type diskmgmt.msc in the Open: line and click OK. The Disk Management snap-in will open.
Three Basic Areas of Disk Management
The basic Disk Management console is divided into three main areas and just about as straightforward as one can get. In Fig. 01 the areas are defined by green, red, and blue rectangles. The Console Tree is the tall vertical column on the left that's defined by the green color. If Method 3 above is used to open Disk Management it will open without the Console Tree being displayed. I suggest you get rid of the Console Tree as it really serves no purpose once Disk Management is open. Even if you used one of the other methods, the Console Tree can be eliminated by clicking the Show/Hide Console Tree icon (fourth from left) on the standard toolbar.
The red and blue areas are referred to as Top and Bottom and are both user definable via the View menu option. By default, the Top area displays the Volume List and the Bottom area displays the Graphical View. A third view called Disk List can be substituted in either pane if it's more to your liking, or the Bottom pane can be hidden completely. The View menu option also contains a [Settings...] option that allows adjustment of the color schemes, size of the drive displays and a few other options so the console can be tailored to individual taste.

Fig. 01
Basic Disk Management Functions
All too often the help documentation that's supplied with programs falls short of the mark, but in the case of Disk Management I think Microsoft did an above average job. I suggest giving it a thorough read through as it contains detailed instructions for performing many tasks that it's not immediately apparent Disk Management can handle. I'll list a few of the more common tasks that interest a wide cross section of users.
● Create partitions, logical drives, and volumes.
● Delete partitions, logical drives, and volumes.
● Format partitions and volumes.
● Mark partitions as active.
● Assign or modify drive letters for hard disk volumes, removable disk drives, and CD-ROM drives.
● Obtain a quick visual overview of the properties of all disks and volumes in the system.
● Create mounted drives on systems using the NTFS file system.
● Convert basic disks to dynamic disks.
● Convert dynamic to basic disks, although this is a destructive operation.
● On dynamic disks, create a number of specialty volumes including spanned, striped, mirrored, and RAID-5 volumes.
Disk Management makes extensive use of context menus. Right clicking on a drive or partition will normally present a menu that contains the options and procedures available for the particular device. The Action menu item is an alternate method for finding the same information. An advantage of using Disk Management is that the majority of changes you can make don't require rebooting the system, so you can continue working while the procedures complete.
At first glance it may appear there isn't much substance to Disk Management, but in truth it can be quite useful for many tasks. That's not to say it's without limitations, because it does have some. One of the major ones is the inability to shrink a partition in a non-destructive manner. That limitation, and others, can be overcome by a number of third party utilities that fill in the gaps where Disk Management is lacking, but a full understanding of what Disk Management can and cannot do relative to your individual situation and needs will help you determine if a third party disk management utility is necessary.