When you think about it, it's amazing how many different types of electronic memory you encounter in everyday life. Many of them have become an integral part of our vocabulary:
You already know that the computer in front of you has memory. What you may not know is that most of the electronic items you use every day have some form of memory as well. Here are just a few examples of the many items that use memory:
In this article, you'll learn why there are so many different types of memory and what all of the terms mean. On the next page, let's start with the basics: What exactly does computer memory do?
Computer Memory Basics
Although memory is technically any form of electronic storage, it is used most often to identify fast, temporary forms of storage. If your computer's CPU had to constantly access the hard drive to retrieve every piece of data it needs, it would operate very slowly. When the information is kept in memory, the CPU can access it much more quickly. Most forms of memory are intended to store data temporarily.
The CPU accesses memory according to a distinct hierarchy. Whether it comes from permanent storage (the hard drive) or input (the keyboard), most data goes into random access memory (RAM) first. The CPU then stores pieces of data it will need to access, often in a cache, and maintains certain special instructions in the register. We'll talk about cache and registers later.
All of the components in your computer, such as the CPU, the hard drive and the operating system, work together as a team, and memory is one of the most essential parts of this team. From the moment you turn your computer on until the time you shut it down, your CPU is constantly using memory. Let's take a look at a typical scenario:
In the list above, every time something is loaded or opened, it is placed into RAM. This simply means that it has been put in the computer's temporary storage area so that the CPU can access that information more easily. The CPU requests the data it needs from RAM, processes it and writes new data back to RAM in a continuous cycle. In most computers, this shuffling of data between the CPU and RAM happens millions of times every second. When an application is closed, it and any accompanying files are usually purged (deleted) from RAM to make room for new data. If the changed files are not saved to a permanent storage device before being purged, they are lost.
One common question about desktop computers that comes up all the time is, "Why does a computer need so many memory systems?"
Types of Computer Memory
A typical computer has:
Why so many? The answer to this question can teach you a lot about memory!
Fast, powerful CPUs need quick and easy access to large amounts of data in order to maximize their performance. If the CPU cannot get to the data it needs, it literally stops and waits for it. Modern CPUs running at speeds of about 1 gigahertz can consume massive amounts of data, potentially billions of bytes per second. The problem that computer designers face is that memory that can keep up with a 1-gigahertz CPU is extremely expensive, much more expensive than anyone can afford in large quantities.
Computer designers have solved the cost problem by "tiering" memory: using expensive memory in small quantities and then backing it up with larger quantities of less expensive memory.
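The tiering trade-off is easy to see with a small calculation. The capacities and prices below are purely hypothetical placeholders, not real figures from the article; they only show how pairing a little fast, expensive memory with a lot of slow, cheap storage keeps the blended cost per megabyte low:

```python
# Illustrative memory tiers: (capacity in MB, cost per MB in dollars).
# All numbers are hypothetical, chosen only to demonstrate the tiering idea.
tiers = {
    "cache":     (0.5,    100.00),  # tiny, very fast, very expensive
    "RAM":       (256,      1.00),
    "hard disk": (40_000,   0.01),  # large, slow, cheap
}

total_mb = sum(size for size, _ in tiers.values())
total_cost = sum(size * price for size, price in tiers.values())

print(f"Total storage: {total_mb:.1f} MB")
print(f"Blended cost per MB: ${total_cost / total_mb:.4f}")
```

Even though the cache tier costs 10,000 times as much per megabyte as the disk tier, the blended cost stays within pennies of the disk price because the expensive tier is so small.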
The cheapest form of read/write memory in wide use today is the hard disk. Hard disks provide large quantities of cheap, permanent storage. You can buy hard disk space for pennies per megabyte, but it can take a good amount of time (approaching a second) to read a megabyte off a hard disk. Because storage space on a hard disk is so cheap and plentiful, it forms the final stage of a CPU's memory hierarchy, called virtual memory.
The next level of the hierarchy is RAM. We discuss RAM in detail in How RAM Works, but several points about RAM are important here.
The bit size of a CPU tells you how many bytes of information it can access from RAM at the same time. For example, a 16-bit CPU can process 2 bytes at a time (1 byte = 8 bits, so 16 bits = 2 bytes), and a 64-bit CPU can process 8 bytes at a time.
Megahertz (MHz) is a measure of a CPU's processing speed, or clock cycles, in millions per second. So, a 32-bit 800-MHz Pentium III can potentially process 4 bytes at the same time, 800 million times per second (possibly more with pipelining)! The goal of the memory system is to meet those requirements.
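The arithmetic in the last two paragraphs can be checked in a few lines of Python. The 32-bit, 800-MHz figures come straight from the Pentium III example above; the function name is just a label for this sketch:

```python
def peak_demand_bytes_per_sec(bit_size, clock_hz):
    """Peak data a CPU could consume: bytes per access times accesses per second."""
    bytes_per_access = bit_size // 8  # 1 byte = 8 bits
    return bytes_per_access * clock_hz

# A 32-bit CPU moves 4 bytes per access; at 800 MHz that is
# 4 * 800,000,000 = 3.2 billion bytes per second.
demand = peak_demand_bytes_per_sec(32, 800_000_000)
print(demand)  # 3200000000
```

That 3.2-billion-bytes-per-second appetite is exactly why the memory system needs the tiering described earlier.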
A computer's system RAM alone is not fast enough to match the speed of the CPU. That is why you need a cache (discussed later). However, the faster RAM is, the better. Most chips today operate with a cycle rate of 50 to 70 nanoseconds. The read/write speed is typically a function of the type of RAM used, such as DRAM, SDRAM or RAMBUS. We'll talk about these various types of memory later.
First, let's talk about system RAM.
System RAM
System RAM speed is controlled by bus width and bus speed. Bus width refers to the number of bits that can be sent to the CPU simultaneously, and bus speed refers to the number of times a group of bits can be sent each second. A bus cycle occurs every time data travels from memory to the CPU. For example, a 100-MHz 32-bit bus is theoretically capable of sending 4 bytes (32 bits divided by 8 = 4 bytes) of data to the CPU 100 million times per second, while a 66-MHz 16-bit bus can send 2 bytes of data 66 million times per second. If you do the math, you'll find that simply changing the bus width from 16 bits to 32 bits and the speed from 66 MHz to 100 MHz in our example allows three times as much data (400 million bytes versus 132 million bytes) to pass through to the CPU every second.
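The bus comparison above is straightforward to verify; this sketch just multiplies the width in bytes by the cycles per second, using the two bus configurations from the example:

```python
def bus_throughput(width_bits, speed_hz):
    """Theoretical bus throughput in bytes per second: width in bytes times cycles per second."""
    return (width_bits // 8) * speed_hz

old_bus = bus_throughput(16, 66_000_000)   # 2 bytes x 66 million = 132,000,000 bytes/sec
new_bus = bus_throughput(32, 100_000_000)  # 4 bytes x 100 million = 400,000,000 bytes/sec
print(new_bus / old_bus)  # roughly 3x as much data per second
```

Doubling the width and raising the clock by about half each contribute to the overall three-fold improvement.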
In reality, RAM doesn't usually operate at optimum speed. Latency changes the equation radically. Latency refers to the number of clock cycles needed to read a bit of information. For example, RAM rated at 100 MHz is capable of sending a bit in 0.00000001 seconds, but may take 0.00000005 seconds to start the read process for the first bit. To compensate for latency, CPUs use a special technique called burst mode.
Burst mode depends on the expectation that data requested by the CPU will be stored in sequential memory cells. The memory controller anticipates that whatever the CPU is working on will continue to come from this same series of memory addresses, so it reads several consecutive bits of data together. This means that only the first bit is subject to the full effect of latency; reading successive bits takes significantly less time. The rated burst mode of memory is normally expressed as four numbers separated by dashes. The first number tells you the number of clock cycles needed to begin a read operation; the second, third and fourth numbers tell you how many cycles are needed to read each consecutive bit in the row, also known as the wordline. For example: 5-1-1-1 tells you that it takes five cycles to read the first bit and one cycle for each bit after that. Obviously, the lower these numbers are, the better the performance of the memory.
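A rating like 5-1-1-1 translates directly into cycle counts. This sketch assumes, as the paragraph above describes, that the four numbers cover one four-read burst, and compares it with paying the full latency on every read:

```python
def burst_cycles(rating):
    """Total clock cycles for one burst, e.g. (5, 1, 1, 1) -> 8 cycles for 4 reads."""
    return sum(rating)

rating = (5, 1, 1, 1)
with_burst = burst_cycles(rating)       # 5 + 1 + 1 + 1 = 8 cycles
without_burst = rating[0] * len(rating) # full 5-cycle latency on all 4 reads = 20 cycles
print(with_burst, without_burst)        # 8 20
```

Only the first read pays the full latency; the other three ride along almost for free, which is the whole point of burst mode.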
Burst mode is often used in conjunction with pipelining, another means of minimizing the effects of latency. Pipelining organizes data retrieval into a sort of assembly-line process. The memory controller simultaneously reads one or more words from memory, sends the current word or words to the CPU and writes one or more words to memory cells. Used together, burst mode and pipelining can dramatically reduce the lag caused by latency.
So why wouldn't you buy the fastest, widest memory you can get? The speed and width of the memory's bus should match the system's bus. You can use memory designed to work at 100 MHz in a 66-MHz system, but it will run at the 66-MHz speed of the bus, so there is no advantage, and 32-bit memory won't fit on a 16-bit bus.
Even with a wide and fast bus, it still takes longer for data to get from the memory card to the CPU than it takes for the CPU to actually process the data. That's where caches come in.
Cache and Registers
Caches are designed to alleviate this bottleneck by making the data used most often by the CPU instantly available. This is accomplished by building a small amount of memory, known as primary or level 1 cache, right into the CPU. Level 1 cache is very small, normally ranging between 2 kilobytes (KB) and 64 KB.
The secondary or level 2 cache typically resides on a memory card located near the CPU. The level 2 cache has a direct connection to the CPU. A dedicated integrated circuit on the motherboard, the L2 controller, regulates the use of the level 2 cache by the CPU. Depending on the CPU, the size of the level 2 cache ranges from 256 KB to 2 megabytes (MB). In most systems, data needed by the CPU is accessed from the cache approximately 95 percent of the time, greatly reducing the overhead needed when the CPU has to wait for data from the main memory.
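That 95 percent figure matters because the average access time is a weighted mix of fast cache hits and slow trips to main memory. The two latencies below are hypothetical, picked only to show the effect; the hit rate is the one quoted above:

```python
def average_access_time(hit_rate, cache_ns, memory_ns):
    """Weighted average latency: hits served at cache speed, misses at memory speed."""
    return hit_rate * cache_ns + (1 - hit_rate) * memory_ns

# Hypothetical latencies: 10 ns cache, 60 ns main memory, 95% hit rate as above.
avg = average_access_time(0.95, 10, 60)
print(f"{avg:.1f} ns")  # 12.5 ns
```

With 95 percent of requests served from the cache, the average lands at 12.5 ns, far closer to the cache's speed than to main memory's, even though the miss penalty is six times larger.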
Some inexpensive systems dispense with the level 2 cache altogether. Many high-performance CPUs now have the level 2 cache actually built into the CPU chip itself. Therefore, the size of the level 2 cache and whether it is onboard (on the CPU) is a major determining factor in the performance of a CPU. For more details on caching, see How Caching Works.
A particular type of RAM, static random access memory (SRAM), is used primarily for cache. SRAM uses multiple transistors, typically four to six, for each memory cell. It has an external gate array known as a bistable multivibrator that switches, or flip-flops, between two states. This means that it does not have to be continually refreshed like DRAM. Each cell will maintain its data as long as it has power. Without the need for constant refreshing, SRAM can operate extremely quickly. But the complexity of each cell makes it prohibitively expensive for use as standard RAM.
The SRAM in the cache can be asynchronous or synchronous. Synchronous SRAM is designed to exactly match the speed of the CPU, while asynchronous is not. That little bit of timing makes a difference in performance. Matching the CPU's clock speed is a good thing, so always look for synchronous SRAM. (For more information on the various types of RAM, see How RAM Works.)
The final step in memory is the registers. These are memory cells built right into the CPU that contain specific data needed by the CPU, particularly the arithmetic and logic unit (ALU). An integral part of the CPU itself, they are controlled directly by the compiler that sends information for the CPU to process. See How Microprocessors Work for details on registers.
For a handy printable guide to computer memory, you can print the HowStuffWorks Big List of Computer Memory Terms.
For more information on computer memory and related topics, check out the links on the next page.