
Memory concept

Memory is central to the operation of a modern computer system. Memory is a large array of words or bytes, each with its own address. The CPU fetches instructions from memory according to the value of the program counter. These instructions may cause additional loading from and storing to specific memory addresses. A typical instruction-execution cycle, for example, first fetches an instruction from memory. The instruction is then decoded and may cause operands to be fetched from memory. After the instruction has been executed on the operands, results may be stored back in memory. Notice that the memory unit sees only a stream of memory addresses; it does not know how they are generated (by the instruction counter, indexing, indirection, literal addresses, and so on) or what they are for (instructions or data). Accordingly, we can ignore how a memory address is generated by a program. We are interested only in the sequence of memory addresses generated by the running program.
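To make the fetch-decode-execute cycle concrete, here is a minimal sketch of a toy machine in C. The instruction encoding (an opcode byte followed by an address operand) and the tiny memory size are assumptions made purely for illustration; they do not correspond to any particular processor.

```c
#include <stdio.h>
#include <stdint.h>

#define MEM_SIZE 256

/* Hypothetical opcodes for a toy machine (illustration only). */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint8_t memory[MEM_SIZE] = {0};
    /* A tiny program: load memory[100], add memory[101], store to memory[102]. */
    uint8_t program[] = { OP_LOAD, 100, OP_ADD, 101, OP_STORE, 102, OP_HALT };
    for (size_t i = 0; i < sizeof program; i++) memory[i] = program[i];
    memory[100] = 7;
    memory[101] = 5;

    uint8_t pc  = 0;   /* program counter: next address to fetch from */
    uint8_t acc = 0;   /* accumulator register                        */
    for (;;) {
        uint8_t opcode = memory[pc++];            /* fetch                       */
        if (opcode == OP_HALT) break;             /* decode                      */
        uint8_t addr = memory[pc++];              /* operand is a memory address */
        switch (opcode) {                         /* execute                     */
        case OP_LOAD:  acc = memory[addr];                   break;
        case OP_ADD:   acc = (uint8_t)(acc + memory[addr]);  break;
        case OP_STORE: memory[addr] = acc;                   break;
        }
    }
    printf("memory[102] = %d\n", memory[102]);    /* prints 12 */
    return 0;
}
```

Note that, just as the text says, the memory array above sees nothing but a sequence of addresses; whether an access came from the fetch step or the execute step is invisible to it.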

Address Binding

Usually a program resides on a disk as a binary executable file. The program must be brought into memory and placed within a process for it to be executed. Depending on the memory management in use, the process may be moved between disk and memory during its execution. The collection of processes on the disk that are waiting to be brought into memory for execution forms the input queue. The normal procedure is to select one of the processes in the input queue and to load that process into memory. As the process is executed, it accesses instructions and data from memory. Eventually, the process terminates, and its memory space is declared available. Most systems allow a user process to reside in any part of the physical memory. Thus, although the address space of the computer starts at 00000, the first address of the user process does not need to be 00000. This arrangement affects the addresses that the user program can use. In most cases, a user program will go through several steps (some of which may be optional) before being executed. Addresses may be represented in different ways during these steps. Addresses in the source program are generally symbolic (such as count). A compiler will typically bind these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module"). The linkage editor or loader will in turn bind these relocatable addresses to absolute addresses (such as 74014). Each binding is a mapping from one address space to another.

Classically, the binding of instructions and data to memory addresses can be done at any step along the way:


Compile Time:

If it is known at compile time where the process will reside in memory, then absolute code can be generated. For example, if it is known a priori that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code. The MS-DOS .COM-format programs are absolute code bound at compile time.

Load Time:

If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.

Execution Time:

If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. Special hardware must be available for this scheme to work.
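The difference between these binding times can be illustrated with a small relocation sketch in C. It pretends that a compiler has produced relocatable code whose address fields are offsets from the beginning of the module (as in the "14 bytes from the beginning of this module" example above); the loader then rebinds them by adding the chosen load address. The symbol names, the second offset, and the load address 74000 are assumptions for illustration only.

```c
#include <stdio.h>

/* A relocatable "address field": an offset from the start of the module,
 * as a compiler might emit it (e.g. "14 bytes from the beginning"). */
struct reloc_entry {
    const char *symbol;   /* symbolic name in the source program          */
    long        offset;   /* relocatable address produced by the compiler */
};

int main(void) {
    struct reloc_entry table[] = {
        { "count", 14 },
        { "total", 20 },
    };
    long load_address = 74000;   /* assumed location chosen at load time */

    /* Load-time binding: the loader turns each relocatable address into an
     * absolute address by adding the load address.  If load_address changes,
     * only this step is repeated; no recompilation is needed. */
    for (int i = 0; i < 2; i++) {
        long absolute = load_address + table[i].offset;
        printf("%s: offset %ld -> absolute %ld\n",
               table[i].symbol, table[i].offset, absolute);
    }
    return 0;
}
```

With a load address of 74000, the offset 14 becomes the absolute address 74014, matching the example given earlier; execution-time binding would instead perform this addition on every memory access, in hardware.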

Dynamic loading

To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a relocatable load format. The main program is loaded into memory and is executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If it has not been, the relocatable linking loader is called to load the desired routine into memory and to update the program's address tables to reflect this change. Then, control is passed to the newly loaded routine.

The advantage of dynamic loading is that an unused routine is never loaded. This scheme is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines. In this case, although the total program size may be large, the portion that is actually used (and hence actually loaded) may be much smaller.

Dynamic loading does not require special support from the operating system. It is the responsibility of the users to design their programs to take advantage of such a scheme. Operating systems may help the programmer, however, by providing library routines to implement dynamic loading.
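As a rough illustration of the "load a routine only when it is first called" idea, the sketch below uses the POSIX dlopen/dlsym library routines to pull a shared object into memory on demand. The library name libmathext.so and the routine name heavy_error_handler are hypothetical; the point is only the check-then-load-then-call pattern described above.

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic-loading interface; link with -ldl */

/* Cached address of the routine once it has been loaded. */
static void (*heavy_error_handler)(int) = NULL;

static void report_error(int code) {
    if (heavy_error_handler == NULL) {
        /* Routine not loaded yet: bring the (hypothetical) library into
         * memory now and look up the routine's address. */
        void *handle = dlopen("libmathext.so", RTLD_NOW);
        if (handle == NULL) {
            fprintf(stderr, "cannot load library: %s\n", dlerror());
            return;
        }
        heavy_error_handler = (void (*)(int))dlsym(handle, "heavy_error_handler");
        if (heavy_error_handler == NULL) {
            fprintf(stderr, "cannot find routine: %s\n", dlerror());
            return;
        }
    }
    /* Control is passed to the newly loaded routine. */
    heavy_error_handler(code);
}

int main(void) {
    report_error(42);   /* the first call triggers the load */
    return 0;
}
```

If report_error is never called, the error-handling library never occupies memory, which is exactly the saving described above.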

DYNAMIC LINKING

Most operating systems support only static linking, in which system language libraries are treated like any other object module and are combined by the loader into the binary program image. The concept of dynamic linking is similar to that of dynamic loading. Rather than loading being postponed until execution time, linking is postponed. This feature is usually used with system libraries, such as language subroutine libraries. Without this facility, all programs on a system would need to have a copy of their language library (or at least the routines referenced by the program) included in the executable image. This requirement wastes both disk space and main memory. With dynamic linking, a stub is included in the image for each library-routine reference. This stub is a small piece of code that indicates how to locate the appropriate memory-resident library routine, or how to load the library if the routine is not already present. When the stub is executed, it checks to see whether the needed routine is already in memory. If the routine is not in memory, the program loads it into memory. Either way, the stub replaces itself with the address of the routine; thus, the next time that code segment is reached, the library routine is executed directly, incurring no cost for dynamic linking. Under this scheme, all processes that use a language library execute only one copy of the library code.
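A hand-rolled approximation of such a stub is sketched below: on its first call it resolves the routine's address and caches it, so later calls jump straight to the real routine. Real dynamic linkers do this inside the loader's own data structures (for example, a procedure-linkage table); the function names here are invented for the example.

```c
#include <stdio.h>

static double real_sqrt_impl(double x);
static double sqrt_stub(double x);

/* The slot initially points at the stub; after the first call it holds
 * the address of the real routine. */
static double (*sqrt_slot)(double) = sqrt_stub;

/* The stub: locate the routine, then update the slot so that later
 * calls reach the routine directly. */
static double sqrt_stub(double x) {
    printf("stub: resolving routine address\n");
    sqrt_slot = real_sqrt_impl;   /* the stub "replaces itself" */
    return sqrt_slot(x);
}

/* Stand-in for the memory-resident library routine. */
static double real_sqrt_impl(double x) {
    double guess = x > 1.0 ? x / 2.0 : 1.0;
    for (int i = 0; i < 20; i++)  /* Newton's method */
        guess = 0.5 * (guess + x / guess);
    return guess;
}

int main(void) {
    printf("%f\n", sqrt_slot(2.0));  /* first call goes through the stub        */
    printf("%f\n", sqrt_slot(9.0));  /* later calls reach the routine directly  */
    return 0;
}
```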

This feature can be extended to library updates (such as bug fixes). A library may be replaced by a new version, and all programs that reference the library will automatically use the new version. Without dynamic linking, all such programs would need to be relinked to gain access to the new library. So that programs will not accidentally execute new, incompatible versions of libraries, version information is included in both the program and the library. More than one version of a library may be loaded into memory, and each program uses its version information to decide which copy of the library to use. Minor changes retain the same version number, whereas major changes increment the version number. Thus, only programs that are compiled with the new library version are affected by the incompatible changes incorporated in it. Other programs linked before the new library was installed will continue using the older library. This system is also known as shared libraries.

OVERLAYS

So far, the entire program and data of a process must be in physical memory for the process to execute. The size of a process is therefore limited to the size of physical memory. So that a process can be larger than the amount of memory allocated to it, a technique called overlays is sometimes used. The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into space that was occupied previously by instructions that are no longer needed.
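The sketch below illustrates the overlay idea with data rather than code: two passes of a hypothetical two-pass assembler reuse the same fixed-size overlay region, so only one pass's tables occupy memory at a time. The region size and the pass structure are assumptions chosen only to show the reuse of the space.

```c
#include <stdio.h>
#include <string.h>

#define OVERLAY_SIZE 4096   /* assumed size of the shared overlay region */

/* One region of memory, reused by whichever overlay is currently needed. */
static unsigned char overlay_region[OVERLAY_SIZE];

/* Stand-in for "loading" an overlay from disk into the shared region. */
static void load_overlay(const char *name) {
    memset(overlay_region, 0, sizeof overlay_region);
    printf("loaded overlay '%s' into the %d-byte region\n", name, OVERLAY_SIZE);
}

int main(void) {
    load_overlay("pass1-symbol-table");   /* pass 1 occupies the region        */
    /* ... pass 1 runs here ... */

    load_overlay("pass2-code-generator"); /* pass 2 overwrites pass 1's space  */
    /* ... pass 2 runs here ... */
    return 0;
}
```

Only OVERLAY_SIZE bytes are ever committed to the two passes, even though together they would need twice that much if both were resident at once.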

LOGICAL VS PHYSICAL ADDRESS

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address. The run-time mapping from virtual to physical addresses is done by the memory-management unit (MMU), which is a hardware device. The base register is now called a relocation register. The value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The MS-DOS operating system running on the Intel 8086 family of processors uses four relocation registers when loading and running processes. Notice that the user program never sees the physical addresses. The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
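A minimal sketch of this run-time mapping is shown below: every logical address issued by the "user program" is translated by adding the relocation register, and a limit register (an assumption added here for completeness, not mentioned in the text above) rejects addresses outside the process's space. The register values are arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hardware state of a simple MMU: one relocation (base) register and,
 * as an added assumption, a limit register for protection. */
static unsigned long relocation_register = 140000;
static unsigned long limit_register      = 12000;

/* Translate a logical address to a physical address at access time. */
static unsigned long translate(unsigned long logical) {
    if (logical >= limit_register) {
        fprintf(stderr, "addressing error: %lu out of range\n", logical);
        exit(EXIT_FAILURE);
    }
    return relocation_register + logical;   /* run-time relocation */
}

int main(void) {
    /* The user program only ever deals in logical addresses 0..11999. */
    printf("logical 0   -> physical %lu\n", translate(0));
    printf("logical 346 -> physical %lu\n", translate(346));
    return 0;
}
```

Because the addition happens on every access, the process can be placed anywhere in physical memory (or moved) simply by changing the relocation register.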

swapping

A process needs to be in memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. Consider, for example, a multiprogramming environment with a round-robin CPU-scheduling algorithm.

When each process finishes its quantum, it will be swapped with another process. Ideally, the memory manager can swap processes fast enough that there are always processes in memory, ready to execute, when the CPU scheduler wants to reschedule the CPU. The quantum must also be sufficiently large that reasonable amounts of computing are done between swaps.
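To see why the quantum must be large relative to swap time, the sketch below estimates the time to swap one process out and another in, under assumed figures (a 1 MB process image, a 5 MB per second disk transfer rate, and 8 ms of disk latency); none of these numbers come from the text above.

```c
#include <stdio.h>

int main(void) {
    /* All figures below are assumptions for illustration. */
    double image_bytes   = 1.0 * 1024 * 1024;   /* 1 MB process image        */
    double transfer_rate = 5.0 * 1024 * 1024;   /* 5 MB/s disk transfer rate */
    double latency_s     = 0.008;               /* 8 ms average disk latency */

    /* One transfer of the image plus latency, in seconds. */
    double one_way = image_bytes / transfer_rate + latency_s;

    /* A full swap is one process out plus another process in. */
    double swap_time = 2.0 * one_way;

    printf("one-way transfer: %.1f ms\n", one_way * 1000.0);
    printf("total swap time : %.1f ms\n", swap_time * 1000.0);
    printf("the quantum should be substantially larger than %.1f ms\n",
           swap_time * 1000.0);
    return 0;
}
```

With these assumed figures the swap alone costs hundreds of milliseconds, so a quantum of similar size would leave almost no time for useful computation.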

A variant of this swapping policy is used for priority-based scheduling algorithms. If a higher-priority process arrives and wants service, the memory manager can swap out the lower-priority process so that it can load and execute the higher-priority process. When the higher-priority process finishes, the lower-priority process can be swapped back in and continued. This variant of swapping is sometimes called roll out, roll in.

Normally a process that is swapped out will be swapped back into the same memory space that it occupied previously. This restriction is dictated by the method of address binding. If binding is done at assembly or load time, then the process cannot be moved to a different location. If execution-time binding is being used, however, then the process can be swapped into a different memory space, because the physical addresses are computed during execution time.

Swapping requires a backing store. The backing store is commonly a fast disk. It must be large enough to accommodate copies of all memory images for all users, and it must provide direct access to these memory images. The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run.

Currently, standard swapping is used in few systems. It requires too much swapping time and provides too little execution time to be a reasonable memory-management solution. Modified versions of swapping, however, are found on many systems.

A modification of swapping is used in many versions of UNIX. Swapping was normally disabled, but would start if many processes were running and were using a threshold amount of memory. Swapping would again be halted if the load on the system was reduced.

Any swapped-out process remains swapped out (and not executing) until the user selects that process to run. The follow-on Microsoft operating system, Windows NT, takes advantage of advanced MMU features now found even on PCs.

paging

Another possible solution to the external-fragmentation problem is to permit the physical address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever such memory is available. One way of implementing this solution is through the use of a paging scheme. Paging avoids the considerable problem of fitting the varying-sized memory chunks onto the backing store, from which most of the previous memory-management schemes suffered. When some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store. The fragmentation problems discussed in connection with main memory are also prevalent with backing store, except that access is much slower, so compaction is impossible. Because of its advantages over the previous methods, paging in its various forms is commonly used in many operating systems.

BASIC METHOD

Physical memory is broken into fixed-sized blocks called frames. Logical memory is also broken into blocks of the same size, called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
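A minimal sketch of the resulting address translation is shown below: the logical address is split into a page number and an offset, the page number indexes a per-process page table to obtain a frame number, and the frame number is recombined with the offset. The 1 KB page size and the contents of the page table are assumptions for illustration.

```c
#include <stdio.h>

#define PAGE_SIZE 1024   /* assumed page/frame size in bytes */

/* Assumed page table: page_table[p] is the frame holding page p. */
static unsigned int page_table[] = { 5, 6, 1, 2 };

/* Translate a logical address into a physical address. */
static unsigned long translate(unsigned long logical) {
    unsigned long page   = logical / PAGE_SIZE;   /* page number               */
    unsigned long offset = logical % PAGE_SIZE;   /* offset within the page    */
    unsigned long frame  = page_table[page];      /* frame from the page table */
    return frame * PAGE_SIZE + offset;
}

int main(void) {
    /* Logical address 2100 = page 2, offset 52; page 2 lives in frame 1. */
    printf("logical 2100 -> physical %lu\n", translate(2100));
    return 0;
}
```

Because any free frame can hold any page, the process's physical memory need not be contiguous, which is the whole point of the scheme.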

SEGMENTATION

An important aspect of memory management that became unavoidable with paging is the separation of the user's view of memory from the actual physical memory. The user's view of memory is not the same as the actual physical memory. The user's view is mapped onto physical memory. The mapping allows differentiation between logical memory and physical memory.

BASIC METHOD

What is the user's view of memory? Does the user think of memory as a linear array of bytes, some containing instructions and others containing data, or is there some other preferred memory view? There is general agreement that the user or programmer of a system does not think of memory as a linear array of bytes. Rather, the user prefers to view memory as a collection of variable-sized segments, with no necessary ordering among the segments.

Consider how you think of a program when you are writing it: as a main program with a set of subroutines, procedures, functions, or modules. There may also be various data structures: tables, arrays, stacks, variables, and so on. Each of these modules or data elements is referred to by name. You talk about "the symbol table," "function sqrt," or "the main program" without caring what addresses in memory these elements occupy. You are not concerned with whether the symbol table is stored before or after the sqrt function. Each of these segments is of variable length; the length is intrinsically defined by the purpose of the segment in the program. Elements within a segment are identified by their offset from the beginning of the segment: the first statement of the program, the 17th entry in the symbol table, the 5th instruction of the sqrt function, and so on.

Segmentation is a memory-management scheme that supports this user view of memory. A logical address space is a collection of segments. Each segment has a name and a length. Addresses specify both the segment name and the offset within the segment. The user therefore specifies each address by two quantities: a segment name and an offset. (Contrast this scheme with paging, in which the user specified only a single address, which was partitioned by the hardware into a page number and an offset, all invisible to the programmer.) For simplicity of implementation, segments are numbered and are referred to by a segment number, rather than by a segment name. Thus, a logical address consists of a two-tuple:


<segment-number, offset>.
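To make the two-tuple concrete, here is a minimal sketch of segment-table translation in C: the segment number indexes a table of base/limit pairs, the offset is checked against the segment's length, and the physical address is base plus offset. The table contents are assumptions for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* Assumed segment table: one base/limit pair per segment. */
struct segment { unsigned long base, limit; };

static struct segment segment_table[] = {
    { 1400, 1000 },   /* segment 0 */
    { 6300,  400 },   /* segment 1 */
    { 4300, 1100 },   /* segment 2 */
};

/* Translate a <segment-number, offset> pair into a physical address. */
static unsigned long translate(unsigned int seg, unsigned long offset) {
    if (offset >= segment_table[seg].limit) {
        fprintf(stderr, "trap: offset %lu beyond end of segment %u\n",
                offset, seg);
        exit(EXIT_FAILURE);
    }
    return segment_table[seg].base + offset;
}

int main(void) {
    printf("<2, 53> -> physical %lu\n", translate(2, 53));   /* prints 4353 */
    return 0;
}
```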

Normally, the user program is compiled, and the compiler automatically constructs segments reflecting the input program. A Pascal compiler might create separate segments for the following:

1. The global variables;
2. The procedure call stack, to store parameters and return addresses;
3. The code portion of each procedure or function;
4. The local variables of each procedure and function.

A FORTRAN compiler might create a separate segment for each common block. Arrays might be assigned separate segments. The loader would take all these segments and assign them segment numbers.

FRAGMENTATION

The long-term scheduler must find and allocate memory for all the segments of a user program. This situation is similar to paging except that the segments are of variable length; pages are all the same size. Thus, as with the variable-sized partition scheme, memory allocation is a dynamic storage-allocation problem, usually solved with a best-fit or first-fit algorithm. Segmentation may then cause external fragmentation, when all blocks of free memory are too small to accommodate a segment. In this case, the process may simply have to wait until more memory (or at least a larger hole) becomes available, or compaction may be used to create a larger hole. Because segmentation is by its nature a dynamic-relocation algorithm, we can compact memory whenever we want. If the CPU scheduler must wait for one process because of a memory-allocation problem, it may (or may not) skip through the CPU queue looking for a smaller, lower-priority process to run.
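The sketch below illustrates the first-fit strategy mentioned above against a hand-made list of free holes; the hole sizes and the request are assumptions, and a production allocator would also split holes and coalesce neighbours, which is omitted here for brevity.

```c
#include <stdio.h>

/* Assumed list of free holes (sizes in KB) left after earlier allocations. */
static int hole_size[] = { 100, 500, 200, 300, 600 };
static const int num_holes = 5;

/* First fit: return the index of the first hole large enough for the
 * requested segment, or -1 if external fragmentation blocks the request. */
static int first_fit(int request) {
    for (int i = 0; i < num_holes; i++)
        if (hole_size[i] >= request)
            return i;
    return -1;
}

int main(void) {
    int request = 212;                  /* segment of 212 KB */
    int i = first_fit(request);
    if (i < 0)
        printf("no hole large enough: wait or compact memory\n");
    else
        printf("segment of %d KB placed in hole %d (%d KB)\n",
               request, i, hole_size[i]);
    return 0;
}
```

Best fit differs only in scanning the whole list and picking the smallest hole that is still large enough; both leave leftover fragments that can accumulate into external fragmentation.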

SEGMENTATION WITH PAGING

Both paging and segmentation have their advantages and disadvantages. In fact, of the two most popular microprocessor families now being used, the Motorola 68000 line is designed based on a flat address space, whereas the Intel 8086 family is based on segmentation. Both are merging memory models toward a mixture of paging and segmentation.

It is possible to combine these two schemes to improve on each. This combination is best illustrated by two different architectures: the innovative but not widely used MULTICS system and the Intel 386.

MULTICS

In the MULTICS system, a logical address is formed from an 18-bit segment number and a 16-bit offset. Although this scheme creates a 34-bit address space, the segment-table overhead is tolerable: we need only as many segment-table entries as we have segments, and there need not be empty segment-table entries.
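As a quick check of these numbers, the sketch below splits an assumed 34-bit MULTICS-style logical address into its 18-bit segment number and 16-bit offset using shifts and masks; the sample address itself is arbitrary.

```c
#include <stdio.h>
#include <stdint.h>

#define OFFSET_BITS  16
#define SEGMENT_BITS 18

int main(void) {
    /* 2^18 segments, each up to 2^16 bytes, gives a 2^34-byte address space. */
    uint64_t segments     = 1ULL << SEGMENT_BITS;
    uint64_t segment_size = 1ULL << OFFSET_BITS;
    printf("address space: %llu bytes (2^34)\n",
           (unsigned long long)(segments * segment_size));

    /* Split an arbitrary 34-bit logical address into <segment, offset>. */
    uint64_t logical = 0x2ABCD1234ULL & ((1ULL << 34) - 1);
    uint64_t segment = logical >> OFFSET_BITS;
    uint64_t offset  = logical & (segment_size - 1);
    printf("logical 0x%llx -> segment %llu, offset %llu\n",
           (unsigned long long)logical,
           (unsigned long long)segment,
           (unsigned long long)offset);
    return 0;
}
```

Only the segments a program actually uses need entries in its segment table, which is why the 18-bit segment number does not by itself force a huge table.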