Functions
Read the following words and word combinations and use them for understanding and translation of the text:
memory management - управление памятью
thread - поток (единица диспетчеризации в современных ОС)
to allocate - распределять, выделять
logical address - логический адрес, адрес в виртуальной памяти
physical address - физический адрес
mapping - преобразование, отображение
to keep track - отслеживать
contiguous - смежный, прилегающий, непрерывный
partition - раздел, разделение
fixed partition - статическое (фиксированное) распределение памяти
dynamic partition - динамическое распределение памяти
first fit - метод первого подходящего
best fit - метод наилучшего подходящего, наилучшее размещение
frame - рамка, фрейм
paging - подкачка (замещение) страниц, страничная организация памяти
demand paging - замещение страниц по запросу
process management - управление процессами
process control block (PCB) - блок управления процессом
context switch - переключение контекста
CPU scheduling - планирование (диспетчеризация) процессора
non-preemptive scheduling - невытесняющее (бесприоритетное) планирование
preemptive scheduling - вытесняющее планирование
full-fledged - полноценный
coherence - согласованность, слаженность
snooping - отслеживание
deadlock - взаимоблокировка, зависание
While the architecture and features of operating systems differ considerably, there are general functions common to almost every system. The “core” functions include “booting” the system and initializing devices, process management (loading programs into memory and assigning them a share of processing time), and allowing processes to communicate with the operating system or one another (kernel). Multiprogramming systems often implement not only processes (running programs) but also threads, or sections of code within programs that can be controlled separately.

A memory management scheme is used to organize and address memory, handle requests to allocate memory, free up memory no longer being used, and rearrange memory to maximize the useful amount. In a multiprogramming environment, multiple programs are stored in main memory at the same time. Thus, operating systems must employ techniques to:
- track where and how a program resides in memory,
- convert logical program addresses into actual memory addresses.

A logical address (sometimes called a virtual or relative address) is a value that specifies a generic location relative to the program, but not to the reality of main memory. A physical address is an actual address in the main memory device. When the program is eventually loaded into memory, each logical address finally corresponds to a specific physical address. The mapping of a logical address to a physical address is called address binding. Logical addresses allow a program to be moved around in memory or loaded in different places at different times. As long as we keep track of where the program is stored, we can always determine the physical address that corresponds to any given logical address. There are three techniques:
- single contiguous memory management,
- partition memory management,
- paged memory management.
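Address binding can be illustrated with a minimal sketch (the base, limit, and address values below are invented for illustration): the logical address is an offset relative to the start of the program, and the physical address is that offset plus the base address at which the program happens to be loaded.

```python
# Minimal sketch of address binding. A logical address is an offset
# within the program; the physical address is computed by adding the
# base register that records where the program was loaded.

def to_physical(logical: int, base: int, limit: int) -> int:
    """Map a logical address to a physical one; reject out-of-range offsets."""
    if not 0 <= logical < limit:
        raise ValueError(f"logical address {logical} outside program of size {limit}")
    return base + logical

# The same logical address maps to different physical addresses,
# depending on where the program is currently loaded.
print(to_physical(100, base=4000, limit=1024))  # 4100
print(to_physical(100, base=9000, limit=1024))  # 9100
```

The limit check is why logical addressing also provides protection: a process cannot form a physical address outside its own region of memory.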
Single contiguous memory management is the approach in which the entire application program is loaded into one continuous area of memory. Only one program other than the operating system can be processed at one time. The advantage of this approach is that it is simple to implement and manage. However, memory space and CPU time are almost certainly wasted: it is unlikely that an application program needs all of the memory not used by the operating system, and CPU time is wasted when the program has to wait for some resource.

A more sophisticated approach - partition memory management - is to have more than one application program in memory at a time, sharing memory space and CPU time. Thus, memory must be divided into more than two partitions. There are two strategies that can be used to partition memory: fixed partitions and dynamic partitions.

When using fixed partitions, main memory is divided into a particular number of partitions. The partitions do not have to be the same size, but their size is fixed when the operating system initially boots. The OS keeps a table of the address at which each partition begins and the length of the partition.

When using dynamic partitions, the partitions are created to fit the needs of the programs. Main memory is initially viewed as one large empty partition. As programs are loaded, space is “carved out”, using only the space needed to accommodate the program and leaving a new, smaller empty partition, which may be used by another program later. The OS maintains a table of partition information, but in dynamic partitions the address information changes as programs come and go.

At any point in time, in both fixed and dynamic partitions, memory is divided into a set of partitions, some empty and some allocated to programs. Which partition should we allocate to a new program?
There are three general approaches to partition selection:
- first fit, in which the program is allocated to the first partition big enough to hold it;
- best fit, in which the program is allocated to the smallest partition big enough to hold it;
- worst fit, in which the program is allocated to the largest partition big enough to hold it.

Worst fit does not make sense in fixed partitions because it would waste the larger partitions; first fit or best fit works for fixed partitions. In dynamic partitions, however, worst fit often works best because it leaves the largest possible empty partition, which may accommodate another program later on.

Partition memory management makes efficient use of main memory by having several programs in memory at one time. Paged memory management puts much more burden on the operating system to keep track of allocated memory and to resolve addresses, but the benefits gained by this approach are generally worth the extra effort. In paged memory management, main memory is divided into small fixed-size blocks of storage called frames. A process is divided into pages that we assume are the same size as a frame. When a program is to be executed, the pages of the process are loaded into various unused frames distributed through memory. Thus, the pages of a process may be scattered around, out of order, and mixed among the pages of other processes. To keep track of all this, the OS maintains a separate page-map table (PMT) for each process in memory; it maps each page to the frame in which it is loaded. The advantage of paging is that a process no longer needs to be stored contiguously in memory. The ability to divide a process into pieces changes the challenge of loading a process from finding one available large chunk of space to finding enough small chunks.
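The first fit, best fit, and worst fit strategies above can be sketched in a few lines. The free partitions are modeled simply as a list of sizes; all sizes are made up for illustration, and real allocators track addresses as well.

```python
# Partition-selection sketch: each function returns the index of the
# chosen free partition, or None if no partition is big enough.

def first_fit(free, size):
    for i, length in enumerate(free):
        if length >= size:
            return i           # earliest partition that fits
    return None

def best_fit(free, size):
    fits = [(length, i) for i, length in enumerate(free) if length >= size]
    return min(fits)[1] if fits else None   # smallest adequate partition

def worst_fit(free, size):
    fits = [(length, i) for i, length in enumerate(free) if length >= size]
    return max(fits)[1] if fits else None   # largest adequate partition

free = [300, 600, 350, 200]    # sizes of the empty partitions
print(first_fit(free, 320))    # 1 - first big-enough partition (600)
print(best_fit(free, 320))     # 2 - smallest big-enough partition (350)
print(worst_fit(free, 320))    # 1 - largest partition (600)
```

Note how the same request of size 320 lands in different partitions under each policy, which is exactly the trade-off the text describes: best fit minimizes immediate waste, while worst fit leaves the biggest leftover hole for later programs.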
An important extension to paged memory management is demand paging, which takes advantage of the fact that not all parts of a program actually have to be in memory at the same time. At any given instant, the CPU is accessing one page of a process; at that point, it does not really matter whether the other pages of that process are even in memory.

Process management. Another important resource that an operating system must manage is the use of the CPU by individual processes. Processes move through specific states as they are managed in a computer system. A process enters the system (the new state), is ready to be executed (the ready state), is executing (the running state), is waiting for a resource (the waiting state), or is finished (the terminated state). Note that many processes may be in the ready state or the waiting state at the same time, but only one process can be in the running state. While running, the process might be interrupted by the operating system to allow another process its chance on the CPU; in that case, the process simply returns to the ready state. Or a running process might request a resource that is not available, or require I/O to retrieve a newly referenced part of the process, in which case it is moved to the waiting state. A running process finally gets enough CPU time to complete its processing and terminate normally. When a waiting process gets the resource it is waiting for, it moves to the ready state again.

The OS must manage a large amount of data for each active process. Usually that data is stored in a data structure called a process control block (PCB). Generally, each state is represented by a list of PCBs, one for each process in that state. When a process moves from one state to another, its corresponding PCB is moved from one state list to another in the operating system. A new PCB is created when a process is first created (the new state) and is kept around until the process terminates.
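The process life cycle described above can be encoded as a small state machine. This is only a sketch of the legal transitions named in the text, not how any real kernel stores them; a real OS keeps the state field inside the PCB.

```python
# Legal state transitions of the process life cycle from the text.
TRANSITIONS = {
    "new":        {"ready"},
    "ready":      {"running"},
    "running":    {"ready", "waiting", "terminated"},
    "waiting":    {"ready"},
    "terminated": set(),
}

class Process:
    """Toy stand-in for a PCB: just a process id and its current state."""

    def __init__(self, pid: int):
        self.pid = pid
        self.state = "new"

    def move_to(self, state: str) -> None:
        if state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {state}")
        self.state = state

p = Process(1)
# interrupted once (running -> ready), then waits for I/O, then finishes
for s in ("ready", "running", "ready", "running", "waiting",
          "ready", "running", "terminated"):
    p.move_to(s)
print(p.state)  # terminated
```

The table makes the text's rules checkable: a waiting process can only go back to ready (never straight to running), and a terminated process goes nowhere.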
The PCB stores a variety of information about the process, including the current value of the program counter, which indicates which instruction in the process is to be executed next. As the life cycle indicates, a process may be interrupted many times during its execution. Interrupts are handled by the operating system’s kernel and may come either from the computer’s hardware or from the running program. At each interruption, the process’s program counter must be stored so that the next time it gets into the running state it can pick up where it left off.

The PCB also stores the values of all other CPU registers for that process. The CPU registers contain the values for the currently executing process (the one in the running state). Each time a process is moved to the running state, the register values of the previously running process are stored into its PCB, and the register values of the new running process are loaded into the CPU. This exchange of register information, which occurs when one process is removed from the CPU and another takes its place, is called a context switch.

The PCB also maintains information about CPU scheduling. CPU scheduling is the act of determining which process in the ready state should be moved to the running state. There are two types of CPU scheduling:
- non-preemptive scheduling, which occurs when the currently executing process gives up the CPU voluntarily (when a process switches from the running state to the waiting state, or when a program terminates);
- preemptive scheduling, which occurs when the operating system decides to favor another process, preempting the currently executing process.

First-come, first-served CPU scheduling gives priority to the earliest arriving job. The shortest-job-next algorithm gives priority to jobs with short running times. Round-robin scheduling rotates the CPU among active processes, giving a little time to each.

For many applications, a process needs exclusive access not to one resource, but to several.
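Round-robin scheduling is easy to simulate. In this sketch the burst times and the quantum are invented for illustration: each process gets one time slice, and if it still needs CPU time it rejoins the back of the ready queue.

```python
from collections import deque

def round_robin(bursts: dict, quantum: int) -> list:
    """Return the order in which processes receive CPU time slices."""
    queue = deque(bursts.items())            # ready queue: (name, remaining time)
    order = []
    while queue:
        name, remaining = queue.popleft()    # dispatch the next ready process
        order.append(name)
        remaining -= quantum                 # it runs for one quantum
        if remaining > 0:
            queue.append((name, remaining))  # preempted: back of the line
    return order

# A needs 5 units, B needs 3, C needs 8, with a quantum of 4
print(round_robin({"A": 5, "B": 3, "C": 8}, quantum=4))
# ['A', 'B', 'C', 'A', 'C']
```

B finishes within its first slice, while A and C are preempted and must wait for another turn, which is exactly the “a little time to each” rotation the text describes.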
Suppose, for example, that two processes each want to record a scanned document on a CD. Process A requests permission to use the scanner and is granted it. Process B is programmed differently and requests the CD recorder first, and is also granted it. Now A asks for the CD recorder, but the request is denied until B releases it. Unfortunately, instead of releasing the CD recorder, B asks for the scanner. At this point both processes are blocked: A holds the scanner and waits for the recorder, while B holds the recorder and waits for the scanner, so neither can ever proceed. This situation is called a deadlock. Deadlocks can occur on both hardware and software resources.
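One standard way to avoid this deadlock is to make every process acquire the resources in the same global order. The sketch below models the scanner and CD recorder as locks and the two processes as threads; the names are just illustrations of the example above, not a real device API.

```python
import threading

# Model the two resources from the example as locks.
scanner = threading.Lock()
recorder = threading.Lock()

def record_document(name: str, results: list) -> None:
    # Both workers acquire the resources in the SAME order
    # (scanner first, recorder second), so neither can hold one
    # resource while waiting for the other to be released --
    # the circular wait that caused the deadlock cannot form.
    with scanner:
        with recorder:
            results.append(name)   # "scan and record" the document

results = []
threads = [threading.Thread(target=record_document, args=(n, results))
           for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['A', 'B'] - both processes complete
```

Had process B acquired the recorder first, as in the text, the two threads could block each other forever; imposing one acquisition order removes that possibility at the cost of some flexibility.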