Social Sciences, asked by angela7840, 1 year ago

What is processor thrashing? Give an example of two global scheduling algorithms that may lead to processor thrashing.

Answers

Answered by tanya1dlover
A particularly troublesome phenomenon, thrashing, may seriously interfere with the performance of paged memory systems, reducing computing giants (Multics, IBM System/360, and others not necessarily excepted) to computing dwarfs. The term thrashing denotes excessive overhead and severe performance degradation or collapse caused by too much paging. Thrashing inevitably turns a shortage of memory space into a surplus of processor time.

Performance of paged memory systems has not always met expectations. Consequently there are some who would have us dispense entirely with paging, believing that programs do not generally display behavior favorable to operation in paged memories. We shall show that troubles with paged memory systems arise not from any misconception about program behavior, but rather from a lack of understanding of a three-way relationship among program behavior, paging algorithms, and the system hardware configuration (i.e., relative processor and memory capacities). We shall show that the prime cause of paging's poor performance is not unfavorable program behavior, but rather the large time required to access a page stored in auxiliary memory, together with a sometimes stubborn determination on the part of system designers to simulate large virtual memories by paging small real memories.

After defining the computer system which serves as our context, we shall review the working set model for program behavior, this model being a useful vehicle for understanding the causes of thrashing. Then we shall show that the large values of secondary memory access times make a program's steady-state processing efficiency so sensitive to the paging requirements of other programs that the slightest attempt to overuse main memory can cause service efficiency to collapse. The solution is two-fold: first, to use a memory allocation strategy that insulates one program's memory space acquisitions from those of others; and second, to employ memory system organizations using a non-rotating device (such as slow-speed bulk core storage) between the high-speed main memory and the slow-speed rotating auxiliary memory.

Preliminaries

Figure 1 shows the basic two-level memory system in which we are interested. A set of identical processors has access to M pages of directly-addressable, multiprogrammed main memory; information not in main memory resides in auxiliary memory which has, for our purposes, infinite capacity. There is a time T, the traverse time, involved in moving a page between the levels of memory; T is measured from the moment a missing page is referenced until the moment the required page transfer is completed, and is therefore the expectation of a random variable composed of waits in queues, mechanical positioning delays, page transmission times, and so on. For simplicity, we assume T is the same irrespective of the direction a page is moved.

Normally, the main memory is a core memory, though it could just as well be any other type of directly-addressable storage device. The auxiliary memory is usually a disk or drum, but it could also be a combination of slow-speed core storage and disk or drum. We assume that information is moved into main memory only on demand (demand paging); that is, no attempt is made to move a page into main memory until some program references it. Information is returned from main to auxiliary memory at the discretion of the paging algorithm. The information movement across the channel bridging the two levels of memory is called page traffic. A process is a sequence of references (either fetches or


 

tanya1dlover: CONTD..
tanya1dlover: stores) to a set of information called a program. [Figure 1, "Basic two-level memory system" (Fall Joint Computer Conference, 1968): identical processors attached to a main memory of M pages; page traffic crosses a channel with traverse time T to an auxiliary memory of infinite capacity.] We assume that each program has exactly one process associated with it. In this paper we are interested only in active processes. An active process may be in one of two states: the running state, in which it is
tanya1dlover: executing on a processor; or the page wait state, in which it is temporarily suspended awaiting the arrival of a page from auxiliary memory. We take the duration of the page wait state to be T, the traverse time. When talking about processes in execution, we need to distinguish between real time and virtual time. Virtual time is time seen by an active process, as if there were no page wait
tanya1dlover: interruptions. By definition, a process generates one information reference per unit virtual time. Real time is a succession of virtual time intervals (i.e., computing intervals) and page wait intervals. A virtual time unit (vtu) is the time between two successive information references in a process, and is usually the memory cycle time of the computer system in which the process operates.
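A rough illustration of those definitions (my sketch, not part of the original answer): if a process generates one reference per virtual time unit, a fraction f of references cause page faults, and each fault costs the traverse time T (in vtu), then each vtu of progress takes 1 + fT units of real time, so steady-state efficiency is 1/(1 + fT). The values of f and T below are invented for illustration:

```python
# Illustrative sketch: steady-state processing efficiency under demand paging.
# Assumes one reference per virtual time unit (vtu), a page-fault probability
# f per reference, and a traverse time T (in vtu) spent in page wait per fault.

def efficiency(f: float, T: float) -> float:
    """Virtual time / real time = 1 / (1 + f*T)."""
    return 1.0 / (1.0 + f * T)

# Illustrative traverse times (fast drum vs. slow disk), in vtu.
for T in (1_000, 10_000):
    for f in (0.0001, 0.001, 0.01):
        print(f"T={T:>6} vtu, fault rate f={f}: efficiency = {efficiency(f, T):.3f}")
```

Even a modest fault rate wipes out efficiency when T is large, which is why the paper calls efficiency "so sensitive" to other programs' paging demands.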
I can help with only this, plz do mark me brainliest... @tanya1dlover
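To make the memory-thrashing effect concrete (my own toy sketch, not from Denning's paper): a demand-paging simulation with LRU replacement, where the fault rate is tiny while the program's working set fits in main memory and jumps sharply once memory is overcommitted. The locality model (uniform references within the working set) and all parameters are invented for illustration:

```python
import random
from collections import OrderedDict

def fault_rate(num_frames: int, working_set: int,
               refs: int = 20_000, seed: int = 1) -> float:
    """Fraction of references that page-fault under LRU with `num_frames`
    frames, for a program touching `working_set` distinct pages uniformly."""
    rng = random.Random(seed)
    frames = OrderedDict()            # page -> None, oldest (LRU) first
    faults = 0
    for _ in range(refs):
        page = rng.randrange(working_set)
        if page in frames:
            frames.move_to_end(page)        # hit: mark most recently used
        else:
            faults += 1
            if len(frames) >= num_frames:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = None
    return faults / refs

# Working set fits in memory: almost no faults.
print("fits      :", fault_rate(num_frames=64, working_set=50))
# Memory overcommitted: most references fault -- thrashing territory.
print("overloaded:", fault_rate(num_frames=64, working_set=500))
```

Combined with a large traverse time T, the second case is exactly the collapse the answer above describes.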
Answered by aditivishawkarma

Answer:

Explanation:

In a distributed system, processor thrashing is the state in which all the system nodes spend all of their time transferring (migrating) processes among themselves, in an attempt to schedule the processes for better performance, without doing any fruitful work. This useless back-and-forth transfer of processes is called processor thrashing.

Load-balancing algorithms and load-sharing algorithms are examples of two global scheduling algorithms that may lead to processor thrashing.
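As a hypothetical sketch of how a naive global scheduling rule produces processor thrashing (my illustration, not from the answer above): two nodes each migrate their process whenever a stale load snapshot shows the peer as idle. Because both act on the same stale belief and the snapshot is never refreshed, each process spends every round in transit and none executing:

```python
# Hypothetical sketch of processor thrashing under naive load balancing.
# Each node migrates its process whenever its (stale, never-refreshed)
# snapshot shows the peer as idle, so processes bounce between nodes
# instead of running. All numbers are invented for illustration.

ROUNDS = 10

def simulate():
    work_done = 0
    migrations = 0
    peer_looks_idle = [True, True]   # stale belief: "my peer is idle"
    busy = [1, 1]                    # one process resident on each node
    for _ in range(ROUNDS):
        for node in (0, 1):
            if busy[node] and peer_looks_idle[node]:
                migrations += 1      # process spends this round migrating
            elif busy[node]:
                work_done += 1       # process actually executes
    return work_done, migrations

work, moved = simulate()
print(f"useful work: {work} rounds, time lost to migration: {moved} rounds")
```

The cure in real load-balancing/load-sharing designs is to damp migration (thresholds, fresher load information, limits on re-migration) so that scheduling overhead cannot crowd out useful work.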
