Flow diagram of task states
Mechanisms for synchronization must be able to delay task execution. Synchronization imposes an order of execution on tasks that is enforced with these delays. To understand what happens to tasks through their lifetimes, we must consider how task execution is controlled. Whether a machine has a single processor or more than one, there is always the possibility that there are more tasks than processors. A run-time system program called a scheduler manages the sharing of processors among the tasks.
If there were never any interruptions and tasks all had the same priority, the scheduler could simply give each task a time slice, such as 0.1 second, and when
a task’s turn came, the scheduler could let it execute on a processor for that amount of time. Of course, there are several events that complicate this, for example, task delays for synchronization and for input or output operations. Because input and output operations are very slow relative to the processor’s speed, a task is not allowed to keep a processor while it waits for completion of such an operation.
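The time-slice scheme described above can be illustrated with a minimal round-robin sketch in Python. Here each task is modeled as a generator that yields once per unit of work; the scheduler gives each task a fixed number of units per turn and re-queues it if its slice expires before it finishes. The function and task names are illustrative, not from the text.

```python
from collections import deque

def round_robin(tasks, slice_steps=2):
    """Minimal round-robin sketch: each task is a (name, generator)
    pair; a task gets slice_steps units of work per turn."""
    ready = deque(tasks)       # the task ready queue
    order = []                 # which task ran during each unit
    while ready:
        name, gen = ready.popleft()
        for _ in range(slice_steps):
            try:
                next(gen)              # execute one unit of work
                order.append(name)
            except StopIteration:      # task completed: it dies
                break
        else:
            ready.append((name, gen))  # slice expired: back of the queue
    return order

def work(units):
    for _ in range(units):
        yield

# Task A needs 3 units, task B needs 2; each turn is 2 units.
print(round_robin([("A", work(3)), ("B", work(2))]))  # ['A', 'A', 'B', 'B', 'A']
```

Note that the scheduler does not know in advance how much work a task has left; it discovers completion only when the task stops, just as a real scheduler learns of termination.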
Tasks can be in several different states:
1. New: A task is in the new state when it has been created but has not yet begun its execution.
2. Ready: A ready task is ready to run but is not currently running. Either it has not yet been given processor time by the scheduler, or it had run previously but was blocked in one of the ways described in item 4 below. Tasks that are ready to run are stored in a queue that is often called the task ready queue.
3. Running: A running task is one that is currently executing; that is, it has a processor and its code is being executed.
4. Blocked: A task that is blocked has been running, but that execution was interrupted by one of several different events, the most common of which is an input or output operation. In addition to input and output, some languages provide operations for the user program to specify that a task is to be blocked.
5. Dead: A dead task is no longer active in any sense. A task dies when its execution is completed or it is explicitly killed by the program.
A flow diagram of the states of a task is shown in Figure 13.2.
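The five states and the legal transitions between them can be captured directly in code. The following Python sketch encodes the arrows of a typical task-state flow diagram; the transition table is a plausible reading of such a diagram (New to Ready on admission, Ready and Running alternating as slices are granted and expire, Running to Blocked on I/O, Blocked back to Ready on completion, Running to Dead at termination), not a definitive specification.

```python
from enum import Enum, auto

class TaskState(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    DEAD = auto()

# Each state maps to the set of states a task may move to next.
TRANSITIONS = {
    TaskState.NEW:     {TaskState.READY},
    TaskState.READY:   {TaskState.RUNNING},
    TaskState.RUNNING: {TaskState.READY, TaskState.BLOCKED, TaskState.DEAD},
    TaskState.BLOCKED: {TaskState.READY},
    TaskState.DEAD:    set(),           # a dead task never runs again
}

def move(state, new_state):
    """Return new_state if the transition is legal, else raise."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state.name} -> {new_state.name}")
    return new_state
```

A blocked task, for example, cannot move directly back to Running; it must first re-enter the ready queue and wait to be chosen by the scheduler, which is why `TRANSITIONS[TaskState.BLOCKED]` contains only `READY`.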
One important issue in task execution is the following: How is a ready task chosen to move to the running state when the currently running task becomes blocked or its time slice expires? Several different algorithms have been used for this choice, some based on specifiable priority levels. The algorithm that does the choosing is implemented in the scheduler.
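One common choice algorithm dispatches the highest-priority ready task, breaking ties in first-in, first-out order. A minimal sketch of such a priority-based ready queue in Python, using the standard-library heap (the class and task names are illustrative):

```python
import heapq
import itertools

class ReadyQueue:
    """Priority-based ready queue sketch: dispatch() always returns the
    highest-priority ready task; a running counter breaks ties so that
    equal-priority tasks leave in FIFO order."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def add(self, task, priority):
        # heapq is a min-heap, so negate priority: larger runs first.
        heapq.heappush(self._heap, (-priority, next(self._counter), task))

    def dispatch(self):
        return heapq.heappop(self._heap)[2]

rq = ReadyQueue()
rq.add("logger", priority=1)
rq.add("ui", priority=5)
rq.add("worker", priority=5)
print(rq.dispatch())  # "ui": highest priority, enqueued before "worker"
```

A pure round-robin scheduler is the special case in which every task has the same priority, so the counter alone determines the order.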
Associated with the concurrent execution of tasks and the use of shared resources is the concept of liveness. In the environment of sequential programs, a program has the liveness characteristic if it continues to execute, eventually leading to completion. In more general terms, liveness means that if some event—say, program completion—is supposed to occur, it will occur, eventually. That is, progress is continually made. In a concurrent environment and with shared resources, the liveness of a task can cease to exist, meaning that the program cannot continue and thus will never terminate.
For example, suppose task A and task B both need the shared resources X and Y to complete their work. Furthermore, suppose that task A gains possession
of X and task B gains possession of Y. After some execution, task A needs resource Y to continue, so it requests Y but must wait until B releases it. Likewise,
task B requests X but must wait until A releases it. Neither relinquishes the resource it possesses, and as a result, both lose their liveness, guaranteeing that execution of the program will never complete normally. This particular kind of loss of liveness is called deadlock. Deadlock is a serious threat to the reliability of a program, and therefore its avoidance demands serious consideration in both language and program design.
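The A/B scenario above deadlocks because the two tasks acquire X and Y in opposite orders, creating a circular wait. A standard avoidance rule is a global lock ordering: every task acquires shared resources in the same fixed order, so no cycle can form. A minimal sketch in Python's `threading` module (the resource and task names mirror the example; the ordering rule is one well-known remedy, not the only one):

```python
import threading

X = threading.Lock()
Y = threading.Lock()

def task(name, results):
    # Both tasks obey the same global order: X first, then Y.
    # Neither can ever hold Y while waiting for X, so the circular
    # wait that caused the deadlock cannot arise.
    with X:
        with Y:
            results.append(name)

results = []
threads = [threading.Thread(target=task, args=(n, results)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # both tasks complete: ['A', 'B']
```

Reversing the nesting in just one of the tasks (X-then-Y in A, Y-then-X in B) recreates the possibility of the deadlock described in the text.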