Once we have a specific route for a flow, it becomes possible to reserve resources along that route to make sure the needed capacity is available.
- Three different kinds of resources can potentially be reserved:
- Bandwidth.
- Buffer space.
- CPU cycles.
- The first one, bandwidth, is the most obvious.
- If a flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, trying to direct three such flows through that line is not going to work.
- Thus, reserving bandwidth means not oversubscribing any output line.
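This per-line bookkeeping can be sketched in a few lines of Python; the `Line` class and `admit` method here are hypothetical illustrations, not any real router API:

```python
# Minimal admission-control sketch: a flow is admitted on an output line
# only if its rate fits within the remaining unreserved capacity.
class Line:
    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps   # line capacity
        self.reserved = 0.0             # bandwidth already promised to flows

    def admit(self, flow_mbps: float) -> bool:
        """Reserve bandwidth for a new flow, refusing to oversubscribe."""
        if self.reserved + flow_mbps <= self.capacity:
            self.reserved += flow_mbps
            return True
        return False

line = Line(capacity_mbps=2.0)
print([line.admit(1.0) for _ in range(3)])  # [True, True, False]
```

The third 1-Mbps flow is rejected because admitting it would push the 2-Mbps line past its capacity.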
- A second resource that is often in short supply is buffer space.
- When a packet arrives, it is usually deposited on the network interface card by the hardware itself.
- The router software then has to copy it to a buffer in RAM and queue that buffer for transmission on the chosen outgoing line.
- If no buffer is available, the packet has to be discarded since there is no place to put it.
- For good quality of service, some buffers can be reserved for a specific flow so that the flow does not have to compete for buffers with other flows.
- There will always be a buffer available when the flow needs one, up to some maximum.
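One way to realize such a reservation, sketched here with hypothetical names (`BufferPool`, `get_buffer`), is to hold back a fixed number of buffers per flow and let all other traffic compete for a shared pool:

```python
# Sketch: per-flow reserved buffers plus a shared pool for everyone else.
class BufferPool:
    def __init__(self, total: int, reservations: dict[str, int]):
        self.reserved = dict(reservations)           # buffers held back per flow
        self.shared = total - sum(reservations.values())

    def get_buffer(self, flow: str) -> bool:
        if self.reserved.get(flow, 0) > 0:
            self.reserved[flow] -= 1                 # use the flow's own reserve
            return True
        if self.shared > 0:
            self.shared -= 1                         # fall back to the shared pool
            return True
        return False                                 # no buffer: packet is dropped

pool = BufferPool(total=4, reservations={"voice": 2})
```

With this arrangement the "voice" flow is guaranteed two buffers (its maximum) no matter how many packets other flows have queued, while unreserved flows can still starve each other out of the shared pool.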
- Finally, CPU cycles are also a scarce resource.
- It takes router CPU time to process a packet, so a router can process only a certain number of packets per second.
- Making sure that the CPU is not overloaded is needed to ensure timely processing of each packet.
- At first glance, it might appear that if it takes, say, 1 µsec to process a packet, a router can process 1 million packets/sec.
- This observation is not true because there will always be idle periods due to statistical fluctuations in the load.
- If the CPU needs every single cycle to get its work done, losing even a few cycles due to occasional idleness creates a backlog it can never get rid of.
- However, even with a load slightly below the theoretical capacity, queues can build up and delays can occur.
- Consider a situation in which packets arrive at random with a mean arrival rate of λ packets/sec. The CPU time required by each one is also random, with a mean processing capacity of µ packets/sec.
- Under the assumption that both the arrival and service distributions are Poisson distributions, it can be proven using queuing theory that the mean delay experienced by a packet, T, is

  T = (1/µ) × 1/(1 − ρ)

- where ρ = λ/µ is the CPU utilization. The first factor, 1/µ, is what the service time would be in the absence of competition. The second factor is the slowdown due to competition with other flows.
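The delay formula T = (1/µ) × 1/(1 − ρ) for Poisson arrivals and service can be evaluated directly; this sketch (the function name `mean_delay` is illustrative) shows how sharply delay grows as a router that processes 1 million packets/sec approaches full utilization:

```python
def mean_delay(lam: float, mu: float) -> float:
    """Mean packet delay T = (1/mu) * 1/(1 - rho), where rho = lam/mu."""
    rho = lam / mu
    if rho >= 1:
        raise ValueError("utilization must be below 1 for a finite delay")
    return (1 / mu) * (1 / (1 - rho))

# At 95% utilization the delay is about 20x the bare 1-microsecond service time:
print(mean_delay(lam=950_000, mu=1_000_000))  # roughly 2e-05 seconds
```

At 50% utilization the same router delivers a delay of only 2 µsec, which is why keeping the CPU load well below its theoretical capacity matters.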