Lecture for Chapter 5.1 (Fall 13)

Chapter 5 Distributed Process Scheduling
5.1 A System Performance Model
--Niharika Muriki
• Need for Scheduling
• Process Interaction Models
• System Performance Model
• Efficiency Loss
• Distribution of Workload
• Comparison of Performance for Workload Sharing
• Latest Relevant Applications
• Future Work
• References
Need for Scheduling
• Since numerous processes run in parallel, scheduling these
processes plays a major role.
• Before execution, processes need to be scheduled and allocated
the required resources.
• Results of scheduling:
Overall system performance is enhanced
Process completion time is minimized
Processor utilization is enhanced
Location and performance transparency in distributed systems
is easier to achieve.
Issues of Process Scheduling
Process scheduling in distributed systems touches on several
practical considerations that are often omitted in traditional
multiprocessor scheduling.
In distributed systems,
• Communication overhead is non-negligible.
• Effect of the underlying architecture cannot be ignored.
• Dynamic behavior of the system must be addressed.
Process Interaction Models
Based on the differences in interactions between processes, we
have 3 types of process interaction models namely,
• Precedence process model
• Communication process model
• Disjoint process model
Process Interaction Models
We have depicted the differences in interactions between processes using a simple
example of a program computation consisting of four processes mapped to a
two-processor multiple-computer system.
Precedence Process Model
• Processes are represented by a
Directed Acyclic Graph (DAG).
• May incur communication overhead when
dependent processes are mapped to
different processors.
• This model is best applied to
concurrent processes.
• Use: minimize the total completion
time of the task.
Total Completion Time = Computation Time + Communication Time
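The completion-time computation above can be sketched for the four-process example. The task times, message times, and mapping below are hypothetical, not taken from the lecture; communication time is charged only when a dependent process sits on a different processor.

```python
from collections import defaultdict

# Hypothetical four-process DAG: comp[p] = computation time,
# comm[(u, v)] = message time, charged only when u and v are
# mapped to different processors.
comp = {"P1": 2, "P2": 3, "P3": 3, "P4": 2}
comm = {("P1", "P2"): 1, ("P1", "P3"): 1, ("P2", "P4"): 2, ("P3", "P4"): 2}
preds = {"P1": [], "P2": ["P1"], "P3": ["P1"], "P4": ["P2", "P3"]}
mapping = {"P1": 0, "P2": 0, "P3": 1, "P4": 0}   # process -> processor

def completion_time(order):
    """Total completion time for a fixed mapping and topological order."""
    free = defaultdict(float)   # earliest free time of each processor
    finish = {}
    for v in order:
        cpu = mapping[v]
        # A process may start once its processor is free and all
        # predecessor results (plus any message delay) have arrived.
        ready = max((finish[u] + (comm[(u, v)] if mapping[u] != cpu else 0)
                     for u in preds[v]), default=0.0)
        start = max(free[cpu], ready)
        finish[v] = start + comp[v]
        free[cpu] = finish[v]
    return max(finish.values())

print(completion_time(["P1", "P2", "P3", "P4"]))
```

Changing the mapping changes where the communication terms appear, which is exactly the trade-off a precedence-model scheduler must optimize.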
Communication Process Model
• Processes communicate asynchronously.
• Goal: optimize the total cost of communication and computation.
• The task is partitioned in a way that minimizes the
interprocessor communication and computation costs of
processes on processors.
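The partitioning objective can be made concrete with a small sketch. The processes, per-processor execution costs, and communication costs below are invented for illustration; exhaustive search stands in for the min-cut or heuristic partitioning a real scheduler would use.

```python
import itertools

# Hypothetical cost model for the communication process model:
# exec_cost[p][k] = cost of running process p on processor k;
# comm_cost[(p, q)] = cost charged only if p and q are assigned
# to different processors.
procs = ["A", "B", "C", "D"]
exec_cost = {"A": [5, 10], "B": [2, 4], "C": [4, 3], "D": [9, 3]}
comm_cost = {("A", "B"): 3, ("A", "C"): 2, ("B", "D"): 4, ("C", "D"): 1}

def total_cost(assign):
    """Total cost = computation cost + interprocessor communication cost."""
    comp = sum(exec_cost[p][assign[p]] for p in procs)
    comm = sum(c for (p, q), c in comm_cost.items() if assign[p] != assign[q])
    return comp + comm

# Exhaustive search over all 2^4 two-processor assignments
# (fine at this scale; real systems use min-cut or heuristics).
best = min((dict(zip(procs, bits))
            for bits in itertools.product([0, 1], repeat=4)),
           key=total_cost)
print(best, total_cost(best))
```

Note how the optimizer weighs a cheaper processor against the communication cost of splitting tightly coupled processes.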
Disjoint Process Model
• Process interaction is implicit.
• Processor utilization is maximized and the turnaround time of
the processes is minimized.
• Partitioning a task into multiple processes for execution can
result in a speedup of the total task completion time.
System Performance Model
Speedup is a function of
• Algorithm design
• Underlying system architecture
• Efficiency of the scheduling algorithm
System Performance Model
• S can also be written as:

S = OSPT / CPT = (OSPT / OCPT_ideal) × (OCPT_ideal / CPT) = Si × Sd

• OSPT (optimal sequential processing time): the best time that can be achieved on a single
processor using the best sequential algorithm.
• CPT (concurrent processing time): the actual time achieved on an n-processor system with
the concurrent algorithm and a specific scheduling method being considered.
• OCPT_ideal (optimal concurrent processing time on an ideal system): the best time that can
be achieved with the concurrent algorithm being considered on an ideal n-processor
system (no interprocessor communication overhead), scheduled by an optimal scheduling policy.
• Si = OSPT / OCPT_ideal: the ideal speedup of a multiple-processor system over the best
sequential time.
• Sd = OCPT_ideal / CPT: the degradation of the system due to the actual implementation,
compared to an ideal system.
System Performance Model
Si can be further derived as

Si = OSPT / OCPT_ideal = (RC / RP) × n

where
n = number of processors
m = number of tasks in the algorithm
RP = relative processing requirement = (sum of the m task processing times) / OSPT
(RP ≥ 1)
RC = relative concurrency = (sum of the m task processing times) / (n × OCPT_ideal)
(RC ≤ 1; RC = 1 means the best use of the processors)
System Performance Model
Sd can be rewritten as

Sd = 1 / (1 + ρ)

ρ --- the efficiency loss:
the ratio of the real system overhead due to
all causes to the ideal optimal processing time.
Two parts: ρ = ρ_sched + ρ_syst.
Finally we get

S = Si × Sd = (RC / RP) × n × 1 / (1 + ρ)
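A worked numeric check of the speedup decomposition may help; the task times, OSPT, OCPT_ideal, and CPT below are made-up numbers, not from the lecture.

```python
# Worked example of S = (RC / RP) * n * 1 / (1 + rho), with
# hypothetical numbers: m = 4 tasks, n = 2 processors.
P = [2, 3, 3, 2]          # task processing times (sum = 10)
n = 2
OSPT = 9                  # best sequential time
OCPT_ideal = 6            # best concurrent time on an ideal system
CPT = 8                   # actual concurrent time achieved

RP = sum(P) / OSPT                      # relative processing requirement (>= 1)
RC = sum(P) / (n * OCPT_ideal)          # relative concurrency (<= 1)
rho = (CPT - OCPT_ideal) / OCPT_ideal   # efficiency loss

Si = OSPT / OCPT_ideal                  # ideal speedup
Sd = OCPT_ideal / CPT                   # degradation
S = OSPT / CPT                          # overall speedup

# The three forms of the decomposition agree term by term.
assert abs(Si - (RC / RP) * n) < 1e-9
assert abs(Sd - 1 / (1 + rho)) < 1e-9
assert abs(S - Si * Sd) < 1e-9
print(f"Si={Si:.3f}, Sd={Sd:.3f}, S={S:.3f}")
```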
Efficiency loss
X: multiple computer system; Y, Z: scheduling policies.
• Efficiency loss can be expressed as:

ρ = (CPT(X, Y') − OCPT_ideal) / OCPT_ideal

Decomposed through the ideal system (CPT_ideal(Y') is the time of schedule Y' on the ideal system):

ρ = (CPT(X, Y') − CPT_ideal(Y')) / OCPT_ideal + (CPT_ideal(Y') − OCPT_ideal) / OCPT_ideal
  = ρ_syst + ρ_sched

ρ = (CPT(X, Z) − OCPT_ideal) / OCPT_ideal

Decomposed through the non-ideal system (OCPT(X) is the optimal time on the real system X):

ρ = (CPT(X, Z) − OCPT(X)) / OCPT_ideal + (OCPT(X) − OCPT_ideal) / OCPT_ideal
  = ρ_sched + ρ_syst
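Both decompositions split the same total loss, just attributing it in a different order. A quick numeric check, with made-up times:

```python
# Hypothetical times for one program run.
OCPT_ideal = 6.0    # optimal time on an ideal system
CPT_ideal_Y = 7.0   # schedule Y' evaluated on the ideal system
OCPT_X = 7.5        # optimal schedule on the real system X
CPT = 9.0           # actual time observed (CPT(X, Y') = CPT(X, Z))

# Ideal-system decomposition: rho = rho_syst + rho_sched
rho_syst_1 = (CPT - CPT_ideal_Y) / OCPT_ideal
rho_sched_1 = (CPT_ideal_Y - OCPT_ideal) / OCPT_ideal

# Non-ideal-system decomposition: rho = rho_sched + rho_syst
rho_sched_2 = (CPT - OCPT_X) / OCPT_ideal
rho_syst_2 = (OCPT_X - OCPT_ideal) / OCPT_ideal

rho = (CPT - OCPT_ideal) / OCPT_ideal
assert abs((rho_syst_1 + rho_sched_1) - rho) < 1e-9
assert abs((rho_sched_2 + rho_syst_2) - rho) < 1e-9
print(rho)
```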
Efficiency loss
The following figure demonstrates the decomposition of efficiency loss due to
scheduling and system communication.
The significance of the impact of communication on system performance
must be carefully addressed in the design of distributed scheduling algorithms.
Workload Distribution
• Load sharing: static workload distribution
• Dispatch processes to idle processors upon arrival
• Corresponds to the processor-pool model
• Load balancing: dynamic workload distribution
• Migrate processes dynamically from heavily loaded
processors to lightly loaded processors
• Corresponds to the migration workstation model
Workload Distribution
• Modeled by queuing theory: X/Y/c
• An arrival process X, a service time distribution
Y, and c servers.
• λ: arrival rate; μ: service rate; plus a migration
rate for processes moved between processors.
• The migration rate depends on channel bandwidth, the
migration protocol, and the context and state information
of the process being transferred.
Processor-Pool and Workstation Queuing Models
Static Load Sharing
• The processor pool is modeled as an M/M/c queue
(M for Markovian distribution).
Dynamic Load Balancing
• Each workstation is modeled as an M/M/1 queue; with
migration rate = 0, the workstations behave as c
independent M/M/1 queues.
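The two queuing models can be compared numerically. The sketch below assumes Poisson arrivals and exponential service (the Markovian "M") and uses the standard Erlang C formula for the M/M/c mean response time; the rates are illustrative, not from the lecture.

```python
import math

def mm1_response(lam, mu):
    """Mean response time of an M/M/1 queue (requires lam < mu)."""
    return 1.0 / (mu - lam)

def mmc_response(lam, mu, c):
    """Mean response time of an M/M/c queue via the Erlang C formula."""
    a = lam / mu                      # offered load
    rho = a / c                       # server utilization (must be < 1)
    erlang_c = (a**c / (math.factorial(c) * (1 - rho))) / (
        sum(a**k / math.factorial(k) for k in range(c))
        + a**c / (math.factorial(c) * (1 - rho)))
    return 1.0 / mu + erlang_c / (c * mu - lam)

lam, mu, c = 8.0, 1.0, 10
# Workstation model: arrivals split evenly over c separate M/M/1 queues.
w_workstations = mm1_response(lam / c, mu)
# Processor-pool model: one shared M/M/c queue over the same c servers.
w_pool = mmc_response(lam, mu, c)
print(w_workstations, w_pool)
```

Pooling the servers yields the lower mean response time, which is why the processor-pool (load-sharing) model outperforms isolated workstations without migration.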
Latest Relevant Application
• When multiple users share a networked computer system, you
probably share a printer with other users. When you request to
print a file, your request is added to the print queue. When your
request reaches the front of the print queue, your file is printed.
This ensures that only one person at a time has access to
the printer and that this access is given on a first-come,
first-served basis.
Latest Relevant Examples
• When you phone the toll-free number for your bank or any
other customer service, you may get a recording that says,
"Thank you for calling XYZ Bank. Your call will be answered by
the next available operator. Please wait." This is a queuing
system in action.
• Vehicles on a toll bridge: the vehicle that arrives first at the
toll booth leaves the booth first, and the vehicle that arrives
last leaves last. Toll collection therefore follows the
first-in, first-out (FIFO) strategy of a queue.
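The FIFO behavior in both examples can be sketched with a queue data structure; the job names below are invented placeholders.

```python
from collections import deque

# A minimal FIFO sketch of the shared-printer example: requests are
# served strictly in arrival (first-come, first-served) order.
print_queue = deque()
for job in ["alice.pdf", "bob.txt", "carol.doc"]:
    print_queue.append(job)               # enqueue at the tail

served = []
while print_queue:
    served.append(print_queue.popleft())  # dequeue from the head

print(served)   # jobs come out in the same order they arrived
```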
Future Work
• Distributed flow scheduling in an unknown environment [3]
Flow scheduling is crucial in next-generation networks but
hard to address due to fast-changing link states and the
tremendous cost of exploring the global structure.
• Pareto-Optimal Cloud Bursting[4]
Large-scale Bag-of-Tasks (BoT) applications are characterized by
their massively parallel, yet independent operations. The use of
resources in public clouds to dynamically expand the capacity of
a private computer system might be an appealing alternative to
cope with such massive parallelism. To fully realize the benefit of
this 'cloud bursting', the performance to cost ratio (or cost
efficiency) must be thoroughly studied and incorporated
into scheduling and resource allocation strategies.
[1] Randy Chow and Theodore Johnson, Distributed Operating Systems &
Algorithms, 1997.
[2] 5 real life instances where queue operations are being used.
[3] Yaoqing Yang, Keqin Liu, and Pingyi Fan, "Distributed Flow Scheduling
in an Unknown Environment."
[4] M. Reza HoseinyFarahabady, Young Choon Lee, and Albert Y. Zomaya,
"Pareto-Optimal Cloud Bursting," IEEE Transactions on Parallel and Distributed
Systems, 27 Aug. 2013, IEEE Computer Society Digital Library,
http://doi.ieeecomputersociety.org/10.1109/TPDS.2013.218
Thank You