We consider the problem of minimizing the sum of weighted completion times of jobs scheduled on unrelated parallel machines. That is, there are n jobs and m machines; job j takes p_ij units of time if processed on machine i and has a weight w_j. If C_j is the completion time of job j, the objective …
Approximation schemes for scheduling on parallel machines
By Noga Alon; Yossi Azar; Gerhard J. Woeginger; Tal Yadid
- Publisher
- Springer US
- Year
- 1998
- Language
- English
- File size
- 124 KB
- Volume
- 1
- Category
- Article
- ISSN
- 1094-6136
Synopsis
We discuss scheduling problems with m identical machines and n jobs where each job has to be assigned to some machine. The goal is to optimize objective functions that solely depend on the machine completion times.
As a main result, we identify some conditions on the objective function, under which the resulting scheduling problems possess a polynomial-time approximation scheme. Our result contains, generalizes, improves, simplifies, and unifies many other results in this area in a natural way.
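The setting in the synopsis can be made concrete with a small sketch. This is not the paper's approximation scheme (the PTAS itself is more involved); it only illustrates how an assignment of jobs to identical machines induces machine completion times, on which objectives such as the makespan depend. All names and data here are our own illustrative choices, using the classic LPT greedy heuristic:

```python
def completion_times(assignment, p, m):
    """Machine completion times C_1..C_m induced by a job->machine assignment."""
    C = [0.0] * m
    for j, i in enumerate(assignment):
        C[i] += p[j]
    return C

def lpt(p, m):
    """Longest Processing Time first: assign each job, longest first,
    to the currently least-loaded machine (a standard greedy heuristic)."""
    order = sorted(range(len(p)), key=lambda j: -p[j])
    assignment = [0] * len(p)
    C = [0.0] * m
    for j in order:
        i = min(range(m), key=lambda k: C[k])  # least-loaded machine
        assignment[j] = i
        C[i] += p[j]
    return assignment

p = [7, 5, 4, 3, 2]   # processing times of n = 5 jobs (illustrative)
m = 2                 # number of identical machines
a = lpt(p, m)
C = completion_times(a, p, m)
print(max(C))         # makespan: one objective depending only on C_1..C_m
```

Any objective that is a function of the vector (C_1, ..., C_m) alone, e.g. max(C) or sum of squares of C, fits the framework the synopsis describes.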
Similar articles
Most machine scheduling models assume that the machines are available all of the time. However, in most realistic situations, machines need to be maintained and hence may become unavailable during certain periods. In this paper, we study the problem of processing a set of n jobs on m parallel machines …
This paper examines scheduling problems in which the setup phase of each operation needs to be attended by a single server, common for all jobs and different from the processing machines. The objective in each situation is to minimize the makespan. For the processing system consisting of two parallel …
We discuss what we consider to be the 10 most vexing open questions in the area of polynomial time approximation algorithms for NP-hard deterministic machine scheduling problems. We summarize what is known on these problems, we discuss related results, and we provide pointers to the literature.
A coarse-grain parallel program typically has one thread (task) per processor, whereas a fine-grain program has one thread for each independent unit of work. Although there are several advantages to fine-grain parallelism, conventional wisdom is that coarse-grain parallelism is more efficient. …