Resource title

Optimal project sequencing with recourse at a scarce resource

Resource image


Resource description

The authors develop a dynamic prioritization policy for optimally allocating a scarce resource among K projects, only one of which can be worked on at a time. Each project is represented as a Markov decision process whose states correspond to the performance of the project output. With equal and uniform delay penalties (discounting), the problem is a "multi-armed bandit" (MAB), which is well studied in the literature. When projects have differing delay costs, the problem (a.k.a. "restless bandit") has not been solved in general. The authors show that an intuitive and easily implementable policy is optimal under three assumptions that are special but realistic in the context of new product development (NPD) projects: first, the delay cost is an increasing fraction of the payoff, independent of the performance state; second, costs are not discounted (or discounting is dominated by delay costs); third, projects are not abandoned based on their performance state while being processed at the scarce resource. The optimal policy is to work first on the project with the highest expected delay loss, evaluated as if the other project were completely finished first. These results extend the cµ rule of dynamic scheduling to non-linear delay costs and to recourse during processing. If projects are subject to stochastic schedule delays, the policy is not optimal in general.
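The priority rule described above can be illustrated with a minimal sketch. All names here (`Project`, `priority_order`, `loss_fraction`) are hypothetical, not from the paper: each project is reduced to an expected payoff, an expected processing duration, and an increasing loss-fraction function of delay (per the first assumption). The pairwise rule schedules first the project that would suffer the larger expected delay loss if it had to wait for the other to finish; extending it to K projects via pairwise sorting is a further simplification for illustration.

```python
import functools
from dataclasses import dataclass
from typing import Callable

@dataclass
class Project:
    # Hypothetical summary of a project, not the paper's MDP representation.
    name: str
    expected_payoff: float                    # E[payoff] over performance states
    expected_duration: float                  # expected processing time at the resource
    loss_fraction: Callable[[float], float]   # increasing fraction of payoff lost after delay d

def delay_loss_if_second(a: Project, b: Project) -> float:
    # Expected loss project `a` suffers if it waits for `b` to finish first.
    return a.expected_payoff * a.loss_fraction(b.expected_duration)

def priority_order(projects: list[Project]) -> list[Project]:
    """Sketch of the index policy: between any pair, schedule first the
    project with the higher expected delay loss from going second."""
    def cmp(a: Project, b: Project) -> int:
        la = delay_loss_if_second(a, b)
        lb = delay_loss_if_second(b, a)
        return -1 if la > lb else (1 if la < lb else 0)
    return sorted(projects, key=functools.cmp_to_key(cmp))
```

With linear loss fractions (delay cost c per unit time), the pairwise comparison c_a·T_b > c_b·T_a is equivalent to ranking by c/T, i.e., the classical cµ rule with µ = 1/T; the non-linear case is where the paper's extension applies.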

Resource author

Resource publisher

Resource publish date

Resource language


Resource content type


Resource URL

Resource license

Copyright INSEAD. All rights reserved