Resource title

A Dynamic resource allocation policy in multi-project environments (RV of 2000/10/TM)

Resource description

The authors develop a dynamic prioritization policy to optimally allocate a scarce resource among K projects, only one of which can be worked on at a time. Each project is represented by a Markov decision process whose states correspond to the performance of the project output, and each project must pass through a fixed number of stages. Payoffs accrue at the end of each project, depending on the quality of its output. In the absence of delay penalties, the problem is a "multi-armed bandit": when maximizing the expected payoff of the entire project portfolio, it is optimal to work on the project with the highest expected value. The presence of switching costs leaves the structure of this policy intact but discourages the scarce resource from changing projects mid-course unless the gain from switching exceeds the switching cost. When delays cause payoff losses, the decomposition property of the multi-armed bandit problem is lost. In the resulting "restless bandit" problem, the authors find the optimal policy when the lost payoff is a fraction of the potential payoff that increases with the delay: it is optimal to work on the project with the highest expected delay loss, computed as if the other projects were completely finished first. The structure of this policy remains valid when projects are subject to stochastic schedule delays: work on the project with the highest expected delay loss, including its expected schedule delay.
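The switching-cost rule described above (pick the highest-value project, but switch only if the gain outweighs the cost) can be sketched as a simple greedy index policy. This is an illustrative sketch, not the paper's formal policy: the `Project` class, its `expected_value` field, and the numbers in the usage example are hypothetical stand-ins for the index values the paper derives.

```python
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class Project:
    # Hypothetical container: `expected_value` stands in for the
    # project's index (expected payoff of working it next).
    name: str
    expected_value: float


def choose_project(projects: Sequence[Project],
                   current: Optional[Project],
                   switching_cost: float) -> Project:
    """Greedy rule sketched in the abstract: work on the project with the
    highest expected value, but abandon the current project mid-course only
    if the value gained by switching exceeds the switching cost."""
    best = max(projects, key=lambda p: p.expected_value)
    if current is not None and best is not current:
        if best.expected_value - current.expected_value <= switching_cost:
            return current  # gain too small to justify paying the switching cost
    return best


# Usage: with a high switching cost the resource stays on project A,
# even though B has a slightly higher expected value.
a = Project("A", 5.0)
b = Project("B", 5.5)
print(choose_project([a, b], current=a, switching_cost=1.0).name)  # → A
print(choose_project([a, b], current=a, switching_cost=0.2).name)  # → B
```

With no switching cost the rule reduces to the plain multi-armed-bandit prescription of always working on the highest-value project.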

Resource author

Resource publisher

Resource publish date

Resource language

en

Resource content type

application/pdf

Resource URL

http://flora.insead.edu/fichiersti_wp/inseadwp2000/2000-64.pdf

Resource license

Copyright INSEAD. All rights reserved