Optimal inequalities in probability theory: a convex optimization approach

The authors address the problem of deriving optimal inequalities for P(X ∈ S), where X is a multivariate random variable with a given collection of moments and S is an arbitrary set. Their goal in this paper is twofold: first, to present the beautiful interplay of probability and optimization related to moment inequalities from a modern, optimization-based perspective; second, to understand the complexity of deriving tight moment inequalities, to search for efficient algorithms in a general framework, and, where possible, to derive simple closed-form bounds. For the univariate case, the authors provide an optimal inequality for P(X ∈ S) for a single random variable X whose first k moments are known, obtained as the solution of a semidefinite optimization problem in k + 1 dimensions. They generalize the classical Markov and Chebyshev inequalities to multivariate settings, when moments up to second order are known and the set S is convex. Finally, they provide a sharp characterization of the complexity of finding optimal bounds: a polynomial-time algorithm when moments up to second order are known and the domain of X is R^n, and an NP-hardness proof when moments of third or higher order are given, or when moments up to second order are given and the domain of X is the nonnegative orthant R_+^n.
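As a concrete illustration of the kind of tight two-moment bound the paper generalizes (this example is the classical one-sided Chebyshev, or Cantelli, inequality, not taken from the paper's text; the function name is illustrative): given only the mean μ and variance σ² of a random variable X, the optimal bound on the tail probability is P(X ≥ t) ≤ σ² / (σ² + (t − μ)²) for t > μ.

```python
def cantelli_bound(mu: float, sigma2: float, t: float) -> float:
    """Tight upper bound on P(X >= t) given only the mean `mu` and
    variance `sigma2` of X (one-sided Chebyshev / Cantelli inequality).
    Valid and tight for t > mu; trivial (equal to 1) otherwise."""
    if t <= mu:
        return 1.0  # no information beyond P <= 1 when t <= mu
    return sigma2 / (sigma2 + (t - mu) ** 2)

# Example: mean 0, variance 1, threshold 2 -> 1 / (1 + 4) = 0.2
print(cantelli_bound(0.0, 1.0, 2.0))
```

Note that this bound is strictly stronger than the two-sided Chebyshev bound σ²/(t − μ)² for the same tail, and it is achieved by a two-point distribution, which is the worst-case structure that the paper's semidefinite-programming framework recovers in general.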

http://flora.insead.edu/fichiersti_wp/inseadwp2000/2000-28.pdf

Copyright INSEAD. All rights reserved