Moment Convergence Rate in Stochastic Optimization
Author
Abstract
where X is the decision set and Q0, the distribution of ξ, is a probability distribution supported on [0, 1]. This kind of problem can be solved numerically, or in special cases in closed form, if the distribution Q0 is known exactly. Unfortunately, in practice one rarely knows Q0 exactly. Instead, one often has only partial distributional information, such as a few moments or quantiles. In a seminal paper, Scarf proposed and solved a “robust version” of Problem (1.1). More precisely, suppose that instead of knowing Q0 exactly, one only knew its first two moments, and wanted to optimize against an adversary who could pick a worst-case distribution subject to those moment constraints. Then one is faced with the min-max optimization problem
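A hedged sketch of the min-max formulation the abstract alludes to (the objective f and the moment values μ, σ² are assumptions here, since Problem (1.1) itself is not reproduced in this excerpt):

```latex
\min_{x \in X} \;\; \max_{Q \in \mathcal{Q}(\mu,\sigma^2)} \;
\mathbb{E}_{\xi \sim Q}\bigl[\, f(x,\xi) \,\bigr],
\qquad
\mathcal{Q}(\mu,\sigma^2) = \Bigl\{ Q \text{ supported on } [0,1] :\;
\mathbb{E}_Q[\xi] = \mu,\;\; \mathbb{E}_Q[\xi^2] = \mu^2 + \sigma^2 \Bigr\}.
```

The inner maximization plays the adversary, which picks the worst-case distribution consistent with the first two moments of Q0.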
Similar Resources
Effects of Probability Function on the Performance of Stochastic Programming
Stochastic programming is a valuable optimization tool used when some or all of the design parameters of an optimization problem are defined by stochastic variables rather than by deterministic quantities. Depending on the nature of the equations involved, a stochastic optimization problem is called a stochastic linear or nonlinear programming problem. In this paper, a stochasti...
Generating Moment Matching Scenarios Using Optimization Techniques
An optimization-based method is proposed to generate moment-matching scenarios for numerical integration and its use in stochastic programming. The main advantage of the method is its flexibility: it can generate scenarios matching any prescribed set of moments of the underlying distribution, rather than matching all moments up to a certain order, and the distribution can be defined over an arbi...
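The idea of generating scenarios by solving an optimization problem can be sketched as follows. This is a minimal illustration under assumed choices (least-squares moment mismatch, three scenario points on [0, 1], SciPy's SLSQP solver), not the method of the paper above:

```python
# Minimal sketch: find scenario points x_i in [0, 1] and weights w_i >= 0
# with sum(w_i) = 1 whose first moments match prescribed targets.
import numpy as np
from scipy.optimize import minimize

def moment_matching_scenarios(targets, n=3, seed=0):
    """targets[k-1] is the prescribed k-th moment E[xi**k]."""
    rng = np.random.default_rng(seed)
    # Decision vector z = (points, weights), initialized at random points
    # and uniform weights.
    z0 = np.concatenate([rng.uniform(0.0, 1.0, n), np.full(n, 1.0 / n)])

    def objective(z):
        x, w = z[:n], z[n:]
        # Squared mismatch of the discrete moments against the targets.
        return sum((np.dot(w, x ** (k + 1)) - t) ** 2
                   for k, t in enumerate(targets))

    cons = [{"type": "eq", "fun": lambda z: z[n:].sum() - 1.0}]
    bounds = [(0.0, 1.0)] * (2 * n)   # points and weights both live in [0, 1]
    res = minimize(objective, z0, bounds=bounds, constraints=cons)
    return res.x[:n], res.x[n:]

# Match mean 0.5 and second moment 0.3 (so variance 0.05) on [0, 1].
points, weights = moment_matching_scenarios([0.5, 0.3])
```

With three points and two target moments the problem is under-determined, which is exactly the flexibility the abstract refers to: any prescribed subset of moments can be imposed.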
Sample Average Approximation Method for Compound Stochastic Optimization Problems
The paper studies stochastic optimization (programming) problems with compound functions containing expectations and extreme values of other random functions as arguments. Compound functions arise in various applications. A typical example is a variance function of nonlinear outcomes. Other examples include stochastic minimax problems, econometric models with latent variables, multi-level and m...
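Since the abstract names the variance of a nonlinear outcome as the typical compound function, a tiny sample average approximation (SAA) sketch may help. The choice of outcome h(x, ξ) = (x − ξ)² and ξ ~ Uniform[0, 1] is an assumption for illustration, not taken from the paper:

```python
# Minimal SAA sketch for a compound objective: the variance Var[h(x, xi)]
# nests one expectation (the mean of h) inside another.
import numpy as np

def saa_variance(x, xi_samples):
    """SAA estimate of Var[h(x, xi)] with h(x, xi) = (x - xi)**2.
    Both the inner and outer expectations are replaced by sample averages."""
    h = (x - xi_samples) ** 2
    return np.mean(h ** 2) - np.mean(h) ** 2

rng = np.random.default_rng(0)
xi = rng.uniform(0.0, 1.0, 10_000)          # xi ~ Uniform[0, 1]
grid = np.linspace(0.0, 1.0, 101)
x_best = grid[int(np.argmin([saa_variance(x, xi) for x in grid]))]
```

For this toy h the population objective is symmetric about x = 1/2, and the SAA minimizer over the grid lands near it as the sample grows.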
MixedGrad: An O(1/T) Convergence Rate Algorithm for Stochastic Smooth Optimization
It is well known that the optimal convergence rate for stochastic optimization of smooth functions is O(1/√T), which is the same as for stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of O(1/T). In this work, we consider a new setup for optimizing smooth functions, termed as Mi...
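The baseline O(1/√T) regime the abstract contrasts against can be illustrated with plain stochastic gradient descent using step sizes proportional to 1/√t and iterate averaging. This is a generic textbook sketch on an assumed toy objective, not the MixedGrad algorithm:

```python
# SGD with eta_t ~ 1/sqrt(t) and iterate averaging on the smooth stochastic
# quadratic f(x) = E[(x - xi)^2], xi ~ N(0, 1); the minimizer is x = 0.
import numpy as np

def sgd_average(T=20_000, seed=0):
    rng = np.random.default_rng(seed)
    x, x_sum = 5.0, 0.0                    # deliberately bad starting point
    for t in range(1, T + 1):
        xi = rng.standard_normal()
        grad = 2.0 * (x - xi)              # unbiased stochastic gradient
        x -= grad / (2.0 * np.sqrt(t))     # eta_t = 1 / (2 sqrt(t))
        x_sum += x
    return x_sum / T                       # averaged iterate

x_bar = sgd_average()
```

The averaged iterate approaches the minimizer at the slow O(1/√T) rate; the point of the paper is a setup in which this can be improved to O(1/T).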
Accelerated gradient methods for nonconvex nonlinear and stochastic programming
In this paper, we generalize the well-known Nesterov’s accelerated gradient (AG) method, originally designed for convex smooth optimization, to solve nonconvex and possibly stochastic optimization problems. We demonstrate that by properly specifying the stepsize policy, the AG method exhibits the best known rate of convergence for solving general nonconvex smooth optimization problems by using ...
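For reference, the classical convex-smooth AG method that the paper generalizes can be sketched as follows. This is the textbook Nesterov scheme on an assumed quadratic test problem, not the paper's extended method for nonconvex or stochastic problems:

```python
# Textbook Nesterov accelerated gradient on the smooth convex quadratic
# f(x) = 0.5 * x'Ax - b'x, whose minimizer solves Ax = b.
import numpy as np

def nesterov_ag(A, b, T=500):
    L = np.linalg.eigvalsh(A).max()        # Lipschitz constant of grad f
    x = y = np.zeros(len(b))
    t = 1.0
    for _ in range(T):
        grad = A @ y - b                   # gradient at the extrapolated point
        x_new = y - grad / L               # gradient step with stepsize 1/L
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)
x_hat = nesterov_ag(A, b)
```

The stepsize policy (constant 1/L plus the t-based momentum) is what the paper revisits: properly chosen stepsizes let the same scheme handle nonconvex and stochastic objectives.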
Online Stochastic Optimization with Multiple Objectives
In this paper we propose a general framework to characterize and solve the stochastic optimization problems with multiple objectives underlying many real-world learning applications. We first propose a projection-based algorithm which attains an O(T^{-1/3}) convergence rate. Then, by leveraging the theory of Lagrangian duality in constrained optimization, we devise a novel primal-dual stochastic approx...