Sample Complexity of Smooth Stochastic Optimization

Authors

  • Benjamin Armbruster
  • Hernan P. Awad
  • Shipra Agrawal
  • Yinyu Ye
  • Margaret L. Brandeau
  • Gerd Infanger
  • Jiawei Zhang
Abstract

Let N(ε, δ) be the number of samples needed when solving a stochastic program such that the objective function evaluated at the sample optimizer is within ε of the true optimum with probability 1 − δ. Previous results are of the form N(ε, δ) = O(ε⁻² log δ⁻¹). However, a smooth objective function is often locally quadratic at an interior optimum. For that case we use results on the convergence of the sample optimizers to show that N(ε, δ) = O(ε⁻¹ log δ⁻¹). These results are both bounds and asymptotics. Hence we show, for a common case (smooth objective functions with an interior optimum), that the number of samples needed is O(ε⁻¹). This suggests that stochastic optimization is a practical approach for such problems.
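
The improved rate is easy to check numerically. The following sketch (an illustration under assumptions of our own, not code from the paper) applies sample average approximation (SAA) to the toy problem min_x E[(x − Z)²] with Z ~ N(0, 1): the SAA optimizer is the sample mean, whose optimality gap has expectation exactly 1/N, so driving the gap below ε takes on the order of ε⁻¹ samples, matching the abstract's rate for a smooth objective with an interior optimum.

    import numpy as np

    # Toy SAA experiment (assumed setup for illustration, not from the paper):
    # minimize f(x) = E[(x - Z)^2] with Z ~ N(0, 1).  The true optimizer is
    # x* = 0, the SAA optimizer is the sample mean Z_bar, and the gap
    # f(Z_bar) - f(x*) = Z_bar^2 has expectation 1/N.
    rng = np.random.default_rng(0)
    reps = 2000  # independent replications per sample size

    for n in (10, 100, 1000, 10000):
        z = rng.standard_normal((reps, n))
        x_hat = z.mean(axis=1)    # SAA optimizer for each replication
        gap = x_hat ** 2          # optimality gap at the sample optimizer
        print(f"N={n:6d}  mean gap={gap.mean():.5f}  vs 1/N={1.0 / n:.5f}")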

Similar articles

An optimal method for stochastic composite optimization

This paper considers an important class of convex programming (CP) problems, namely, the stochastic composite optimization (SCO), whose objective function is given by the summation of general nonsmooth and smooth stochastic components. Since SCO covers non-smooth, smooth and stochastic CP as certain special cases, a valid lower bound on the rate of convergence for solving these problems is know...

Conditional gradient type methods for composite nonlinear and stochastic optimization

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...

Passive Learning with Target Risk

In this paper we consider learning in a passive setting, but with a slight modification: we assume that the target expected loss, also referred to as the target risk, is provided to the learner in advance as prior knowledge. Unlike most studies in learning theory, which only incorporate prior knowledge into the generalization bounds, we are able to explicitly utilize the target risk in the learning...

A Stochastic Smoothing Algorithm for Semidefinite Programming

We use rank one Gaussian perturbations to derive a smooth stochastic approximation of the maximum eigenvalue function. We then combine this smoothing result with an optimal smooth stochastic optimization algorithm to produce an efficient method for solving maximum eigenvalue minimization problems, and detail a variant of this stochastic algorithm with monotonic line search. Overall, compared to...

Stochastic Variance-Reduced Hamilton Monte Carlo Methods

We propose a fast stochastic Hamilton Monte Carlo (HMC) method for sampling from a smooth and strongly log-concave distribution. At the core of our proposed method is a variance reduction technique inspired by recent advances in stochastic optimization. We show that, to achieve ε accuracy in 2-Wasserstein distance, our algorithm achieves Õ(n + κ^2 d^{1/2}/ε + κ^{4/3} d^{1/3} n^{2/3}/ε^{2/3}) gradient complexity (i.e., numb...

Publication date: 2006