Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition

Authors

  • Aaditya Ramdas
  • Aarti Singh
Abstract

We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is determined only by the rate of growth of the function around its minimizer x_{f,S}, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as ‖x − x_{f,S}‖^κ around its minimum, for some κ > 1, then the optimal rate of learning f(x_{f,S}) is Θ(T^{−κ/(2κ−2)}). The classic rates of Θ(1/√T) for convex functions and Θ(1/T) for strongly convex functions are special cases of our result for κ → ∞ and κ = 2, and even faster rates are attained for κ < 2. We also derive tight bounds for the complexity of learning x_{f,S}, where the optimal rate is Θ(T^{−1/(2κ−2)}). Interestingly, these precise rates for convex optimization also characterize the complexity of active learning, and our results further strengthen the connections between the two fields, both of which rely on feedback-driven queries.
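
As a quick sanity check on the exponent (a worked substitution using only the rate quoted above, not any additional result from the paper), the two special cases reduce as follows:

\[
\kappa = 2:\quad \Theta\!\left(T^{-\kappa/(2\kappa-2)}\right) = \Theta\!\left(T^{-1}\right) = \Theta(1/T),
\qquad
\kappa \to \infty:\quad \frac{\kappa}{2\kappa-2} \to \frac{1}{2},\ \text{so the rate approaches } \Theta(1/\sqrt{T}).
\]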

Related Articles

Optimal rates for stochastic convex optimization under Tsybakov noise condition

We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first-order oracle. We argue that the complexity of convex minimization is determined only by the rate of growth of the function around its minimizer x_{f,S}, as quantified by a Tsybakov-like noise condition. Specifically, we prove that if f grows at least as fast as ‖x − x_{f,S}‖^κ around its...

Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from...

A Convex Formulation for Mixed Regression with Two Components: Minimax Optimal Rates

We consider the mixed regression problem with two components, under adversarial and stochastic noise. We give a convex optimization formulation that provably recovers the true solution, and provide upper bounds on the recovery errors for both arbitrary noise and stochastic noise settings. We also give matching minimax lower bounds (up to log factors), showing that under certain assumptions, our...

Asynchronous stochastic convex optimization

We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from...

Beating the Minimax Rate of Active Learning with Prior Knowledge

Active learning refers to the learning protocol where the learner is allowed to choose a subset of instances for labeling. Previous studies have shown that, compared with passive learning, active learning is able to reduce the label complexity exponentially if the data are linearly separable or satisfy the Tsybakov noise condition with parameter κ = 1. In this paper, we propose a novel active l...


Journal title:

Volume   Issue

Pages  -

Publication date: 2012