Accelerated randomized stochastic optimization
Authors
Abstract
We propose a general class of randomized gradient estimates to be employed in the recursive search of the minimum of an unknown multivariate regression function. Here only two observations per iteration step are used. As special cases it includes random direction stochastic approximation (Kushner and Clark), simultaneous perturbation stochastic approximation (Spall) and a special kernel based s...
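As a concrete illustration of the two-observation recursion, the sketch below implements the simultaneous perturbation (SPSA) special case in Python. The toy regression function, gain sequences, Rademacher perturbation distribution, and helper names (noisy_f, spsa_step) are illustrative assumptions, not the paper's exact construction.

import numpy as np

# Minimal sketch: recursive search for the minimizer of an unknown regression
# function, using a randomized gradient estimate built from only two noisy
# observations per iteration (SPSA-style Rademacher perturbations).

rng = np.random.default_rng(0)

def noisy_f(x):
    """Unknown regression function observed with additive noise (toy example)."""
    return np.sum((x - 1.0) ** 2) + 0.1 * rng.standard_normal()

def spsa_step(x, a, c):
    """One recursion step: gradient estimate from two observations only."""
    delta = rng.choice([-1.0, 1.0], size=x.shape)   # random perturbation direction
    g_hat = (noisy_f(x + c * delta) - noisy_f(x - c * delta)) / (2.0 * c) * (1.0 / delta)
    return x - a * g_hat                            # stochastic-approximation update

x = np.zeros(5)
for k in range(1, 2001):
    a_k = 0.5 / k           # gain sequence a_k -> 0 with divergent sum
    c_k = 0.1 / k ** 0.25   # perturbation size c_k -> 0
    x = spsa_step(x, a_k, c_k)
print(x)  # should approach the minimizer (1, ..., 1)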
Similar articles
Randomized Smoothing for Stochastic Optimization
We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our k...
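A rough sketch of the combination described here: a two-point Gaussian-smoothing gradient estimate for a nonsmooth convex objective fed into a Nesterov-style accelerated update. The toy objective, smoothing radius, sample count, step size, and the helper smoothed_grad are illustrative assumptions, not the constants or procedure analyzed in the paper.

import numpy as np

# Randomized smoothing idea: replace the nonsmooth f by E[f(x + u*Z)] with
# Gaussian Z, estimate its gradient from function values, and use the estimate
# inside an accelerated (momentum) update.

rng = np.random.default_rng(0)

def f(x):
    """Nonsmooth convex toy objective: l1 distance to a target point."""
    return np.sum(np.abs(x - 2.0))

def smoothed_grad(x, u, n_samples=4):
    """Symmetric two-point estimate of the gradient of the smoothed objective."""
    g = np.zeros_like(x)
    for _ in range(n_samples):
        z = rng.standard_normal(x.shape)
        g += (f(x + u * z) - f(x - u * z)) / (2.0 * u) * z
    return g / n_samples

x = y = np.zeros(10)
step, u = 0.05, 0.01
for k in range(500):
    x_next = y - step * smoothed_grad(y, u)
    y = x_next + (k / (k + 3.0)) * (x_next - x)   # Nesterov momentum extrapolation
    x = x_next
print(f(x))  # should be well below the starting value f(0) = 20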
Accelerated Method for Stochastic Composition Optimization with Nonsmooth Regularization
Stochastic composition optimization draws much attention recently and has been successful in many emerging applications of machine learning, statistical analysis, and reinforcement learning. In this paper, we focus on the composition problem with nonsmooth regularization penalty. Previous works either have slow convergence rate, or do not provide complete convergence analysis for the general pr...
An Accelerated Method for Derivative-Free Smooth Stochastic Convex Optimization
We consider an unconstrained problem of minimization of a smooth convex function which is only available through noisy observations of its values, the noise consisting of two parts. Similar to stochastic optimization problems, the first part is of a stochastic nature. On the opposite, the second part is an additive noise of an unknown nature, but bounded in the absolute value. In the two-point ...
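A sketch of the two-point oracle model described here, with a zero-mean stochastic noise term plus a bounded disturbance of unknown nature. The test function, noise levels, the plain decreasing-step update, and the helpers oracle and two_point_grad are illustrative assumptions; this is not the paper's accelerated method.

import numpy as np

# Zeroth-order setting: the smooth convex function is only available through
# values corrupted by stochastic noise plus an additive noise bounded by DELTA.

rng = np.random.default_rng(0)
DELTA = 1e-3  # bound on the non-stochastic additive noise

def oracle(x):
    """Noisy function-value oracle: f(x) + stochastic noise + bounded noise."""
    f = 0.5 * np.sum(x ** 2)                     # smooth convex test function
    stochastic = 0.01 * rng.standard_normal()    # zero-mean stochastic part
    bounded = DELTA * np.sin(np.sum(x))          # arbitrary but bounded part
    return f + stochastic + bounded

def two_point_grad(x, t):
    """Gradient estimate from two oracle calls along a random unit direction."""
    e = rng.standard_normal(x.shape)
    e /= np.linalg.norm(e)
    return (oracle(x + t * e) - oracle(x - t * e)) / (2.0 * t) * len(x) * e

x = np.ones(8)
for k in range(1, 3001):
    x = x - (0.5 / k) * two_point_grad(x, t=1e-2)
print(np.linalg.norm(x))  # should be much smaller than the starting norm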
Accelerated Gradient Methods for Stochastic Optimization and Online Learning
Regularized risk minimization often involves non-smooth optimization, either because of the loss function (e.g., hinge loss) or the regularizer (e.g., l1-regularizer). Gradient methods, though highly scalable and easy to implement, are known to converge slowly. In this paper, we develop a novel accelerated gradient method for stochastic optimization while still preserving their computational si...
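For context, the sketch below shows the standard accelerated proximal-gradient (FISTA-style) skeleton for an l1-regularized least-squares risk, where the nonsmooth regularizer is handled through its soft-thresholding proximal map instead of subgradients. The problem data, step size, momentum schedule, and the helper soft_threshold are assumptions, and this deterministic skeleton only stands in for the paper's stochastic/online method.

import numpy as np

# Accelerated proximal gradient (FISTA-style) for min_w 0.5*||Aw - b||^2 + lam*||w||_1.

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = 1.0
b = A @ w_true + 0.01 * rng.standard_normal(200)
lam = 0.1

def soft_threshold(v, tau):
    """Proximal map of tau * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
w = y = np.zeros(50)
t = 1.0
for _ in range(300):
    grad = A.T @ (A @ y - b)           # gradient of the smooth least-squares loss
    w_next = soft_threshold(y - grad / L, lam / L)
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = w_next + ((t - 1.0) / t_next) * (w_next - w)   # momentum extrapolation
    w, t = w_next, t_next
print(np.round(w[:8], 2))  # first five coordinates approximately recover w_true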
Journal
Journal title: The Annals of Statistics
Year: 2003
ISSN: 0090-5364
DOI: 10.1214/aos/1059655913