Gradient projection method for convex function and strongly convex set

Supported by the Russian Foundation for Basic Research, grant 13-01-00295.
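As background on the method named in the title: the classical projected gradient iteration is x_{k+1} = P_Q(x_k - α ∇f(x_k)) for a convex function f over a closed convex set Q. Below is a minimal sketch using the Euclidean ball, a canonical strongly convex set with a closed-form projection; the objective, radius, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto {x : ||x|| <= radius},
    a standard example of a strongly convex set."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def projected_gradient(grad_f, x0, radius, step=0.1, iters=200):
    """Classical projected gradient method: gradient step, then projection."""
    x = x0
    for _ in range(iters):
        x = project_ball(x - step * grad_f(x), radius)
    return x

# Illustrative problem: minimize ||x - c||^2 over the unit ball.
c = np.array([2.0, 1.0])
x_star = projected_gradient(lambda x: 2 * (x - c), np.zeros(2), radius=1.0)
print(x_star)  # approaches c / ||c||, the projection of c onto the ball
```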

Authors

Abstract


Similar articles

Alternating Proximal Gradient Method for Convex Minimization

In this paper, we propose an alternating proximal gradient method that solves convex minimization problems with three or more separable blocks in the objective function. Our method is based on the framework of the alternating direction method of multipliers. The main computational effort in each iteration of the proposed method is to compute the proximal mappings of the involved convex functions. T...
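As a brief illustration of the proximal mappings this abstract refers to (the per-iteration workhorse of such methods), the sketch below evaluates prox_{λg}(v) = argmin_x g(x) + (1/2λ)||x - v||² for g = ||·||₁, whose closed form is the well-known soft-thresholding operator; the alternating multi-block scheme itself is not reproduced here.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal mapping of lam * ||x||_1: soft-thresholding.
    prox_{lam*g}(v) = argmin_x lam*||x||_1 + 0.5*||x - v||^2"""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([3.0, -0.5, 1.2])
print(prox_l1(v, lam=1.0))  # -> [ 2.  -0.   0.2]
```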


On generalized Hermite-Hadamard inequality for generalized convex function

In this paper, a new inequality for generalized convex functions, related to the left-hand side of the generalized Hermite-Hadamard inequality, is obtained. Some applications to generalized special means are also given.
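For reference, the classical Hermite-Hadamard inequality, whose left-hand side the generalization refers to, states that for a convex function f on [a, b]:

```latex
f\!\left(\frac{a+b}{2}\right)
\;\le\;
\frac{1}{b-a}\int_a^b f(x)\,dx
\;\le\;
\frac{f(a)+f(b)}{2}
```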


Strongly almost ideal convergent sequences in a locally convex space defined by Musielak-Orlicz function

In this article, we introduce a new class of ideal convergent sequence spaces using an infinite matrix, Musielak-Orlicz function and a new generalized difference matrix in locally convex spaces. We investigate some linear topological structures and algebraic properties of these spaces. We also give some relations related to these sequence spaces.
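For readers unfamiliar with the terms, the standard definitions (general background, not quoted from the paper) are:

```latex
M:[0,\infty)\to[0,\infty) \text{ is an Orlicz function if it is continuous, convex,}
\text{nondecreasing, with } M(0)=0 \text{ and } M(x)\to\infty \text{ as } x\to\infty.

\text{A Musielak-Orlicz function is a sequence } \mathcal{M}=(M_k) \text{ of Orlicz functions.}

\text{For an ideal } I \subseteq 2^{\mathbb{N}},\ (x_k) \text{ is } I\text{-convergent to } L \text{ if }
\{\,k : |x_k - L| \ge \varepsilon\,\} \in I \ \text{for every } \varepsilon>0.
```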


Efficient Stochastic Gradient Descent for Strongly Convex Optimization

We motivate this study from a recent work on a stochastic gradient descent (SGD) method with only one projection (Mahdavi et al., 2012), which aims at alleviating the computational bottleneck of the standard SGD method in performing the projection at each iteration, and enjoys an O(log T / T) convergence rate for strongly convex optimization. In this paper, we make further contributions along th...
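A hedged sketch of the one-projection idea mentioned above: run SGD on a strongly convex objective without per-iteration projections and project only once at the end. The step size, objective, and noise model below are illustrative assumptions; the exact penalty-based algorithm of Mahdavi et al. (2012) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def project_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else (radius / n) * x

def sgd_one_projection(stoch_grad, x0, T, lam):
    """SGD without per-iterate projections, using step 1/(lam*t) for a
    lam-strongly convex objective; project the averaged iterate once at
    the end. Simplified illustration of the one-projection idea."""
    x, avg = x0.copy(), np.zeros_like(x0)
    for t in range(1, T + 1):
        x -= (1.0 / (lam * t)) * stoch_grad(x)
        avg += (x - avg) / t          # running average of iterates
    return project_ball(avg)          # single projection, after the loop

# Illustrative strongly convex objective: f(x) = 0.5*||x - c||^2, noisy gradients.
c = np.array([2.0, 0.0])
noisy_grad = lambda x: (x - c) + 0.1 * rng.standard_normal(2)
print(sgd_one_projection(noisy_grad, np.zeros(2), T=5000, lam=1.0))
```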


Making Gradient Descent Optimal for Strongly Convex Stochastic Optimization

Stochastic gradient descent (SGD) is a simple and popular method to solve stochastic optimization problems which arise in machine learning. For strongly convex problems, its convergence rate was known to be O(log(T)/T), by running SGD for T iterations and returning the average point. However, recent results showed that using a different algorithm, one can get an optimal O(1/T) rate. This mig...
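One simple modification known in this line of work to recover the optimal O(1/T) rate for strongly convex SGD is suffix averaging: average only the last fraction of the iterates instead of all T of them. A minimal sketch under assumed details (step size 1/(λt), illustrative objective and noise model):

```python
import numpy as np

rng = np.random.default_rng(1)

def sgd_suffix_average(stoch_grad, x0, T, lam, alpha=0.5):
    """SGD with step 1/(lam*t); return the average of the last alpha*T
    iterates (suffix averaging) rather than the average of all iterates."""
    x = x0.copy()
    start = int((1 - alpha) * T)      # begin averaging after this iterate
    suffix_sum, count = np.zeros_like(x0), 0
    for t in range(1, T + 1):
        x -= (1.0 / (lam * t)) * stoch_grad(x)
        if t > start:
            suffix_sum += x
            count += 1
    return suffix_sum / count

# Same illustrative objective as above: f(x) = 0.5*||x - c||^2.
c = np.array([2.0, 0.0])
noisy_grad = lambda x: (x - c) + 0.1 * rng.standard_normal(2)
print(sgd_suffix_average(noisy_grad, np.zeros(2), T=5000, lam=1.0))  # ~ c
```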



Journal

Journal title: IFAC-PapersOnLine

Year: 2015

ISSN: 2405-8963

DOI: 10.1016/j.ifacol.2015.11.085