Subgradient methods for huge-scale optimization problems

Authors

Abstract


Similar articles

Subgradient methods for huge-scale optimization problems

We consider a new class of huge-scale problems, the problems with sparse subgradients. The most important functions of this type are piecewise linear. For optimization problems with uniform sparsity of the corresponding linear operators, we suggest a very efficient implementation of subgradient iterations, whose total cost depends logarithmically on the dimension. This technique is based on a recu...
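As a rough sketch of the mechanism (in Python, with hypothetical names; this is not the paper's code): for f(x) = max_i (a_i^T x - b_i) with sparse rows a_i, a subgradient step x <- x - h a_{i*} changes only a few coordinates of x, so only the residuals g_i = a_i^T x - b_i of the rows touching those coordinates need refreshing, and a binary tree over g restores the maximum in O(log m) per refreshed value.

class MaxTree:
    """Segment tree keeping (max value, argmax) of an array under point updates."""

    def __init__(self, values):
        self.m = len(values)
        self.size = 1
        while self.size < self.m:
            self.size *= 2
        self.node = [(float("-inf"), -1)] * (2 * self.size)
        for i, v in enumerate(values):
            self.node[self.size + i] = (v, i)
        for p in range(self.size - 1, 0, -1):
            self.node[p] = max(self.node[2 * p], self.node[2 * p + 1])

    def update(self, i, v):
        p = self.size + i
        self.node[p] = (v, i)
        p //= 2
        while p:
            self.node[p] = max(self.node[2 * p], self.node[2 * p + 1])
            p //= 2

    def argmax(self):
        return self.node[1]                         # (value, row index)


def sparse_subgradient_step(A_csr, A_csc, x, g, tree, h):
    """One step x <- x - h * a_{i*}, refreshing g = A x - b only where needed.

    A_csr and A_csc hold the same sparse matrix A in CSR and CSC form
    (e.g. scipy.sparse); g holds the current residuals a_i^T x - b_i.
    """
    _, i_star = tree.argmax()                       # active row gives a subgradient of f
    rs, re = A_csr.indptr[i_star], A_csr.indptr[i_star + 1]
    for j, a_ij in zip(A_csr.indices[rs:re], A_csr.data[rs:re]):
        x[j] -= h * a_ij                            # only the nonzeros of a_{i*} move
        cs, ce = A_csc.indptr[j], A_csc.indptr[j + 1]
        for k, a_kj in zip(A_csc.indices[cs:ce], A_csc.data[cs:ce]):
            g[k] -= h * a_ij * a_kj                 # only rows touching coordinate j
            tree.update(k, g[k])
    return x, g

The sketch only shows how the sparse update keeps the cost of locating the new maximum low; the paper's step-size rules and stopping criteria are omitted.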



Primal-Dual Subgradient Method for Huge-Scale Linear Conic Problems

In this paper we develop a primal-dual subgradient method for solving huge-scale Linear Conic Optimization Problems. Our main assumption is that the primal cone is formed as a direct product of many small-dimensional convex cones, and that the matrix A of the corresponding linear operator is uniformly sparse. In this case, our method can approximate the primal-dual optimal solution with accuracy ε ...
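As a rough illustration only (this is the classical saddle-point subgradient scheme on an LP Lagrangian, not the authors' huge-scale conic method), alternating projected subgradient steps in the primal and dual variables show what a primal-dual subgradient iteration looks like; the matrix A, the vectors b and c, the step size h, and all names below are assumptions made for the sketch.

import numpy as np

def primal_dual_subgradient(A, b, c, steps=1000, h=1e-2):
    """Projected subgradient steps on L(x, y) = c^T x + y^T (b - A x).

    Illustrative saddle-point scheme for min_{x >= 0} c^T x s.t. A x >= b,
    not the specific method of the paper.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    x_avg, y_avg = np.zeros(n), np.zeros(m)
    for k in range(1, steps + 1):
        gx = c - A.T @ y                     # subgradient of L in x
        gy = b - A @ x                       # supergradient of L in y
        x = np.maximum(x - h * gx, 0.0)      # primal descent step, projected onto x >= 0
        y = np.maximum(y + h * gy, 0.0)      # dual ascent step, projected onto y >= 0
        x_avg += (x - x_avg) / k             # averaged iterates approximate a saddle point
        y_avg += (y - y_avg) / k
    return x_avg, y_avg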


Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems

In this paper we propose new methods for solving huge-scale optimization problems. For problems of this size, even the simplest full-dimensional vector operations are very expensive. Hence, we propose to apply an optimization technique based on random partial update of decision variables. For these methods, we prove the global estimates for the rate of convergence. Surprisingly enough, for cert...
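As an illustration of random partial updates (the quadratic objective and all names below are assumptions, not the paper's setting), one can minimize f(x) = 0.5 x^T Q x - b^T x by changing a single randomly chosen coordinate per iteration with the coordinate-wise step 1/Q_ii, so each iteration reads only one column of Q instead of recomputing a full gradient from scratch.

import numpy as np

def random_coordinate_descent(Q, b, steps=10_000, rng=None):
    """Minimize f(x) = 0.5 * x^T Q x - b^T x by random coordinate steps."""
    rng = rng or np.random.default_rng(0)
    n = Q.shape[0]
    x = np.zeros(n)
    g = -b.astype(float).copy()          # gradient Q x - b, maintained incrementally
    for _ in range(steps):
        i = rng.integers(n)
        step = g[i] / Q[i, i]            # exact minimizer along coordinate i (L_i = Q_ii)
        x[i] -= step
        g -= step * Q[:, i]              # refresh the gradient after the one-coordinate move
    return x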


Inexact subgradient methods for quasi-convex optimization problems

In this paper, we consider a generic inexact subgradient algorithm to solve a nondifferentiable quasi-convex constrained optimization problem. The inexactness stems from computation errors and noise, which come from practical considerations and applications. Assuming that the computational errors and noise are deterministic and bounded, we study the effect of the inexactness on the subgradient ...
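A generic projected subgradient iteration with an inexact oracle can be sketched as follows; the oracle model, the normalization, and every name below are illustrative assumptions rather than the algorithm analyzed in that paper.

import numpy as np

def inexact_subgradient_method(subgrad_oracle, project, x0, steps=1000, step_size=1e-2):
    """Projected steps along a normalized, noise-corrupted subgradient.

    subgrad_oracle(x) is assumed to return g_true(x) + error with bounded error;
    project(x) maps a point back onto the feasible set.
    """
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        g = subgrad_oracle(x)
        norm = np.linalg.norm(g)
        if norm == 0.0:
            break                                # oracle reports a (near-)stationary point
        x = project(x - step_size * g / norm)    # normalized step, common in quasi-convex settings
    return x

# Example use (hypothetical): minimize |x - 3| over [0, 10] with a noisy sign oracle.
#   rng = np.random.default_rng(0)
#   oracle = lambda x: np.sign(x - 3.0) + 0.01 * rng.standard_normal(x.shape)
#   proj = lambda x: np.clip(x, 0.0, 10.0)
#   x_hat = inexact_subgradient_method(oracle, proj, x0=np.array([9.0]))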



Journal

Journal title: Mathematical Programming

Year: 2013

ISSN: 0025-5610, 1436-4646

DOI: 10.1007/s10107-013-0686-4