The Linear Nonconvex Generalized Gradient
Abstract
A new nonconvex generalized gradient is defined and some of its calculus is developed. This generalized gradient is smaller than that of Mordukhovich but still has a good calculus. This calculus includes a rule for the linear generalized gradient of positive multiples of functions, a sum rule and a chain rule.
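The rules listed above can be illustrated in the general shape such calculus rules take for generalized gradients; the precise inclusions for the linear generalized gradient are in the paper, so the following is only an indicative sketch (with \(\partial\) denoting the generalized gradient, \(F\) smooth, and \(g\) Lipschitz):

```latex
\partial(\lambda f)(x) = \lambda\,\partial f(x), \qquad \lambda > 0
  \quad \text{(positive-multiple rule)}

\partial(f + g)(x) \subseteq \partial f(x) + \partial g(x)
  \quad \text{(sum rule)}

\partial(g \circ F)(x) \subseteq \nabla F(x)^{\mathsf T}\,\partial g\bigl(F(x)\bigr)
  \quad \text{(chain rule, smooth inner map)}
```

A smaller generalized gradient makes these inclusions sharper: the smaller the set on the left, the more information the rule carries, which is why a generalized gradient smaller than Mordukhovich's that still satisfies such rules is of interest.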
Similar Articles
An effective optimization algorithm for locally nonconvex Lipschitz functions based on mollifier subgradients
Lagrange Multipliers for Nonconvex Generalized Gradients with Equality, Inequality and Set Constraints
A Lagrange multiplier rule for finite-dimensional Lipschitz problems is proven that uses a nonconvex generalized gradient. This result uses either both the linear generalized gradient and the generalized gradient of Mordukhovich, or the linear generalized gradient and a qualification condition involving the pseudo-Lipschitz behavior of the feasible set under perturbations. The optimization problem ...
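Multiplier rules of this kind typically take the following general shape at a local minimizer \(x^*\) of \(f\) subject to \(g_i(x) \le 0\) and \(x \in C\); this is a standard template, not the paper's exact statement, which uses the linear generalized gradient and the qualification conditions described above:

```latex
0 \in \lambda_0\,\partial f(x^*) \;+\; \sum_{i=1}^{m} \lambda_i\,\partial g_i(x^*) \;+\; N(C; x^*),
\qquad \lambda_i \ge 0,
\qquad \lambda_i\, g_i(x^*) = 0,
```

where \(N(C; x^*)\) is a normal cone to the set constraint \(C\) and the complementary-slackness conditions \(\lambda_i g_i(x^*) = 0\) deactivate multipliers for non-binding inequality constraints.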
Regularized M-estimators with nonconvexity: Statistical and algorithmic theory for local optima
We establish theoretical results concerning local optima of regularized M-estimators, where both loss and penalty functions are allowed to be nonconvex. Our results show that as long as the loss satisfies restricted strong convexity and the penalty satisfies suitable regularity conditions, any local optimum of the composite objective lies within statistical precision of the true parameter vecto...
The Linear Nonconvex Generalized Gradient and Lagrange Multipliers
A Lagrange multiplier rule that uses small generalized gradients is introduced. It includes both inequality and set constraints. The generalized gradient is the linear generalized gradient. It is smaller than the generalized gradients of Clarke and Mordukhovich but retains much of their nice calculus. Its convex hull is the generalized gradient of Michel and Penot if a function is Lipschitz. Th...
Convergence Analysis of Proximal Gradient with Momentum for Nonconvex Optimization
In many modern machine learning applications, structures of underlying mathematical models often yield nonconvex optimization problems. Due to the intractability of nonconvexity, there is a rising need to develop efficient methods for solving general nonconvex problems with certain performance guarantee. In this work, we investigate the accelerated proximal gradient method for nonconvex program...
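To make the method concrete, here is a minimal sketch of accelerated proximal gradient with FISTA-style momentum, applied to the convex special case of L1-regularized least squares (where the proximal operator is soft-thresholding). This is an illustrative standard scheme, not the algorithm or nonconvex analysis of the paper above; the function names are our own.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_lasso(A, b, lam, step, iters=500):
    """Accelerated proximal gradient for
        min_x 0.5 * ||Ax - b||^2 + lam * ||x||_1.
    `step` should be at most 1 / ||A||_2^2 (inverse Lipschitz
    constant of the gradient of the smooth part)."""
    x = np.zeros(A.shape[1])
    y = x.copy()          # extrapolated (momentum) point
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)                    # gradient of smooth part at y
        x_new = soft_threshold(y - step * grad, step * lam)  # prox step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)        # momentum step
        x, t = x_new, t_new
    return x
```

In the nonconvex setting studied by the paper, the prox step uses a possibly nonconvex regularizer and the guarantee weakens from global optimality to convergence to critical points, but the update structure (gradient step, prox step, momentum extrapolation) is the same.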