Dualization of Subgradient Conditions for Optimality
Author
Abstract
A basic relationship is derived between generalized subgradients of a given function, possibly nonsmooth and nonconvex, and those of a second function obtained from it by partial conjugation. Applications are made to the study of multiplier rules in finite-dimensional optimization and to the theory of the Euler-Lagrange condition and Hamiltonian condition in nonsmooth optimal control.
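As orientation for the operation the abstract names, here is a minimal sketch in standard Legendre–Fenchel notation (the symbols f, g, x, u, y are illustrative; the paper's exact form of partial conjugation may differ): for a function f(x, u) of two vector arguments, conjugating in the second argument alone gives

```latex
g(x, y) \;=\; \sup_{u}\,\bigl\{\, \langle u, y \rangle - f(x, u) \,\bigr\},
```

and the result relates the generalized subgradients of f to those of the partially conjugated function g.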
Similar articles
Ergodic Results in Subgradient Optimization
Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a major factor in their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions, since the subgradients used i...
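The basic scheme these abstracts refer to can be sketched in a few lines; the toy function f(x) = |x − 3|, its subgradient oracle, and the diminishing step rule t_k = 1/k are illustrative choices, not taken from any of the papers above:

```python
def subgradient_method(f, subgrad, x0, iters=2000):
    """Minimize a convex (possibly nonsmooth) f by x_{k+1} = x_k - t_k * g_k,
    with g_k a subgradient of f at x_k and the diminishing steps t_k = 1/k.
    f(x_k) need not decrease monotonically, so the best iterate is tracked."""
    x, best = x0, x0
    for k in range(1, iters + 1):
        x = x - (1.0 / k) * subgrad(x)
        if f(x) < f(best):
            best = x
    return best

# Toy instance (illustrative): f(x) = |x - 3| is convex, nonsmooth exactly at
# its minimizer x = 3, and sign(x - 3) is a valid subgradient everywhere.
f = lambda x: abs(x - 3.0)
g = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)
x_best = subgradient_method(f, g, x0=10.0)
```

Because the step lengths are square-summable-like but not summable (the harmonic series diverges), the iterates reach any neighborhood of the minimizer, which is exactly why diminishing step rules are the standard choice in Lagrangean relaxation.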
Dynamic Subgradient Methods
Lagrangian relaxation is commonly used to generate bounds for mixed-integer linear programming problems. However, when the number of dualized constraints is very large (exponential in the dimension of the primal problem), explicit dualization is no longer possible. In order to reduce the dual dimension, different heuristics were proposed. They involve a separation procedure to dynamically selec...
On ε-Optimality Conditions for Convex Set-valued Optimization Problems
In this paper, ε-subgradients for convex set-valued maps are defined. We prove an existence theorem for ε-subgradients of convex set-valued maps. Also, we give necessary optimality conditions for an ε-solution of a convex set-valued optimization problem (CSP). Moreover, using the single-valued function induced from the set-valued map, we obtain theorems describing the ε-subgradient sum formula for ...
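For orientation, the scalar notion that such set-valued definitions generalize is the classical ε-subdifferential of convex analysis: for a convex function f on R^n, a point x̄ with f(x̄) finite, and a tolerance ε ≥ 0,

```latex
\partial_\varepsilon f(\bar x) \;=\; \bigl\{\, x^* \in \mathbb{R}^n :
  f(x) \,\ge\, f(\bar x) + \langle x^*,\, x - \bar x \rangle - \varepsilon
  \;\; \text{for all } x \in \mathbb{R}^n \,\bigr\}.
```

For ε = 0 this reduces to the ordinary subdifferential, and 0 ∈ ∂_ε f(x̄) says precisely that x̄ minimizes f to within ε, i.e. is an ε-solution.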
Proto-derivative Formulas for Basic Subgradient Mappings in Mathematical Programming
Subgradient mappings associated with various convex and nonconvex functions are a vehicle for stating optimality conditions, and their proto-differentiability therefore plays a role in the sensitivity analysis of solutions to problems of optimization. Examples of special interest are the subgradients of the max of finitely many C^2 functions, and the subgradients of the indicator of a set defined...
Ergodic Convergence in Subgradient Optimization
When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions. Further, certain ...
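The phenomenon can be seen in a one-dimensional sketch (the function f(x) = |x| and the step rule t_k = 1/k are illustrative choices, not taken from the paper): near the minimizer the subgradients oscillate between +1 and −1 and never individually certify optimality, but a step-weighted (ergodic) average of the tail subgradients approaches 0 ∈ ∂f(0).

```python
def ergodic_tail_average(x0=5.0, iters=5000):
    """Subgradient method on f(x) = |x| with steps t_k = 1/k.
    Returns the final iterate and the step-weighted average of the
    subgradients over the second half of the run.  Note the telescoping
    identity sum_k t_k*g_k = x_start - x_end, which forces the tail
    average toward 0 even though each g_k is +1 or -1."""
    x = x0
    num = den = 0.0
    for k in range(1, iters + 1):
        g = 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)  # g in subdiff of |x|
        t = 1.0 / k
        if k > iters // 2:          # average only the tail iterates
            num += t * g
            den += t
        x -= t * g
    return x, num / den

x_final, g_avg = ergodic_tail_average()
```

Here x_final hovers within one step length of the minimizer while each individual subgradient has magnitude 1; only the averaged quantity g_avg is small, which is the sense in which ergodic sequences recover optimality certificates.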