Ergodic Convergence in Subgradient Optimization
Author
Abstract
When nonsmooth, convex minimization problems are solved by subgradient optimization methods, the subgradients used will in general not accumulate to subgradients which verify the optimality of a solution obtained in the limit. It is therefore not a straightforward task to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions. Further, certain supplementary information, such as convergent estimates of Lagrange multipliers and convergent lower bounds on the optimal objective value, is not directly available in subgradient schemes. As a means of overcoming these weaknesses in subgradient methods, we introduce the computation of an ergodic (averaged) sequence of subgradients. Specifically, we consider a nonsmooth, convex program solved by a conditional subgradient optimization scheme with divergent series step lengths, and show that the elements of the ergodic sequence of subgradients in the limit fulfill the optimality conditions at the optimal solution, to which the sequence of iterates converges. This result has three important implications. First, it enables the finite identification of active constraints at the solution obtained in the limit. Second, it is used to establish the ergodic convergence of sequences of Lagrange multipliers; this result enables us to carry out sensitivity analyses for solutions obtained by subgradient methods. The third implication is the convergence of a lower bounding procedure based on an ergodic sequence of affine underestimates of the objective function; this procedure provides a proper termination criterion for subgradient methods.
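To make the idea concrete, the following is a minimal sketch, not the paper's exact conditional scheme, of plain subgradient optimization with divergent series step lengths in which a step-length-weighted (ergodic) average of the subgradients is maintained alongside the iterates. The test problem, data, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

def f(x):
    # piecewise-linear convex objective: f(x) = max_i (a_i^T x + b_i)
    return np.max(A @ x + b)

def subgradient(x):
    # any row a_i attaining the maximum is a subgradient of f at x
    return A[np.argmax(A @ x + b)]

x = np.zeros(5)
g_avg = np.zeros(5)   # ergodic (averaged) subgradient
step_sum = 0.0

for t in range(1, 10001):
    alpha = 1.0 / t    # divergent series: sum(alpha_t) = inf, alpha_t -> 0
    g = subgradient(x)
    x = x - alpha * g  # (project onto the feasible set here in the constrained case)
    # accumulate the step-length-weighted average of the subgradients used
    g_avg = (step_sum * g_avg + alpha * g) / (step_sum + alpha)
    step_sum += alpha

print("f(x) =", f(x), "  ||ergodic subgradient|| =", np.linalg.norm(g_avg))
```

The individual subgradients g need not become small near an optimum, but the ergodic average g_avg tends toward a subgradient that verifies optimality (toward zero in this unconstrained illustration), which is what makes averaged information usable for monitoring progress and for termination tests.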
Similar Resources
Ergodic Results in Subgradient Optimization
Subgradient methods are popular tools for nonsmooth, convex minimization, especially in the context of Lagrangean relaxation; their simplicity has been a main contribution to their success. As a consequence of the nonsmoothness, it is not straightforward to monitor the progress of a subgradient method in terms of the approximate fulfillment of optimality conditions, since the subgradients used i...
Ergodic, primal convergence in dual subgradient schemes for convex programming
Lagrangean dualization and subgradient optimization techniques are frequently used within the field of computational optimization for finding approximate solutions to large, structured optimization problems. The dual subgradient scheme does not automatically produce primal feasible solutions; there is an abundance of techniques for computing such solutions (via penalty functions, tangential app...
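A minimal sketch of this primal-recovery idea, under assumed data and names rather than the cited scheme itself: relax the coupling constraints of a small linear program, take projected subgradient steps in the dual, and keep a step-length-weighted (ergodic) average of the primal subproblem solutions.

```python
import numpy as np

# minimize c^T x  s.t.  A x <= b,  0 <= x <= 1; relaxing A x <= b gives the
# subproblem min_x (c + A^T u)^T x over the box, solved coordinate-wise.
rng = np.random.default_rng(1)
m, n = 8, 12
A = rng.uniform(0, 1, (m, n))
c = -rng.uniform(0, 1, n)          # negative costs so the constraints bind
b = 0.4 * A.sum(axis=1)

u = np.zeros(m)                    # dual multipliers
x_bar = np.zeros(n)                # ergodic (averaged) primal iterate
step_sum = 0.0

for t in range(1, 5001):
    alpha = 1.0 / t                               # divergent series step lengths
    x = (c + A.T @ u < 0).astype(float)           # box subproblem solution
    u = np.maximum(0.0, u + alpha * (A @ x - b))  # projected dual subgradient step
    x_bar = (step_sum * x_bar + alpha * x) / (step_sum + alpha)
    step_sum += alpha

print("max violation of averaged primal:", np.max(A @ x_bar - b))
print("objective of averaged primal:", c @ x_bar)
```

The subproblem solutions x are vertices of the box and typically violate the relaxed constraints, whereas the averaged iterate x_bar approaches primal feasibility and optimality; this is the sense in which the dual scheme, which "does not automatically produce primal feasible solutions", can still deliver them ergodically.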
Ergodic, primal convergence in dual subgradient schemes for convex programming, II: the case of inconsistent primal problems
Consider the use of a Lagrangian dual method which is convergent for consistent convex optimization problems. When it is applied to an infeasible optimization problem, the inconsistency manifests itself through the divergence of the sequence of dual iterates. Will the sequence of primal subproblem solutions then still yield relevant information regarding the primal program? ...
Convergence rate analysis of several splitting schemes
Splitting schemes are a class of powerful algorithms that solve complicated monotone inclusions and convex optimization problems that are built from many simpler pieces. They give rise to algorithms in which the simple pieces of the decomposition are processed individually. This leads to easily implementable and highly parallelizable algorithms, which often obtain nearly state-of-the-art perfor...
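As a simple illustration of a splitting scheme that processes the "simple pieces" separately, here is a sketch of forward-backward splitting (proximal gradient) applied to a lasso problem; the data and names are illustrative assumptions, not taken from the referenced analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
lam = 0.5
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1: the nonsmooth piece handled on its own
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

x = np.zeros(10)
for _ in range(500):
    grad = A.T @ (A @ x - b)                      # forward step on the smooth piece
    x = soft_threshold(x - grad / L, lam / L)     # backward (proximal) step on the nonsmooth piece

print("lasso objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```

Each iteration touches the smooth least-squares piece only through a gradient and the nonsmooth L1 piece only through its proximal operator, which is what makes such decompositions easy to implement and to parallelize across pieces.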