Search results for: nonsmooth convex optimization problem
Number of results: 1,134,849
We propose a scalable method for semi-supervised (transductive) learning from massive network-structured datasets. Our approach to semi-supervised learning is based on representing the underlying hypothesis as a graph signal with small total variation. Requiring a small total variation of the graph signal representing the underlying hypothesis corresponds to the central smoothness assumption th...
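For orientation (a standard definition, not quoted from the abstract above): for a graph signal $$x \in \mathbb{R}^n$$ on a weighted graph with edge set $$\mathcal{E}$$ and weights $$W_{ij}$$, the total variation is typically $$\mathrm{TV}(x) = \sum_{(i,j) \in \mathcal{E}} W_{ij}\,|x_i - x_j|,$$ so requiring a small total variation forces the hypothesis to vary little across strongly connected nodes.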
In this paper we present a bundle method for solving a generalized variational inequality problem. This problem consists in finding a zero of the sum of two multivalued operators defined on a real Hilbert space. The first one is monotone and the second one is the subdifferential of a lower semicontinuous proper convex function. The method is based on the auxiliary problem principle due to Cohen and t...
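In standard notation (consistent with, but not quoted from, this abstract), the generalized variational inequality is the inclusion $$0 \in T(x) + \partial f(x),$$ where $$T$$ is a monotone multivalued operator on a real Hilbert space and $$\partial f$$ is the subdifferential of a lower semicontinuous proper convex function $$f$$.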
We analyze convergence rates of stochastic optimization algorithms for nonsmooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates of stochastic optimization procedures, both in expectation and with high probability, that have optimal dependence on the variance of the gradient estimates. To the best of our k...
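A hedged sketch of the randomized smoothing idea referenced here: the nonsmooth convex $$f$$ is replaced by the surrogate $$f_\mu(x) = \mathbb{E}_Z\!\left[f(x + \mu Z)\right]$$ for suitable noise $$Z$$ (e.g., Gaussian). The surrogate $$f_\mu$$ is differentiable with a gradient Lipschitz constant that scales inversely with $$\mu$$, so accelerated gradient methods can be run on $$f_\mu$$, while unbiased stochastic gradients of $$f_\mu$$ are obtained by sampling $$Z$$.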
In this paper, we study a semi-infinite programming (SIP) problem with a convex set constraint. Using the value function of the lower level problem, we reformulate the SIP problem as a nonsmooth optimization problem. Using the theory of nonsmooth Lagrange multiplier rules and Danskin’s theorem, we present constraint qualifications and necessary optimality conditions. We propose a new numerical meth...
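In the usual SIP notation (an assumption about the paper's setup, not a quote): the infinitely many constraints $$g(x,t) \le 0$$ for all $$t \in T$$ are collapsed into the single nonsmooth constraint $$v(x) = \max_{t \in T} g(x,t) \le 0,$$ where the value function $$v$$ is generally nonsmooth even for smooth $$g$$; Danskin's theorem then describes the (sub)differential of this max function.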
Sequential optimality conditions provide adequate theoretical tools to justify stopping criteria for nonlinear programming solvers. Here, nonsmooth approximate gradient projection and complementary approximate Karush-Kuhn-Tucker conditions are presented. These sequential optimality conditions are satisfied by local minimizers of optimization problems independently of the fulfillment of constrai...
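For context, in its classical smooth form (the paper's contribution is a nonsmooth generalization), the approximate gradient projection condition asks for a sequence $$x^k \to x^*$$ with $$\left\| P_\Omega\!\left(x^k - \nabla f(x^k)\right) - x^k \right\| \to 0,$$ where $$P_\Omega$$ denotes projection onto the feasible set; local minimizers generate such sequences whether or not a constraint qualification holds.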
We investigate the stochastic optimization problem of minimizing population risk, where the loss defining the risk is assumed to be weakly convex. Compositions of Lipschitz convex functions with smooth maps are the primary examples of such losses. We analyze the estimation quality of such nonsmooth and nonconvex problems by their sample average approximations. Our main results establish dimension-dependent rates on subgradient...
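A standard example consistent with this abstract: if $$f(x) = h(c(x))$$ with $$h$$ convex and $$L$$-Lipschitz and $$c$$ smooth with a $$\beta$$-Lipschitz Jacobian, then $$f$$ is $$L\beta$$-weakly convex, meaning $$x \mapsto f(x) + \tfrac{L\beta}{2}\|x\|^2$$ is convex; the sample average approximation then replaces the population risk $$\mathbb{E}_\xi f(x;\xi)$$ by an empirical mean over drawn samples.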
In this paper, we study the nonconvex nonsmooth optimization problem (P) of minimizing a tangentially convex function with inequality constraints where the constraint functions are convex. This is done by using the cone of tangential subdifferentials together with a new qualification. Indeed, we present a qualification to guarantee that the Karush-Kuhn-Tucker conditions are necessary and sufficient for optimality of (P). Moreov...
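Recall (standard definitions, supplied here for context): $$f$$ is tangentially convex at $$x$$ when the directional derivative $$f'(x;\cdot)$$ exists and is convex as a function of the direction, and the tangential subdifferential is $$\partial_T f(x) = \{\, v : \langle v, d \rangle \le f'(x; d) \ \text{for all } d \,\},$$ which reduces to the usual convex subdifferential when $$f$$ is convex.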
New mixed integer nonlinear optimization models for the Euclidean Steiner tree problem in d-space (with $$d\ge 3$$ ) will be presented in this work. All models feature a nonsmooth objective function, but the continuous relaxations of their sets of feasible solutions are convex. From these models, four convex and linear relaxations are considered. Each relaxation has the same set of feasible solutions as the model from which it is derived. Finally, preliminary co...
We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: The local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
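As a hedged illustration of the problem class in this last abstract (a plain subgradient step on L2-regularized hinge loss, not the paper's subLBFGS method; data, names, and step size below are illustrative assumptions):

import numpy as np

def hinge_subgradient(w, X, y, lam):
    # Subgradient of f(w) = (lam/2)||w||^2 + (1/n) sum_i max(0, 1 - y_i <w, x_i>)
    margins = y * (X @ w)
    active = margins < 1.0                    # samples where the hinge term is active
    g_loss = -(X[active].T @ y[active]) / len(y)
    return lam * w + g_loss

def subgradient_descent(X, y, lam=0.1, steps=500):
    w = np.zeros(X.shape[1])
    for k in range(1, steps + 1):
        # classical 1/(lam*k) step size for a lam-strongly convex objective
        w -= (1.0 / (lam * k)) * hinge_subgradient(w, X, y, lam)
    return w

rng = np.random.default_rng(0)                # synthetic, roughly separable data
X = rng.normal(size=(200, 5))
y = np.sign(X @ rng.normal(size=5) + 0.1 * rng.normal(size=200))
w = subgradient_descent(X, y)

The objective is nonsmooth exactly at points with unit margin, which is what motivates replacing the gradient in BFGS-type methods with subdifferential-based generalizations as the abstract describes.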