Search results for: nonconvex vector optimization

Number of results: 506335

Journal: :SIAM journal on mathematics of data science 2022

Adaptivity is an important yet under-studied property in modern optimization theory. The gap between the state-of-the-art theory and current practice is striking: algorithms with desirable theoretical guarantees typically involve drastically different settings of hyperparameters, such as step-size schemes and batch sizes, in different regimes. Despite appealing results, such divisive strategies provide little, if a...

Journal: :Automatica 2023

Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data. Despite some recent advances addressing each of the two separately, no results have been reported with theoretical guarantees on both privacy protection and saddle/maximum avoidance in nonconvex optimization. We propose a new algorithm that can enable rigorous differential privacy and saddle/maximum-avoiding perform...

Journal: :Journal of Industrial and Management Optimization 2022

This paper deals with the weak versions of vector variational-like inequalities, namely of Stampacchia and Minty type, under invexity in the framework of convexificators. The connection between both problems, along with the link to the optimization problem, is analyzed. An application to nonconvex mathematical programming has also been presented. Further, a bi-level version of these is formul...
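For reference, the weak Stampacchia and Minty vector variational-like inequalities are typically stated in the following standard form (this is the common textbook formulation with an operator $T$, invexity kernel $\eta$, and ordering cone $C$; the paper's convexificator-based version may differ):

```latex
% Weak Stampacchia vector variational-like inequality:
% find \bar{x} \in K such that, for all y \in K,
\langle T(\bar{x}), \eta(y, \bar{x}) \rangle \notin -\operatorname{int} C .
% Weak Minty vector variational-like inequality:
% find \bar{x} \in K such that, for all y \in K,
\langle T(y), \eta(y, \bar{x}) \rangle \notin -\operatorname{int} C .
```

Under invexity assumptions, solutions of these two problems coincide with (weakly efficient) solutions of the associated vector optimization problem.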

Journal: :IEEE Signal Processing Letters 2021

Optimal power allocation for secure estimation of multiple deterministic parameters is investigated under a total power constraint. The goal is to minimize the Cramér-Rao lower bound (CRLB) at an intended receiver while keeping estimation errors at an eavesdropper above specified target levels. To that end, an optimization problem is formulated by considering measurement models involving a linear transformation of the parameter vector...
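The CRLB in such settings is obtained from the Fisher information matrix. A minimal sketch, assuming a linear Gaussian measurement model y = √p·H·θ + n with n ~ N(0, Σ) (the matrix H, noise covariance Σ, and power p below are illustrative assumptions, not the paper's exact model):

```python
import numpy as np

def crlb_total(H, Sigma, p):
    """Total CRLB (trace of the inverse Fisher information) for estimating
    theta from y = sqrt(p) * H @ theta + n, with noise n ~ N(0, Sigma).
    Illustrative linear Gaussian model, not the paper's exact setup."""
    Sigma_inv = np.linalg.inv(Sigma)
    fim = p * H.T @ Sigma_inv @ H        # Fisher information matrix
    return np.trace(np.linalg.inv(fim))  # sum of per-parameter CRLBs

# In this model, doubling the power halves the bound:
H = np.eye(2)
Sigma = np.eye(2)
print(crlb_total(H, Sigma, p=4.0))  # 2 parameters / power 4 -> 0.5
```

The trade-off in the abstract then comes from choosing per-parameter powers that minimize this trace at the intended receiver subject to lower bounds on the eavesdropper's CRLB.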

Journal: :J. Global Optimization 2015
Mengwei Xu Jane J. Ye Liwei Zhang

In this paper, we propose a smoothing augmented Lagrangian method for finding a stationary point of a nonsmooth and nonconvex optimization problem. We show that any accumulation point of the iteration sequence generated by the algorithm is a stationary point provided that the penalty parameters are bounded. Furthermore, we show that a weak version of the generalized Mangasarian-Fromovitz constr...
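For context, the basic (non-smoothing) augmented Lagrangian iteration for an equality-constrained problem min f(x) s.t. c(x) = 0 alternates an inner minimization of L_ρ(x, λ) = f(x) + λᵀc(x) + (ρ/2)‖c(x)‖² with a multiplier update λ ← λ + ρ c(x). A minimal sketch on a smooth toy problem (the objective, constraint, and step sizes are illustrative assumptions; the paper's smoothing variant for nonsmooth problems is not reproduced here):

```python
import numpy as np

# Toy problem (illustrative, not from the paper):
# minimize f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
grad_f = lambda x: 2.0 * x           # gradient of the objective
c = lambda x: x[0] + x[1] - 1.0      # constraint value
grad_c = np.array([1.0, 1.0])        # constraint gradient (constant here)

x, lam, rho = np.zeros(2), 0.0, 10.0
for _ in range(20):                  # outer multiplier iterations
    for _ in range(300):             # inner gradient descent on L_rho
        g = grad_f(x) + (lam + rho * c(x)) * grad_c
        x -= 0.02 * g
    lam += rho * c(x)                # first-order multiplier update

print(x)  # approaches the constrained minimizer (0.5, 0.5)
```

The step size 0.02 is chosen below 2/(2 + 2ρ) so the inner quadratic minimization is stable; keeping ρ bounded mirrors the boundedness assumption on the penalty parameters in the abstract.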

Journal: :SIAM J. Scientific Computing 2009
Ian G. Grooms Robert Michael Lewis Michael W. Trosset

We describe a computational approach to the embedding problem in structural molecular biology. The approach is based on a dissimilarity parameterization of the problem that leads to a large-scale nonconvex bound constrained matrix optimization problem. The underlying idea is that an increased number of independent variables decouples the complicated effects of varying the location of individual...

2005
Hongxia Yin Donglei Du

The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration to avoid the possible large eigenvalues in the Hessian approximation matrices of the objective function. It has been proved in the literature that this method has the global and superlinear convergence when the objective function is...
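The idea can be illustrated with a self-scaling BFGS update that rescales the Hessian approximation B by τ = yᵀs / (sᵀBs) before each update, an Oren-Luenberger-type scaling that damps overly large eigenvalues. The quadratic test function and Armijo backtracking below are illustrative assumptions, not the exact method analyzed in the paper:

```python
import numpy as np

# Illustrative quadratic: f(x) = 0.5 x^T A x - b^T x, minimizer A^{-1} b = [1, 0.1].
A = np.diag([1.0, 10.0])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x = np.zeros(2)
B = np.eye(2)                        # Hessian approximation
for _ in range(100):
    g = grad(x)
    p = -np.linalg.solve(B, g)       # quasi-Newton direction
    t = 1.0                          # Armijo backtracking line search
    while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
        t *= 0.5
    s = t * p
    y = grad(x + s) - g
    if y @ s > 1e-12:                # keep B positive definite
        tau = (y @ s) / (s @ B @ s)  # self-scaling factor
        B = tau * B                  # rescale BEFORE the BFGS update
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    x = x + s

print(x)  # close to the minimizer [1.0, 0.1]
```

The scaling step distinguishes this from plain BFGS, whose update alone can accumulate large eigenvalues in B when curvature estimates are poor.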

Journal: :CoRR 2016
Jinshan Zeng Wotao Yin

Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, on consensus optimization with nonconvex objective functions, our understanding of the behavior of these algorithms is limited. When we lose convexity, we cannot hope to obtain globally optimal solutions (though we st...
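One widely used scheme in this setting is decentralized gradient descent (DGD), where each agent averages its iterate with its neighbors' via a doubly stochastic mixing matrix W and then takes a local gradient step. A minimal sketch with three agents and quadratic local objectives (the network, objectives, and step size are illustrative assumptions; with a constant step size DGD only reaches a neighborhood of the consensus optimum):

```python
import numpy as np

# Illustrative setup: 3 agents with local objectives f_i(x) = 0.5 * (x - a_i)^2,
# so the minimizer of sum_i f_i is the mean of the a_i, here 1.0.
a = np.array([0.0, 1.0, 2.0])
W = np.array([[0.50, 0.25, 0.25],    # doubly stochastic mixing matrix
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.zeros(3)                      # one scalar iterate per agent
alpha = 0.01                         # constant step size
for _ in range(2000):
    x = W @ x - alpha * (x - a)      # DGD: mix with neighbors, then local step

print(x)  # each agent's iterate lands near the consensus optimum 1.0
```

With a constant step size the agents agree only up to an O(alpha) consensus error; a diminishing step size would drive them to exact consensus at the optimum.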
