Search results for: nonconvex vector optimization
Number of results: 506,335
Adaptivity is an important yet under-studied property in modern optimization theory. The gap between the state-of-the-art theory and current practice is striking in that algorithms with desirable theoretical guarantees typically involve drastically different settings of hyperparameters, such as step-size schemes and batch sizes, in different regimes. Despite the appealing results, such divisive strategies provide little, if a...
Privacy protection and nonconvexity are two challenging problems in decentralized optimization and learning involving sensitive data. Despite some recent advances addressing each of them separately, no results have been reported that provide theoretical guarantees on both privacy and saddle/maximum avoidance in nonconvex optimization. We propose a new algorithm that can enable rigorous differential privacy and saddle/maximum-avoiding perform...
This paper deals with the weak versions of vector variational-like inequalities, namely of Stampacchia and Minty type, under invexity in the framework of convexificators. The connection between both problems, along with the link to the optimization problem, is analyzed. An application to nonconvex mathematical programming has also been presented. Further, a bi-level version of these problems is formul...
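For orientation only (the snippet is truncated, and the paper's convexificator-based definitions may differ), the classical weak Stampacchia and Minty vector variational-like inequalities over a set $K$, with operator $A$, bifunction $\eta$, and ordering cone $C$ with nonempty interior, are usually written as

\[
\text{(SVVLI)}\quad \text{find } \bar{x}\in K \text{ such that } \langle A(\bar{x}),\, \eta(y,\bar{x})\rangle \notin -\operatorname{int} C \quad \text{for all } y\in K,
\]
\[
\text{(MVVLI)}\quad \text{find } \bar{x}\in K \text{ such that } \langle A(y),\, \eta(y,\bar{x})\rangle \notin -\operatorname{int} C \quad \text{for all } y\in K.
\]

Under suitable invexity and monotonicity assumptions, solutions of these inequalities are linked to (weakly) efficient points of the associated vector optimization problem.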
Optimal power allocation for secure estimation of multiple deterministic parameters is investigated under a total power constraint. The goal is to minimize the Cramer-Rao lower bound (CRLB) at an intended receiver while keeping estimation errors at an eavesdropper above specified target levels. To that end, an optimization problem is formulated by considering measurement models involving a linear transformation of the parameter vector...
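As a hedged illustration of the kind of objective involved (the exact signal model is not visible in the truncated snippet, and the symbols below are illustrative rather than the paper's notation), for a linear Gaussian measurement model the CRLB on the total estimation error of an unbiased estimator is the trace of the inverse Fisher information matrix:

\[
\mathbf{y} = \mathbf{H}(\mathbf{p})\,\boldsymbol{\theta} + \mathbf{n}, \quad \mathbf{n}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}), \qquad
\mathbf{J}(\mathbf{p}) = \mathbf{H}(\mathbf{p})^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{H}(\mathbf{p}), \qquad
\mathbb{E}\!\left[\|\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\|^{2}\right] \ge \operatorname{tr}\!\left(\mathbf{J}(\mathbf{p})^{-1}\right).
\]

A secrecy-aware power allocation would then minimize $\operatorname{tr}(\mathbf{J}_r(\mathbf{p})^{-1})$ at the intended receiver subject to $\operatorname{tr}(\mathbf{J}_e(\mathbf{p})^{-1}) \ge \varepsilon$ at the eavesdropper and a total power budget $\sum_i p_i \le P_T$, which is generally a nonconvex program; $\mathbf{p}$, $P_T$, and $\varepsilon$ are placeholder symbols here.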
In this paper, we propose a smoothing augmented Lagrangian method for finding a stationary point of a nonsmooth and nonconvex optimization problem. We show that any accumulation point of the iteration sequence generated by the algorithm is a stationary point provided that the penalty parameters are bounded. Furthermore, we show that a weak version of the generalized Mangasarian-Fromovitz constr...
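The snippet does not specify the paper's smoothing function or update rules, so the following is only a minimal generic sketch of the smoothing-plus-augmented-Lagrangian idea on a made-up toy problem (the problem, smoothing sqrt(x**2 + mu**2), and all parameter values are assumptions, not the paper's method):

import numpy as np
from scipy.optimize import minimize

# Toy problem (illustrative only): minimize the nonsmooth, nonconvex function
# |x0| + x1**2 * (x1 - 1)**2 subject to h(x) = x0 + x1 - 1 = 0.  The nonsmooth
# term |x0| is smoothed as sqrt(x0**2 + mu**2), with mu driven toward 0 across
# the outer augmented-Lagrangian iterations.

def h(x):
    return x[0] + x[1] - 1.0

def smoothed_objective(x, mu):
    return np.sqrt(x[0] ** 2 + mu ** 2) + x[1] ** 2 * (x[1] - 1.0) ** 2

def augmented_lagrangian(x, lam, rho, mu):
    c = h(x)
    return smoothed_objective(x, mu) + lam * c + 0.5 * rho * c ** 2

x, lam, rho, mu = np.array([2.0, -1.0]), 0.0, 10.0, 1.0
prev_viol = abs(h(x))
for k in range(20):
    # Inner step: approximately minimize the smoothed augmented Lagrangian.
    x = minimize(augmented_lagrangian, x, args=(lam, rho, mu), method="BFGS").x
    viol = abs(h(x))
    lam += rho * h(x)                    # multiplier update
    if viol > 0.25 * prev_viol:
        rho *= 2.0                       # increase the penalty if feasibility stalls
    prev_viol, mu = viol, 0.5 * mu       # tighten the smoothing parameter

print("x =", x, " |h(x)| =", abs(h(x)))

Under boundedness of the penalty parameters, schemes of this general shape are analyzed by showing that accumulation points of the inner solutions are stationary for the original nonsmooth problem.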
We describe a computational approach to the embedding problem in structural molecular biology. The approach is based on a dissimilarity parameterization of the problem that leads to a large-scale nonconvex bound constrained matrix optimization problem. The underlying idea is that an increased number of independent variables decouples the complicated effects of varying the location of individual...
The self-scaling quasi-Newton method solves an unconstrained optimization problem by scaling the Hessian approximation matrix before it is updated at each iteration, so as to avoid possibly large eigenvalues in the Hessian approximations of the objective function. It has been proved in the literature that this method has global and superlinear convergence when the objective function is...
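A minimal sketch of the standard self-scaling BFGS update (the Oren-Luenberger scaling factor and the toy quadratic below are illustrative assumptions, not this paper's specific variant or test problems):

import numpy as np

def self_scaling_bfgs_update(B, s, y):
    # Rescale B by tau = (y's)/(s'Bs) before the usual BFGS correction; this
    # damps excessively large eigenvalues of the Hessian approximation.
    Bs = B @ s
    sBs = float(s @ Bs)
    ys = float(y @ s)
    if ys <= 1e-12 or sBs <= 1e-12:      # skip the update if curvature is unusable
        return B
    tau = ys / sBs
    return tau * (B - np.outer(Bs, Bs) / sBs) + np.outer(y, y) / ys

# Illustrative use on a simple quadratic f(x) = 0.5*(x0**2 + 10*x1**2):
def grad(x):
    return np.array([x[0], 10.0 * x[1]])

x, B = np.array([5.0, 3.0]), np.eye(2)
for _ in range(30):
    g = grad(x)
    p = np.linalg.solve(B, -g)           # quasi-Newton search direction
    x_new = x + 0.5 * p                  # fixed step length, for brevity
    B = self_scaling_bfgs_update(B, x_new - x, grad(x_new) - g)
    x = x_new
print("x =", x)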
Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for convex consensus optimization. However, on consensus optimization with nonconvex objective functions, our understanding of the behavior of these algorithms is limited. When we lose convexity, we cannot hope to obtain globally optimal solutions (though we st...
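For context, a minimal decentralized gradient descent (DGD) iteration of the kind commonly studied in this setting is sketched below; the quadratic local losses, the ring mixing matrix, and the step size are made-up illustrations, not the algorithms or analysis of the snippet above. With nonconvex local objectives, such schemes are typically analyzed only up to stationarity of the (averaged) iterates rather than global optimality.

import numpy as np

# n agents minimize (1/n) * sum_i f_i(x) while exchanging iterates over a
# network described by a doubly stochastic mixing matrix W.
n, d, alpha = 4, 2, 0.1
rng = np.random.default_rng(0)
targets = rng.normal(size=(n, d))            # each agent's private data

def local_grad(i, x):                        # gradient of f_i(x) = 0.5*||x - targets[i]||^2
    return x - targets[i]

# Ring network with uniform mixing weights (rows and columns sum to 1).
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])

X = np.zeros((n, d))                         # row i is agent i's iterate
for k in range(200):
    grads = np.stack([local_grad(i, X[i]) for i in range(n)])
    X = W @ X - alpha * grads                # mix with neighbors, then take a local gradient step

print("consensus iterates:\n", X)
print("global minimizer (average of targets):", targets.mean(axis=0))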
Chart: number of search results per year