Search results for: monotone linear complementarity problem
Number of results: 1,293,150
In this paper we consider a proximal point algorithm (PPA) for solving the nonlinear complementarity problem (NCP) with a P0 function. PPA was originally proposed by Martinet and further developed by Rockafellar for monotone variational inequalities and monotone operator problems. PPA is known to have nice convergence properties under mild conditions. However, until now, it has been applied m...
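The snippet does not show the paper's exact regularization or convergence analysis. As a rough illustration of the generic PPA idea for an NCP, the sketch below solves, at each outer step, the regularized subproblem NCP(F + c(· − x^k)) anchored at the current iterate; the inner projected fixed-point solver, the step size, and the test problem are illustrative assumptions, not the paper's method.

```python
import numpy as np

def solve_regularized_ncp(F, x_anchor, c, tau=0.05, iters=2000):
    """Inner solve: NCP for the regularized map F_c(x) = F(x) + c*(x - x_anchor).

    Uses the projected fixed-point iteration  x <- max(0, x - tau * F_c(x)),
    which converges when F_c is strongly monotone and tau is small enough."""
    x = np.maximum(x_anchor, 0.0)
    for _ in range(iters):
        x = np.maximum(0.0, x - tau * (F(x) + c * (x - x_anchor)))
    return x

def proximal_point_ncp(F, x0, c=1.0, outer_iters=30):
    """Outer PPA loop: each step solves an NCP for the map regularized around
    the current iterate (a sketch of the generic scheme, not the paper's)."""
    x = np.array(x0, dtype=float)
    for _ in range(outer_iters):
        x = solve_regularized_ncp(F, x, c)
    return x

# Example: NCP(F) with F(x) = M x + q, M symmetric positive definite
# (so F is monotone and, in particular, a P0 function).
M = np.array([[2.0, 1.0], [1.0, 1.0]])
q = np.array([-4.0, -3.0])
F = lambda x: M @ x + q

x_star = proximal_point_ncp(F, x0=[0.0, 0.0])   # exact solution is (1, 2)
print("x* ≈", x_star, "  F(x*) ≈", F(x_star), "  x^T F(x) ≈", x_star @ F(x_star))
```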
A reformulation of the nonlinear complementarity problem (NCP) as an unconstrained minimization problem is considered. It is shown that any stationary point of the unconstrained objective function is already a solution of NCP if the mapping F involved in NCP is continuously differentiable and monotone. A descent algorithm is described which uses only function values of F. Some numerical results...
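The snippet does not name the merit function used. As one standard example of such an unconstrained reformulation (not necessarily the one in this paper), the Fischer-Burmeister merit function vanishes exactly at solutions of the NCP and can be minimized using only values of F; here it is minimized with a derivative-free method purely for illustration rather than with the paper's descent scheme.

```python
import numpy as np
from scipy.optimize import minimize

def fb(a, b):
    """Fischer-Burmeister function: zero iff a >= 0, b >= 0 and a*b = 0."""
    return np.sqrt(a**2 + b**2) - a - b

def merit(x, F):
    """Unconstrained merit function Psi(x) = 0.5 * sum_i fb(x_i, F_i(x))^2.
    Psi(x) = 0 exactly at solutions of NCP(F)."""
    r = fb(x, F(x))
    return 0.5 * np.dot(r, r)

# Monotone example: F(x) = M x + q with M positive definite; solution is (1, 0).
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
F = lambda x: M @ x + q

res = minimize(lambda x: merit(x, F), x0=np.zeros(2), method="Nelder-Mead",
               options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 5000})
print("x ≈", res.x, "  F(x) ≈", F(res.x), "  Psi(x) ≈", merit(res.x, F))
```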
We present a new interior-point potential-reduction algorithm for solving monotone linear complementarity problems (LCPs) that have a particular special structure: their matrix M ∈ R^{n×n} can be decomposed as M = ΦU + Π0, where the rank of Φ is k < n, and Π0 denotes Euclidean projection onto the nullspace of Φ⊤. We call such LCPs projective. Our algorithm solves a monotone projective LCP to relati...
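The structure itself is easy to illustrate numerically. The sketch below builds one (arbitrarily chosen) monotone projective matrix M = ΦU + Π0 with rank(Φ) = k < n and checks that its symmetric part is positive semidefinite, which is what monotonicity of the LCP means. The particular construction of U is an assumption for illustration only; the potential-reduction algorithm itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Low-rank factor Phi (n x k) and the projector onto null(Phi^T).
Phi = rng.standard_normal((n, k))
P_range = Phi @ np.linalg.pinv(Phi)      # projection onto range(Phi)
Pi0 = np.eye(n) - P_range                # projection onto null(Phi^T)

# One way (among many) to make Phi @ U monotone: U = (I + A) Phi^T with A
# skew-symmetric, so the symmetric part of Phi U is Phi Phi^T (PSD).
A = rng.standard_normal((k, k))
A = A - A.T
U = (np.eye(k) + A) @ Phi.T

M = Phi @ U + Pi0                        # a "projective" LCP matrix

# Sanity checks: Phi has rank k and M is monotone (symmetric part PSD).
sym_eigs = np.linalg.eigvalsh(0.5 * (M + M.T))
print("rank(Phi) =", np.linalg.matrix_rank(Phi))
print("min eigenvalue of symmetric part of M:", sym_eigs.min())
```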
A power penalty approach has been proposed for the linear complementarity problem, but not for the Horizontal Linear Complementarity Problem (HLCP), because the coefficient matrix is not positive definite. It is proved, for the first time, that the HLCP is equivalent to a variational inequality problem and to a mixed linear complementarity problem. A power penalty approach is then proposed for the mixed linear...
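The snippet does not give the penalized formulation used for the HLCP. As a hedged sketch of the general power-penalty idea on a standard LCP, one common form replaces the LCP x ≥ 0, Mx + q ≥ 0, xᵀ(Mx + q) = 0 by the nonlinear equation Mx + q − λ[x]₋^{1/k} = 0, with [x]₋ = max(−x, 0) componentwise, whose solution approaches the LCP solution as the penalty parameter λ grows. The sign convention, the choice k = 1 in the demo, and the Newton-type solver below are all illustrative assumptions.

```python
import numpy as np

def power_penalty_lcp(M, q, lam=1e6, k=1, iters=50):
    """Approximate the LCP  x >= 0, Mx + q >= 0, x^T(Mx + q) = 0  by solving the
    penalized equation  Mx + q - lam * max(-x, 0)**(1/k) = 0  with Newton steps.
    (A sketch under assumed sign conventions; for k = 1 the system is piecewise
    linear and the iteration terminates after a few steps.)"""
    n = len(q)
    x = np.zeros(n)
    for _ in range(iters):
        neg = x < 0                              # components with active penalty
        pen = np.zeros(n)
        pen[neg] = (-x[neg]) ** (1.0 / k)
        r = M @ x + q - lam * pen                # residual of the penalized equation
        if np.linalg.norm(r) < 1e-9 * max(1.0, lam):
            break
        dpen = np.zeros(n)
        dpen[neg] = (1.0 / k) * (-x[neg]) ** (1.0 / k - 1.0)
        J = M + lam * np.diag(dpen)              # (generalized) Jacobian
        x = x - np.linalg.solve(J, r)
    return x

# Small monotone example: the exact LCP solution is x* = (1.5, 0).
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
x = power_penalty_lcp(M, q)
print("x_lambda ≈", x, "   Mx + q ≈", M @ x + q)
```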
We consider a generalized proximal point method (GPPA) for solving the nonlinear complementarity problem with monotone operators in R^n. It differs from the classical proximal point method discussed by Rockafellar for the problem of finding zeroes of monotone operators in the use of generalized distances, called φ-divergences, instead of the Euclidean one. These distances play not only a regul...
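A proximal scheme of this flavor can be sketched with the Kullback-Leibler divergence as the generalized distance (one member of the φ-divergence family); the paper's exact divergence, parameters, and convergence analysis are not shown in the snippet, so the subproblem solver and test problem below are illustrative assumptions. Each step solves F(x) + c·log(x/x_prev) = 0 over x > 0, and the substitution x = exp(z) keeps the iterates strictly positive, which is the role such distances play for the NCP.

```python
import numpy as np
from scipy.optimize import fsolve

def gppa_kl_ncp(F, x0, c=1.0, outer_iters=40):
    """Sketch of a proximal point method for NCP(F) that replaces the squared
    Euclidean distance by the Kullback-Leibler divergence
        d(x, y) = sum(x * log(x / y) - x + y).
    Each outer step solves  F(x) + c * log(x / x_prev) = 0  for x > 0, written
    in the variables z = log(x) so iterates stay strictly positive."""
    z = np.log(np.asarray(x0, dtype=float))          # requires x0 > 0
    for _ in range(outer_iters):
        z_prev = z.copy()
        z = fsolve(lambda zz: F(np.exp(zz)) + c * (zz - z_prev), z_prev)
    return np.exp(z)

# Monotone linear example: the solution (1.5, 0) has one component on the boundary.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
F = lambda x: M @ x + q

x = gppa_kl_ncp(F, x0=[1.0, 1.0])
print("x ≈", x, "  F(x) ≈", F(x))
```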
Given an imprecise probabilistic model over a continuous space, computing lower/upper expectations is often computationally hard, even in simple cases. Because expectations are essential in decision making and risk analysis, tractable methods to compute them are crucial in many applications involving imprecise probabilistic models. We concentrate on p-boxes (a simple and popular mode...
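For context only: in the easy special case of a nondecreasing function of a scalar variable (not the general problem this paper addresses), expectation bounds over a discrete p-box follow directly from its two bounding CDFs, since the upper CDF is stochastically smallest and the lower CDF stochastically largest. The grid, CDFs, and function below are made-up illustrative data.

```python
import numpy as np

def expectation_bounds_monotone(xs, cdf_lo, cdf_hi, h):
    """Bounds on E[h(X)] over all distributions inside the discrete p-box
    [cdf_lo, cdf_hi] (cdf_lo <= cdf_hi pointwise on the grid xs), assuming h is
    nondecreasing: the upper CDF yields the lower expectation and vice versa."""
    def expect(cdf):
        pmf = np.diff(np.concatenate(([0.0], cdf)))   # mass at each grid point
        return float(np.sum(pmf * h(xs)))
    return expect(cdf_hi), expect(cdf_lo)             # (lower, upper) expectation

# Toy p-box on {0, 1, 2, 3}: bounds on E[X] itself (h = identity).
xs = np.array([0.0, 1.0, 2.0, 3.0])
cdf_lo = np.array([0.1, 0.3, 0.6, 1.0])   # F_lower <= F_upper, both reaching 1
cdf_hi = np.array([0.3, 0.6, 0.9, 1.0])
print(expectation_bounds_monotone(xs, cdf_lo, cdf_hi, h=lambda x: x))  # (1.2, 2.0)
```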
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^{−αx}), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for ...
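For reference, the integral of the sigmoid 1/(1 + e^{−αx}) is x + (1/α)·log(1 + e^{−αx}) = (1/α)·log(1 + e^{αx}), which overestimates max{x, 0} by at most (log 2)/α. A small numerical check (grid and α values chosen arbitrarily):

```python
import numpy as np

def plus_smooth(x, alpha):
    """Smooth approximation to max(x, 0): the integral of the sigmoid
    1/(1 + exp(-alpha*x)), written via logaddexp for numerical stability.
    The approximation error is at most log(2)/alpha for every x."""
    return np.logaddexp(0.0, alpha * x) / alpha

x = np.linspace(-2, 2, 9)
for alpha in (1.0, 10.0, 100.0):
    err = np.max(np.abs(plus_smooth(x, alpha) - np.maximum(x, 0.0)))
    print(f"alpha = {alpha:6.1f}   max error on grid = {err:.4f}")
```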