Search results for: interior point method

Number of results: 2,080,526

1994
Amal de Silva David Abramson

This paper describes a parallel implementation of the primal-dual interior point method for a special class of large linear programs that occur in stochastic linear programming. The method used by Vanderbei and Carpenter [31] for removing dense columns is modified to eliminate variables that link blocks in stochastic linear programs. The algorithm developed was tested on six test problems from ...
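
For context, the dense-column difficulty arises because each iteration of the primal-dual method factors the normal-equations matrix A D^2 A^T, whose sparsity is destroyed by a handful of dense (or block-linking) columns of A. A common remedy, sketched here in its standard Schur-complement/Sherman-Morrison form rather than the specific variant modified in the paper, splits the columns of A into a sparse part A_s and a dense part A_d:
\[
A D^2 A^T = A_s D_s^2 A_s^T + U U^T, \qquad U = A_d D_d,
\]
\[
\big(A_s D_s^2 A_s^T + U U^T\big)^{-1}
  = M^{-1} - M^{-1} U \,\big(I + U^T M^{-1} U\big)^{-1} U^T M^{-1},
  \qquad M = A_s D_s^2 A_s^T,
\]
so only the sparse matrix M is factored, and the dense or linking columns enter as a low-rank correction.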

Journal: Math. Program. 1996
Stephen J. Wright Yin Zhang

We consider a modification of a path-following infeasible-interior-point algorithm described by Wright. In the new algorithm, we attempt to improve each major iterate by reusing the coefficient matrix factors from the latest step. We show that the modified algorithm has similar theoretical global convergence properties to those of the earlier algorithm, while its asymptotic convergence rate can be ...
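
Schematically, and only as a sketch of the general idea rather than the paper's precise update rule, each major iteration of a path-following method factors the normal-equations matrix to obtain its step, and the modification reuses that factorization once more at the new iterate:
\[
M_k = A D_k^2 A^T = L_k L_k^T, \qquad
L_k L_k^T \, \Delta y_k = r_k \ \ \text{(regular step)}, \qquad
L_k L_k^T \, \widetilde{\Delta y}_{k+1} = r_{k+1} \ \ \text{(cheap extra step reusing } L_k\text{)},
\]
trading one avoided refactorization against an inexact direction at the new point.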

Journal: Math. Program. 2011
Lifeng Chen Donald Goldfarb

We present an interior-point penalty method for nonlinear programming (NLP), where the merit function consists of a piecewise linear penalty function (PLPF) and an ℓ2-penalty function. The PLPF is defined by a set of penalty parameters that correspond to break points of the PLPF and are updated at every iteration. The ℓ2-penalty function, like traditional penalty functions for NLP, is defined b...
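
Schematically, for an NLP of the form min f(x) subject to c(x) = 0 (a sketch of the merit-function structure only; the paper's exact PLPF is not reproduced here), the merit function combines the two penalty terms as
\[
\phi(x) \;=\; f(x) \;+\; P_{\mathrm{PL}}\!\big(\|c(x)\|\big) \;+\; \nu\,\|c(x)\|_2,
\]
where P_PL is a convex, increasing, piecewise linear function whose break points play the role of the penalty parameters updated at each iteration, and ν > 0 weights the ℓ2-penalty term.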

1994
Florian A. Potra

A predictor-corrector method for solving P∗(κ)-matrix linear complementarity problems from infeasible starting points is analyzed. Two matrix factorizations and at most three backsolves are to be computed at each iteration. The computational complexity depends on the quality of the starting points. If the starting points are large enough, then the algorithm has O∗((κ+1)²nL) iteration compl...
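
For reference, the underlying problem and matrix class in their standard form (stated here for context, not quoted from the paper): the linear complementarity problem asks for (x, s) with
\[
s = Mx + q, \qquad x \ge 0,\ s \ge 0,\ x^T s = 0,
\]
and M is a P∗(κ)-matrix if for every x ∈ R^n
\[
(1+4\kappa) \sum_{i \in I_+(x)} x_i (Mx)_i \;+\; \sum_{i \in I_-(x)} x_i (Mx)_i \;\ge\; 0,
\qquad I_\pm(x) = \{\, i : x_i (Mx)_i \gtrless 0 \,\}.
\]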

Journal: Comp. Opt. and Appl. 2008
R. Silva João Soares Luís N. Vicente

In this paper we analyze the rate of local convergence of the Newton primal-dual interior-point method when the iterates are kept strictly feasible with respect to the inequality constraints. It is shown under the classical conditions that the rate is q-quadratic when the functions associated with the binding inequality constraints are concave. In general, the q-quadratic rate is achieved provided...
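
Here the q-quadratic rate has its usual meaning: if z_k denotes the primal-dual iterates and z^* the solution, then
\[
\|z_{k+1} - z^*\| \;\le\; C\,\|z_k - z^*\|^2
\]
for all sufficiently large k and some constant C > 0.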

2009
Ribana Roscher Wolfgang Förstner

Logistic regression has been widely used in classification tasks for many years. Its optimization in the case of linearly separable data has received extensive study because of the problem of a monotone likelihood. This paper presents a new approach, called bounded logistic regression (BLR), which solves logistic regression as a convex optimization problem with constraints. The paper tests the accuracy ...
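
As an illustration of the general idea of treating logistic regression as a constrained convex program, here is a minimal sketch; the 2-norm bound on the weights, its value of 5, and the cvxpy formulation are illustrative assumptions, not the BLR formulation from the paper.

```python
import numpy as np
import cvxpy as cp

# Toy linearly separable data with labels y in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])

w = cp.Variable(2)
b = cp.Variable()

# Plain maximum likelihood minimizes sum_i log(1 + exp(-y_i (x_i^T w + b))).
# On separable data this is unbounded in ||w|| (the monotone-likelihood
# problem), so we bound the weights, which turns the fit into a constrained
# convex problem that an interior-point solver can handle.
margins = cp.multiply(y, X @ w + b)
loss = cp.sum(cp.logistic(-margins))
prob = cp.Problem(cp.Minimize(loss), [cp.norm(w, 2) <= 5.0])
prob.solve()

print("w =", w.value, "b =", b.value)
```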

Journal: Math. Program. 1996
Tao Wang Renato D. C. Monteiro Jong-Shi Pang

We study the problem of solving a constrained system of nonlinear equations by a combination of the classical damped Newton method for (unconstrained) smooth equations and the recent interior point potential reduction methods for linear programs and for linear and nonlinear complementarity problems. In general, constrained equations provide a unified formulation for many mathematical programming probl...
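
The unified formulation alluded to is typically a constrained system of equations (stated generically here; the paper's specific reformulations are not reproduced): find x in a simple closed convex set Ω with H(x) = 0. For example, the nonlinear complementarity problem x ≥ 0, F(x) ≥ 0, x^T F(x) = 0 fits this template as
\[
H(x, s) \;=\; \begin{pmatrix} s - F(x) \\ X s \end{pmatrix} \;=\; 0,
\qquad (x, s) \in \Omega = \mathbb{R}^n_+ \times \mathbb{R}^n_+,
\qquad X = \operatorname{diag}(x),
\]
and an interior-point potential reduction method keeps (x, s) in the interior of Ω while a damped Newton method drives H to zero.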

Journal: Optimization Methods and Software 2005
Sven Leyffer

Equilibrium equations in the form of complementarity conditions often appear as constraints in optimization problems. Problems of this type are commonly referred to as mathematical programs with complementarity constraints (MPCCs). A popular method for solving MPCCs is the penalty interior-point algorithm (PIPA). This paper presents a small example for which PIPA converges to a nonstationary po...
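
For reference, an MPCC is usually written in the standard form (the paper's counterexample itself is not reproduced here)
\[
\min_x \ f(x) \quad \text{s.t.} \quad c(x) \ge 0, \qquad 0 \le G(x) \perp H(x) \ge 0,
\]
where the complementarity constraint means G_i(x) ≥ 0, H_i(x) ≥ 0 and G_i(x) H_i(x) = 0 for every i; such constraints violate standard constraint qualifications at every feasible point, which is what makes interior-point and penalty approaches delicate.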

Journal: SIAM Journal on Optimization 2010
Guoyong Gu Kees Roos

Roos proved that the full-step infeasible algorithm he devised has O(n) worst-case iteration complexity. This complexity bound depends linearly on a parameter κ̄(ζ), which is proved to be less than √(2n). Based on extensive computational evidence (hundreds of thousands of randomly generated problems), Roos conjectured that κ̄(ζ) = 1 (Conjecture 5.1 in the above-mentioned paper), which would yield...
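
Reading the two statements together (an inference from the abstract, not a quoted result): since the iteration bound is linear in κ̄(ζ), a bound of roughly O(κ̄(ζ)·√n·log(n/ε)) gives the proved O(n·log(n/ε)) complexity when κ̄(ζ) ≤ √(2n), while the conjectured value κ̄(ζ) = 1 would bring it down to O(√n·log(n/ε)).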

Journal: Kybernetika 2010
Ladislav Luksan Ctirad Matonoha Jan Vlcek

In this paper, we propose a primal interior-point method for large sparse generalized minimax optimization. After a short introduction, where the problem is stated, we introduce the basic equations of the Newton method applied to the KKT conditions and propose a primal interior-point method. Next we describe the basic algorithm and give more details concerning its implementation covering numeri...
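
As a simple instance of the setting (the plain minimax case with the standard primal barrier reformulation; the paper's generalized minimax form is broader), the problem min_x max_{1≤i≤m} f_i(x) is rewritten with an extra variable z as
\[
\min_{x,\,z} \ z \quad \text{s.t.} \quad f_i(x) \le z, \ \ i = 1, \dots, m,
\]
and a primal interior-point method minimizes the log-barrier function
\[
B_\mu(x, z) \;=\; z \;-\; \mu \sum_{i=1}^{m} \log\!\big(z - f_i(x)\big)
\]
for a decreasing sequence μ → 0, applying Newton's method to the resulting optimality (KKT) conditions.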

Chart of the number of search results per year
