Search results for: global minimizer

Number of results: 449234

Journal: SIAM Journal on Optimization 2015
Xi Yin Zheng Kung Fu Ng

Using techniques of variational analysis and dual techniques for smooth conjugate functions, for a local minimizer x̄ of a proper lower semicontinuous function f on a Banach space, p ∈ (0, +∞) and q = (1+p)/p, we prove that the following two properties are always equivalent: (i) x̄ is a stable q-order minimizer of f and (ii) x̄ is a tilt-stable p-order minimizer of f. We also consider their relation...
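
For orientation, a hedged sketch of the classical first-order notions these properties generalize; the paper's exact q-order and p-order definitions are not reproduced here, and the constants, neighborhoods, and Hölder exponent below are assumptions:

    \text{(growth at } \bar{x}\text{):}\quad \exists\, c,\delta>0:\ \ f(x)\ \ge\ f(\bar{x}) + c\,\|x-\bar{x}\|^{q} \quad \text{for all } x \in B(\bar{x},\delta),

    \text{(tilt stability at } \bar{x}\text{):}\quad v \ \mapsto\ \operatorname{argmin}_{\|x-\bar{x}\|\le\gamma}\ \bigl(f(x) - \langle v,x\rangle\bigr) \ \text{ is single-valued near } v=0,

with a Lipschitz estimate on this argmin map in the classical case; p-order tilt stability presumably replaces it with a Hölder-type estimate of order p.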

Journal: Neural computation 2006
Nicola Ancona Sebastiano Stramaglia

We consider kernel-based learning methods for regression and analyze what happens to the risk minimizer when new variables, statistically independent of input and target variables, are added to the set of input variables. This problem arises, for example, in the detection of causality relations between two time series. We find that the risk minimizer remains unchanged if we constrain the risk m...
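
As a rough numerical illustration of the setting (not the authors' constrained formulation): fit a kernel ridge regressor with and without an extra input that is statistically independent of both the inputs and the target, and compare the fitted values. The RBF kernel, the ridge parameter, and the synthetic data are illustrative assumptions; on a finite sample the change is small rather than exactly zero, which is the gap the paper's constrained risk minimizer addresses.

    import numpy as np

    def rbf_kernel(X, Y, gamma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of X and Y
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_ridge_fit_predict(X, y, lam=1e-2, gamma=1.0):
        # standard kernel ridge regression: alpha = (K + lam*I)^{-1} y
        K = rbf_kernel(X, X, gamma)
        alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
        return K @ alpha          # in-sample predictions

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=200)

    # add a variable independent of both the inputs and the target
    X_aug = np.hstack([X, rng.normal(size=(200, 1))])

    f_plain = kernel_ridge_fit_predict(X, y)
    f_aug = kernel_ridge_fit_predict(X_aug, y)
    print("mean |change| in fitted values:", np.abs(f_plain - f_aug).mean())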

Journal: J. Global Optimization 2014
Alexandre Goldsztejn Ferenc Domes Brice Chevalier

Three rejection tests for multi-objective optimization problems based on first-order optimality conditions are proposed. These tests can certify that a box does not contain any local minimizer, so it can be excluded from the search process. They generalize previously proposed rejection tests in several regards: their scope includes inequality- and equality-constrained smooth or nonsmooth mu...
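
A toy version of the underlying idea for a single smooth unconstrained objective (the paper's tests cover constrained, multi-objective, and nonsmooth cases): if an interval enclosure of the gradient over a box does not contain zero, the box contains no local minimizer and can be rejected. The objective and the crude interval arithmetic below are illustrative assumptions.

    # box = ((lx, ux), (ly, uy)); objective f(x, y) = (x - 1)^2 + (y - 2)^2
    # gradient components: 2*(x - 1) and 2*(y - 2)

    def affine_range(lo, hi, shift, scale=2.0):
        # exact interval enclosure of scale*(t - shift) for t in [lo, hi]
        a, b = scale * (lo - shift), scale * (hi - shift)
        return min(a, b), max(a, b)

    def can_reject(box):
        # reject the box if some gradient component is bounded away from zero on it
        (lx, ux), (ly, uy) = box
        for lo, hi in (affine_range(lx, ux, 1.0), affine_range(ly, uy, 2.0)):
            if lo > 0.0 or hi < 0.0:
                return True       # 0 not in the enclosure: no stationary point inside
        return False

    print(can_reject(((3.0, 4.0), (0.0, 5.0))))   # True: x-component of gradient > 0
    print(can_reject(((0.0, 2.0), (1.0, 3.0))))   # False: box may contain the minimizer (1, 2)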

Journal: Comp. Opt. and Appl. 2014
Xiaojun Chen Weijun Zhou

The iteratively reweighted l1 minimization algorithm (IRL1) has been widely used for variable selection, signal reconstruction and image processing. In this paper, we show that any sequence generated by the IRL1 is bounded and any accumulation point is a stationary point of the l2-lp minimization problem with 0 < p < 1. Moreover, the stationary point is a global minimizer and the convergence ra...
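
A minimal numpy sketch of a generic IRL1 loop for min_x ½||Ax − b||² + λ Σ_i |x_i|^p with 0 < p < 1, in which each weighted l1 subproblem is solved approximately by proximal-gradient (ISTA) steps. The epsilon smoothing in the weights, the iteration counts, and the synthetic data are assumptions and this is not the paper's exact scheme or analysis.

    import numpy as np

    def weighted_l1_prox(z, thresh):
        # soft-thresholding with componentwise thresholds
        return np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)

    def irl1(A, b, lam=0.1, p=0.5, eps=1e-3, outer=20, inner=200):
        m, n = A.shape
        x = np.zeros(n)
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
        for _ in range(outer):
            w = p * (np.abs(x) + eps) ** (p - 1.0)   # reweighting from the previous iterate
            for _ in range(inner):                   # ISTA on the weighted l1 subproblem
                grad = A.T @ (A @ x - b)
                x = weighted_l1_prox(x - grad / L, lam * w / L)
        return x

    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 100))
    x_true = np.zeros(100); x_true[:5] = rng.normal(size=5)
    b = A @ x_true + 0.01 * rng.normal(size=40)
    print(np.count_nonzero(np.abs(irl1(A, b)) > 1e-4), "nonzeros recovered")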

2005
Yufeng LIU Xiaotong SHEN Hani DOSS

Many margin-based binary classification techniques such as the support vector machine (SVM) and ψ-learning deliver high performance. An earlier article proposed a new multicategory ψ-learning methodology that shows great promise in generalization ability. However, ψ-learning is computationally difficult because it requires handling a nonconvex minimization problem. In this article, we propose two co...

2011
Mats Andersson Oleg Burdakov Hans Knutsson Spartak Zikrin

The multilinear least-squares (MLLS) problem is an extension of the linear least-squares problem. The difference is that a multilinear operator is used in place of a matrix-vector product. MLLS is typically a large-scale problem characterized by a large number of local minimizers. It originates, for instance, from the design of filter networks. We present a global search strategy that allows...
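
To make the flavor concrete, a tiny multi-start alternating least-squares sketch for the simplest multilinear instance, the bilinear rank-one fit min ||B − u vᵀ||²_F, where alternating updates can stall at non-global stationary points. The restart count and the data are assumptions, and this is not the global search strategy of the paper.

    import numpy as np

    def als_rank_one(B, iters=100, rng=None):
        # alternating least squares for min ||B - u v^T||_F^2 from a random start
        rng = rng or np.random.default_rng()
        m, n = B.shape
        v = rng.normal(size=n)
        u = np.zeros(m)
        for _ in range(iters):
            u = B @ v / (v @ v)        # optimal u for fixed v
            v = B.T @ u / (u @ u)      # optimal v for fixed u
        return u, v, np.linalg.norm(B - np.outer(u, v)) ** 2

    rng = np.random.default_rng(2)
    B = rng.normal(size=(30, 20))
    # multi-start: keep the best of several random initializations
    best = min((als_rank_one(B, rng=rng) for _ in range(10)), key=lambda t: t[2])
    print("best residual over 10 starts:", best[2])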

2014
Thomas F. Coleman Yuying Li

Many high-dimensional data mining problems can be formulated as minimizing an empirical loss function with a penalty proportional to the number of variables required to describe a model. We propose a graduated non-convexification method to facilitate tracking of a global minimizer of this problem. We prove that under some conditions the proposed regularization problem using the continuous piece...
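
The underlying problem and the generic graduated-nonconvexity (continuation) idea, stated in hedged form since the paper's specific continuous piecewise construction is not reproduced here:

    \min_{x\in\mathbb{R}^n}\ L(x) + \lambda\,\|x\|_0 \qquad\leadsto\qquad \min_{x}\ L(x) + \lambda \sum_{i=1}^{n} \rho_{\sigma}(x_i),

where each \rho_{\sigma} is a continuous surrogate of the indicator of x_i \neq 0 that is convex (or nearly so) for large \sigma and approaches the 0/1 penalty as \sigma \to 0; one solves the sequence for decreasing \sigma, warm-starting each problem at the previous solution, in the hope of tracking a global minimizer.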

2003
Petr Fišer

A novel two-level Boolean minimization method is presented here. In contrast to classical methods, the cover of the on-set is computed first, while no implicants are yet known at this phase. The implicants are then derived from the source terms by an expansion directed by the cover. This allows us to generate group implicants directly, avoiding the time-consuming implicant expansions and reduct...
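
A toy sketch of the expansion step the abstract refers to, for a single term: greedily drop literals as long as the enlarged cube still intersects no off-set minterm (so it remains an implicant of the on-set plus don't cares). The term/minterm representation and the greedy literal order are assumptions and do not reproduce the cover-directed expansion of the paper.

    def covers(term, minterm):
        # a term (partial assignment) covers a minterm if they agree on all fixed literals
        return all(minterm[var] == val for var, val in term.items())

    def expand(term, off_set):
        # greedily remove literals while the expanded cube stays disjoint from the off-set
        term = dict(term)
        for var in list(term):
            trial = {v: b for v, b in term.items() if v != var}
            if not any(covers(trial, m) for m in off_set):
                term = trial
        return term

    # three-variable example: off-set minterms given as full assignments over x, y, z
    off_set = [{"x": 0, "y": 0, "z": 0}, {"x": 0, "y": 1, "z": 0}]
    print(expand({"x": 1, "y": 0, "z": 1}, off_set))   # drops x and y, leaving {'z': 1}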

Journal: Math. Program. 2009
William W. Hager Bernard A. Mair Hongchao Zhang

We develop an affine-scaling algorithm for box-constrained optimization which has the property that each iterate is a scaled cyclic Barzilai–Borwein (CBB) gradient iterate that lies in the interior of the feasible set. Global convergence is established for a nonmonotone line search, while there is local R-linear convergence at a nondegenerate local minimizer where the second-order sufficient op...
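
For reference, the Barzilai-Borwein ingredient named in the abstract, in its standard form (the affine scaling that keeps iterates in the interior of the box is specific to the paper and not reproduced here):

    s_{k-1} = x_k - x_{k-1},\qquad y_{k-1} = \nabla f(x_k) - \nabla f(x_{k-1}),\qquad \alpha_k^{\mathrm{BB}} = \frac{s_{k-1}^{\top} s_{k-1}}{s_{k-1}^{\top} y_{k-1}},

and "cyclic" means that the same BB stepsize is reused for a fixed number of consecutive iterations before being recomputed.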

Journal: J. Comb. Optim. 1998
Minyue Fu Zhi-Quan Luo Yinyu Ye

We consider the problem of approximating the global minimum of a general quadratic program (QP) with n variables subject to m ellipsoidal constraints. For m = 1, we rigorously show that an ε-minimizer, where the error ε ∈ (0, 1), can be obtained in polynomial time, meaning that the number of arithmetic operations is a polynomial in n, m, and log(1/ε). For m ≥ 2, we present a polynomial-time (1 − 1/m²)-a...
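
The problem class written out; the ε-minimizer normalization below (relative to the range of objective values over the feasible set) is a common convention in this approximation literature and is an assumption rather than the paper's exact definition:

    \min_{x\in\mathbb{R}^n}\ q(x) = x^{\top} Q x + c^{\top} x \quad\text{s.t.}\quad (x-a_i)^{\top} B_i (x-a_i) \le 1,\quad i=1,\dots,m,

with each B_i \succeq 0; a feasible \hat{x} is an \epsilon-minimizer if q(\hat{x}) - q_{\min} \le \epsilon\,(q_{\max} - q_{\min}).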
