Search results for: gradient algorithm

Number of results: 859818

Journal: Int. J. Math. Mathematical Sciences, 2011
Abdellatif Moudafi Eman Al-Shemas

This paper is concerned with the study of a penalization-gradient algorithm for solving variational inequalities, namely, find x ∈ C such that 〈Ax, y − x〉 ≥ 0 for all y ∈ C, where A : H → H is a single-valued operator and C is a closed convex subset of a real Hilbert space H. Given Ψ : H → ℝ ∪ {+∞}, which acts as a penalization function with respect to the constraint x ∈ C, and a penalization parameter ...
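A rough numerical sketch of the penalization-gradient idea (not the paper's exact scheme): the operator A(x) = x − b is a hypothetical choice whose variational-inequality solution over the nonnegative orthant C is the projection of b onto C, and the ad-hoc damping factor that keeps the explicit step stable as the penalization parameter grows is an assumption of this sketch.

```python
import numpy as np

# Sketch: x_{k+1} = x_k - lam * (A(x_k) + beta_k * grad_psi(x_k)),
# with psi(x) = 0.5*||min(x, 0)||^2 penalizing violation of x >= 0 and
# beta_k -> infinity. The division by (1 + lam*beta) is an ad-hoc
# damping (an assumption of this sketch) that keeps the step stable.
def penalization_gradient(b, steps=2000, lam=0.1):
    A = lambda x: x - b                       # hypothetical operator
    grad_psi = lambda x: np.minimum(x, 0.0)   # gradient of psi above
    x = np.zeros_like(b)
    for k in range(1, steps + 1):
        beta = float(k)                       # growing penalization
        x = x - lam * (A(x) + beta * grad_psi(x)) / (1.0 + lam * beta)
    return x

x = penalization_gradient(np.array([1.0, -2.0, 3.0]))
# x should approach the projection of b onto the nonnegative orthant
```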

2017
Shuxia Lu Zhao Jin

In order to improve the efficiency and classification ability of support vector machines (SVM) trained by stochastic gradient descent, three improved stochastic gradient descent (SGD) algorithms, namely Momentum, Nesterov accelerated gradient (NAG), and RMSprop, are used to solve the support vector machine. The experimental results show that the algorithm based on RMSprop for solving the l...
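The RMSprop variant mentioned above can be sketched for a linear SVM as follows. This is a minimal illustration, not the paper's experimental setup: it assumes a plain hinge loss with the regularization term omitted and a tiny hand-made dataset.

```python
import numpy as np

# RMSprop-SGD sketch: each step is rescaled per coordinate by a running
# average of squared gradients (assumptions: hinge loss only, no
# regularizer, toy data).
def svm_rmsprop(X, y, epochs=50, lr=0.1, decay=0.9, eps=1e-8):
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1])
    cache = np.zeros_like(w)           # running mean of squared gradients
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            margin = y[i] * (X[i] @ w)
            # hinge-loss subgradient: -y*x on margin violations, else 0
            g = -y[i] * X[i] if margin < 1 else np.zeros_like(w)
            cache = decay * cache + (1 - decay) * g * g
            w -= lr * g / (np.sqrt(cache) + eps)
    return w

# toy linearly separable data; the last column is a bias feature
X = np.array([[1.0, 2.0, 1.0], [2.0, 3.0, 1.0],
              [-1.0, -1.5, 1.0], [-2.0, -1.0, 1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = svm_rmsprop(X, y)
```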

2006
Moez Mrad Sana Ben Hamida

We propose a new method based on evolutionary optimization for obtaining an optimal L-quantizer of a multidimensional random variable. First, we briefly recall the main results about quantization. Then, we present the classical gradient-based approach (detailed in [2] and [7] for p = 2) used up to now to find a “local” optimal L-quantizer. Then, we give an algorithm that per...
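The classical stochastic-gradient approach that the evolutionary method is positioned against can be sketched in one dimension as competitive learning: each sample pulls its nearest codepoint toward it with a decreasing step, which descends the quadratic distortion in expectation. The data and step schedule below are illustrative assumptions.

```python
import numpy as np

# Competitive-learning (stochastic gradient) quantizer sketch, 1-D.
def clvq(samples, codebook, lr0=0.5):
    c = codebook.astype(float).copy()
    for t, x in enumerate(samples, start=1):
        j = np.argmin(np.abs(c - x))     # nearest codepoint wins
        c[j] += (lr0 / t) * (x - c[j])   # decreasing-step update
    return c

samples = np.tile([0.0, 10.0], 200)      # two point masses at 0 and 10
c = clvq(samples, np.array([2.0, 8.0]))
# the codepoints drift toward the cluster locations 0 and 10
```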

Journal: Signal Processing, 2003
C. F. So Sin Chun Ng Shu Hung Leung

The recursive least squares (RLS) algorithm is well known for its good convergence and small mean square error in stationary environments. However, RLS with a constant forgetting factor cannot provide satisfactory performance in time-varying environments. In this paper, three variable forgetting factor (VFF) adaptation schemes for RLS are presented in order to improve the tracking perf...
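The constant-forgetting-factor baseline that the VFF schemes adapt can be sketched as standard RLS. The 2-tap filter identification below is an illustrative assumption, not the paper's experiment.

```python
import numpy as np

# Standard RLS with a constant forgetting factor lam; the VFF schemes
# would instead adapt lam at each step.
def rls(us, ds, order=2, lam=0.99, delta=100.0):
    w = np.zeros(order)
    P = delta * np.eye(order)          # inverse-correlation estimate
    for u, d in zip(us, ds):
        k = P @ u / (lam + u @ P @ u)  # gain vector
        e = d - w @ u                  # a-priori estimation error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w

# identify a 2-tap FIR filter from noiseless data (toy setup)
rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3])
x = rng.standard_normal(300)
us = [np.array([x[n], x[n - 1]]) for n in range(1, 300)]
ds = [u @ w_true for u in us]
w = rls(us, ds)
```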

2008
Jinseog Kim Yuwon Kim Yongdai Kim

LASSO is a useful method for achieving both shrinkage and variable selection simultaneously. The main idea of LASSO is to use an L1 constraint in the regularization step, which has been applied to various models such as wavelets, kernel machines, smoothing splines, and multiclass logistic models. We call such models with the L1 constraint generalized LASSO models. In this paper, we propose a ne...
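For the plain linear-regression case, the shrinkage-plus-selection effect of the L1 constraint can be sketched with coordinate descent and soft-thresholding. This is a standard LASSO solver, not the paper's proposed algorithm; the toy data (only feature 0 relevant) is an assumption.

```python
import numpy as np

# Soft-threshold operator: the proximal map of the L1 penalty.
def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Coordinate descent for min 0.5*||y - Xw||^2 + lam*||w||_1.
def lasso_cd(X, y, lam, iters=200):
    w = np.zeros(X.shape[1])
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding j
            w[j] = soft(X[:, j] @ r, lam) / col_sq[j]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
y = 2.0 * X[:, 0]                            # features 1 and 2 are noise
w = lasso_cd(X, y, lam=5.0)
# w[0] is shrunk toward 2; the noise coefficients are selected out
```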

Journal: SIAM J. Scientific Computing, 2016
Nicole Spillane

This article introduces and analyzes a new adaptive algorithm for solving symmetric positive definite linear systems in cases where several preconditioners are available or the usual preconditioner is a sum of contributions. A new theoretical result makes it possible to select, at each iteration, whether a classical preconditioned CG iteration is sufficient (i.e., the error decreases by a factor of at lea...
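The "classical preconditioned CG iteration" the adaptive algorithm falls back to can be sketched as follows; the Jacobi (diagonal) preconditioner is a simple stand-in chosen for this illustration, not one of the article's preconditioners.

```python
import numpy as np

# Preconditioned conjugate gradient for SPD A, Jacobi preconditioner.
def pcg(A, b, tol=1e-10, maxit=200):
    Minv = 1.0 / np.diag(A)            # diagonal preconditioner inverse
    x = np.zeros_like(b)
    r = b - A @ x                      # residual
    z = Minv * r                       # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # new search direction
        rz = rz_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A = M.T @ M + 20.0 * np.eye(20)        # symmetric positive definite
b = rng.standard_normal(20)
x = pcg(A, b)
```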

Journal: J. Computational Applied Mathematics, 2010
Neculai Andrei

New accelerated nonlinear conjugate gradient algorithms, mainly modifications of Dai and Yuan's method for unconstrained optimization, are proposed. Using an exact line search, the algorithm reduces to the Dai and Yuan conjugate gradient computational scheme. For an inexact line search the algorithm satisfies the sufficient descent condition. Since the step lengths in conjugate gradient algo...
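The underlying Dai-Yuan scheme can be sketched on a convex quadratic, where the exact line-search step has a closed form. This shows only the baseline beta formula, beta = ||g_new||^2 / (d'(g_new − g_old)), not the paper's accelerated modifications.

```python
import numpy as np

# Dai-Yuan CG on f(x) = 0.5 x'Ax - b'x with exact line search.
def dai_yuan_cg(A, b, x0, iters=50):
    x = x0.copy()
    g = A @ x - b                      # gradient of f
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        alpha = -(g @ d) / (d @ A @ d)              # exact step along d
        x += alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (d @ (g_new - g))  # Dai-Yuan formula
        d = -g_new + beta * d
        g = g_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((10, 10))
A = M.T @ M + 10.0 * np.eye(10)        # symmetric positive definite
b = rng.standard_normal(10)
x = dai_yuan_cg(A, b, np.zeros(10))
```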

2007
Batuhan Ulug Stanley C. Ahalt

Vector Quantization (VQ) has its origins in signal processing, where it is used for compact, accurate representation of input signals. However, since VQ induces a partitioning of the input space, it can also be used for statistical pattern recognition. In this paper we present a novel gradient descent VQ classification algorithm that minimizes the Bayes risk, which we refer to as the Generalized...
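A gradient-style VQ classifier can be sketched with an LVQ-like prototype update; this is a hypothetical illustration of the family, not the paper's Bayes-risk-minimizing rule. The winning prototype is pulled toward same-class samples and pushed away from other-class samples.

```python
import numpy as np

# LVQ-style sketch (hypothetical update, not the paper's algorithm).
def lvq_train(X, y, protos, labels, epochs=30, lr=0.1):
    protos = protos.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(((protos - xi) ** 2).sum(axis=1))  # winner
            sign = 1.0 if labels[j] == yi else -1.0
            protos[j] += sign * lr * (xi - protos[j])
    return protos

# two well-separated toy clusters, one prototype per class
X = np.array([[0.0, 0.0], [0.5, 0.3], [0.2, 0.6],
              [5.0, 5.0], [5.3, 4.8], [4.7, 5.2]])
y = np.array([0, 0, 0, 1, 1, 1])
labels = np.array([0, 1])
protos = lvq_train(X, y, np.array([[1.0, 1.0], [4.0, 4.0]]), labels)
pred = np.array([labels[np.argmin(((protos - p) ** 2).sum(axis=1))]
                 for p in X])
```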

2006
Norbert Jankowski

A new method of feature weighting, also useful for feature extraction, is described. It is efficient and gives accurate results. The weighting algorithm may be used with any kind of learning algorithm. Here, the weighting algorithm was combined with a k-nearest neighbors model to estimate the best feature base for a given distance measure. Results obtained with this algorithm clearly show its ...
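One simple way to couple feature weighting with a k-NN model (k = 1 here, an assumption of this sketch) is to score a candidate weight vector by leave-one-out accuracy under the weighted Euclidean distance. The toy data makes feature 0 informative and feature 1 noise.

```python
import numpy as np

# Leave-one-out 1-NN accuracy under a weighted Euclidean metric.
def loo_accuracy(X, y, w):
    Xw = X * np.sqrt(w)              # weighting the metric = rescaling
    correct = 0
    for i in range(len(y)):
        d = ((Xw - Xw[i]) ** 2).sum(axis=1)
        d[i] = np.inf                # leave the query point out
        correct += y[np.argmin(d)] == y[i]
    return correct / len(y)

X = np.array([[0.0, 5.0], [0.1, -3.0], [0.2, 4.0],
              [5.0, 0.0], [5.1, 2.0], [5.2, -1.0]])
y = np.array([0, 0, 0, 1, 1, 1])
acc_informative = loo_accuracy(X, y, np.array([1.0, 0.0]))
acc_noise = loo_accuracy(X, y, np.array([0.0, 1.0]))
```

A weighting algorithm would then search over such weight vectors, keeping the ones that score best for the chosen distance measure.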

2013
Y. H. Hu C. Li X. Q. Yang

In this paper, we propose a proximal gradient algorithm for solving a general nonconvex and nonsmooth optimization model: minimizing the sum of a C^{1,1} function and a grouped separable lower semicontinuous (lsc) function. This model includes group sparse optimization via ℓ_{p,q} regularization as a special case. Our algorithmic scheme presents a unified framework for several well-known iterative thresholding ...
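The proximal gradient pattern can be sketched on the convex ℓ_{2,1} (group LASSO) instance of this family: a gradient step on the smooth least-squares term followed by a group soft-threshold, which is the proximal map of the grouped penalty. The toy data (only the first group relevant) is an assumption of this sketch.

```python
import numpy as np

# Proximal gradient for min 0.5*||Xw - y||^2 + lam * sum_g ||w_g||_2.
def group_prox_grad(X, y, groups, lam, iters=300):
    w = np.zeros(X.shape[1])
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of grad
    for _ in range(iters):
        w = w - X.T @ (X @ w - y) / L        # gradient step
        for g in groups:                      # proximal step per group
            nrm = np.linalg.norm(w[g])
            w[g] = 0.0 if nrm <= lam / L else (1 - lam / (L * nrm)) * w[g]
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
y = X[:, 0] - X[:, 1]                        # group [2, 3] is irrelevant
w = group_prox_grad(X, y, groups=[[0, 1], [2, 3]], lam=10.0)
# the irrelevant group is thresholded exactly to zero
```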

[Chart: number of search results per year]