Search results for: gradient descent algorithm

Number of results: 869,527

Journal: JNW, 2014
Daming Gu

To solve the traditional network optimization problem, the dual gradient descent algorithm is adopted. Although the algorithm can be realized in a distributed manner, its convergence rate is slow. The accelerated dual descent algorithm mainly uses a distributed computation of the approximate Newton step to improve the convergence rate. Due to the uncertainty characteristic of communic...
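The dual gradient method described above can be sketched on a toy equality-constrained problem. The function name, problem instance, and step size below are illustrative assumptions, not taken from the paper:

```python
# Dual gradient ascent for: minimize 0.5*||x||^2  subject to a . x = b.
# Toy instance; 'a', 'b', and 'alpha' are illustrative choices.

def dual_gradient_descent(a, b, alpha=0.1, iters=500):
    lam = 0.0                      # dual variable (Lagrange multiplier)
    for _ in range(iters):
        # Primal minimizer of the Lagrangian in closed form: x(lam) = -lam * a
        x = [-lam * ai for ai in a]
        # Dual gradient is just the constraint residual a . x - b
        grad = sum(ai * xi for ai, xi in zip(a, x)) - b
        lam += alpha * grad        # ascent step on the concave dual
    return x, lam

x, lam = dual_gradient_descent([1.0, 2.0], 3.0)
# x approaches the projection (b/||a||^2) * a = [0.6, 1.2]
```

Because the dual gradient is a constraint residual, each node in a network can evaluate its own term locally, which is why the method distributes; the accelerated variant replaces the fixed step with an approximate Newton step on the dual.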

2012
Ying XIAO

The cost function of the constant modulus algorithm (CMA) is simplified into a second-norm form, so the blind equalizer can use the recursive least squares (RLS) algorithm to update the weights. Since the underwater acoustic channel is usually nonlinear, a decision feedback equalizer is used as the blind equalizer. According to the simplified cost function of CMA, the weights of the forward part and ...
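As a hedged illustration of the constant-modulus idea (a plain stochastic-gradient CMA update, not the paper's RLS/decision-feedback scheme), here is a minimal linear equalizer; the channel, step size, and tap count are toy assumptions:

```python
import random

# Stochastic-gradient CMA update for a linear (FIR) blind equalizer.
# Toy single-gain channel; 'mu' and the data are illustrative.

R = 1.0  # constant-modulus target (unit-modulus source such as BPSK)

def cma_equalize(received, n_taps=1, mu=0.01, init=0.5):
    w = [init] * n_taps
    buf = [0.0] * n_taps
    for x in received:
        buf = [x] + buf[:-1]                          # slide the tap-delay line
        y = sum(wi * bi for wi, bi in zip(w, buf))    # equalizer output
        e = (y * y - R) * y                           # CMA error: (|y|^2 - R) * y
        w = [wi - mu * e * bi for wi, bi in zip(w, buf)]
    return w

random.seed(0)
source = [random.choice([-1.0, 1.0]) for _ in range(5000)]
received = [0.5 * s for s in source]   # channel just scales by 0.5
w = cma_equalize(received)
# w[0] approaches +/-2, so that |w[0] * 0.5 * s| = 1
```

Note that CMA never sees the transmitted symbols: the update penalizes only the deviation of the output modulus from R, which is what makes the equalizer "blind".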

Based on an eigenvalue analysis, a new proof for the sufficient descent property of the modified Polak-Ribière-Polyak conjugate gradient method proposed by Yu et al. is presented.

2017

Momentum-based stochastic gradient methods such as the heavy ball (HB) method and Nesterov's accelerated gradient (NAG) method are widely used in practice for training deep networks and other supervised learning models, as they often provide significant improvements over stochastic gradient descent (SGD). In general, "fast gradient" methods have provable improvements over gradient descent only for...
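The three update rules contrasted above can be sketched deterministically on a one-dimensional quadratic; the step size and momentum values below are illustrative, not tuned:

```python
# SGD, heavy ball (HB), and Nesterov (NAG) updates on f(x) = 0.5 * x^2.
# Deterministic sketch; 'lr' and 'beta' are illustrative choices.

def grad(x):
    return x                      # gradient of f(x) = 0.5 * x^2

def run(method, x0=10.0, lr=0.1, beta=0.9, iters=100):
    x, v = x0, 0.0
    for _ in range(iters):
        if method == "sgd":
            x -= lr * grad(x)
        elif method == "hb":      # heavy ball: momentum on past gradients
            v = beta * v - lr * grad(x)
            x += v
        elif method == "nag":     # Nesterov: gradient at the look-ahead point
            v = beta * v - lr * grad(x + beta * v)
            x += v
    return x

for m in ("sgd", "hb", "nag"):
    print(m, abs(run(m)))
```

The only difference between HB and NAG is where the gradient is evaluated: HB uses the current iterate, NAG the look-ahead point `x + beta * v`, which is what yields NAG's stronger guarantees on smooth convex problems.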

2013
Lijun Zhang, Mehrdad Mahdavi, Rong Jin

For smooth and strongly convex optimization, the optimal iteration complexity of gradient-based algorithms is O(√κ log(1/ε)), where κ is the condition number. When the optimization problem is ill-conditioned, we need to evaluate a large number of full gradients, which can be computationally expensive. In this paper, we propose to remove the dependence on the condition number ...
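A minimal sketch of why the condition number matters for plain gradient descent, on a two-dimensional quadratic (the function, tolerance, and starting point are illustrative):

```python
# Gradient descent on f(x) = 0.5*(kappa*x1^2 + x2^2): condition number = kappa.
# With step 1/L = 1/kappa, the slow coordinate contracts by (1 - 1/kappa)
# per step, so reaching accuracy eps takes O(kappa * log(1/eps)) iterations.

def gd_iters_to_tol(kappa, tol=1e-3, x=(1.0, 1.0)):
    x1, x2 = x
    lr = 1.0 / kappa              # 1/L for this quadratic
    iters = 0
    while max(abs(x1), abs(x2)) > tol:
        x1 -= lr * kappa * x1     # gradient component: kappa * x1
        x2 -= lr * x2             # gradient component: x2
        iters += 1
    return iters

print(gd_iters_to_tol(10))   # roughly 10 * ln(1000)
print(gd_iters_to_tol(100))  # roughly 100 * ln(1000): 10x more work
```

Multiplying κ by 10 multiplies the iteration count by about 10, which is the linear dependence on κ that accelerated methods improve to √κ.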

2014
Bhavna Sharma, K. Venugopalan

Classification is one of the most important tasks in application areas of artificial neural networks (ANNs). Training neural networks is a complex task in the supervised learning field of research. The main difficulty in adopting ANNs is finding the most appropriate combination of learning, transfer, and training functions for the classification task. We compared the performances of three types of tr...

2011
P. Moallem, S. A. Ayoughi

In feedforward neural networks, saturation of hidden-layer neurons, which causes flat spots on the error surface, is one of the main disadvantages of any conventional gradient descent learning algorithm. In this paper, we propose a novel complementary scheme for learning based on a suitable combination of an anti-saturation learning process for hidden neurons and accelerating me...
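The flat-spot phenomenon is easy to see numerically: a saturated sigmoid has a near-zero derivative, so the gradient-descent weight update stalls. The sketch below shows one classic anti-saturation remedy (adding a small constant to the derivative, as in Fahlman's trick); it is an illustration, not the paper's proposed scheme:

```python
import math

# Sigmoid saturation: for large |z| the derivative s*(1-s) vanishes, so the
# backpropagated error (and hence the weight update) nearly stops.
# 'flat_spot_eps' is an illustrative anti-saturation constant.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sigmoid_deriv(z, flat_spot_eps=0.0):
    s = sigmoid(z)
    return s * (1.0 - s) + flat_spot_eps

z = 10.0                                     # a saturated hidden neuron
plain = sigmoid_deriv(z)                     # about 4.5e-5: learning stalls
fixed = sigmoid_deriv(z, flat_spot_eps=0.1)  # stays bounded away from zero
```

Since every weight update into the neuron is scaled by this derivative, keeping it bounded away from zero lets the neuron escape the flat region instead of freezing there.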

2009
Mehdi Ghayoumi

We have applied a new accelerated algorithm for linear discriminant analysis (LDA) in face recognition with a support vector machine. The new algorithm has the advantage of optimal selection of the step size. The gradient descent method and the new algorithm have been implemented in software and evaluated on the Yale Face Database B. The eigenfaces of these approaches have been used to train a KNN. R...
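Optimal step-size selection has a closed form on a quadratic: steepest descent with exact line search, where the best step along the gradient g is (gᵀg)/(gᵀAg). This is a hedged sketch on a toy problem, not the paper's LDA objective:

```python
# Steepest descent with exact line search on f(x) = 0.5*x^T A x - b^T x.
# The optimal step along -g is alpha = (g^T g)/(g^T A g).
# A must be symmetric positive definite; this instance is illustrative.

def gd_exact_line_search(A, b, iters=50):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # gradient g = A x - b
        g = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]
        gg = sum(gi * gi for gi in g)
        if gg == 0.0:
            break                              # exact minimizer reached
        Ag = [sum(A[i][j] * g[j] for j in range(n)) for i in range(n)]
        alpha = gg / sum(gi * agi for gi, agi in zip(g, Ag))
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

x = gd_exact_line_search([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
# solves A x = b; exact solution is [1/11, 7/11]
```

The closed-form alpha removes step-size tuning entirely, at the cost of one extra matrix-vector product per iteration.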

2003
Qi Li, Biing-Hwang Juang

This paper presents a fast discriminative training algorithm for sequences of observations. It treats a sequence of feature vectors as a single composite token in training or testing. In contrast to the traditional EM algorithm, this algorithm is derived from a discriminative objective, aiming at directly minimizing the recognition error. Compared to the gradient-descent algorithms for dis...

Journal: SIAM Journal on Optimization, 2013
Ankan Saha, Ambuj Tewari

Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in Signal Processing, Statistics and Machine Learning. Reasons for this renewed interest include the simplicity, speed, and stability of the method as well as its competitive performance on ℓ1-regularized smooth optimization problems. Surprisingly, very little is known about its non-asymptotic...
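For a smooth quadratic, one cycle of coordinate descent is exactly a Gauss-Seidel sweep, since each one-coordinate subproblem has a closed-form minimizer. A minimal sketch on an illustrative instance:

```python
# Cyclic coordinate descent on f(x) = 0.5*x^T A x - b^T x, minimizing
# exactly over one coordinate at a time (a Gauss-Seidel-style sweep).
# A must be symmetric positive definite; this toy instance is illustrative.

def cyclic_cd(A, b, sweeps=100):
    n = len(b)
    x = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):                 # one full cycle over coordinates
            # exact minimizer in coordinate i, holding the others fixed
            r = b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = r / A[i][i]
    return x

A = [[3.0, 1.0], [1.0, 2.0]]
b = [1.0, 1.0]
x = cyclic_cd(A, b)        # solves A x = b; exact solution is [0.2, 0.4]
```

Each inner step costs only one row of A, which is the simplicity and speed the abstract alludes to; for ℓ1-regularized problems the per-coordinate minimizer becomes a soft-thresholding step instead of a plain division.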
