Improving the Convergence of the Backpropagation Algorithm Using Learning Rate Adaptation Methods

Authors

  • George D. Magoulas
  • Michael N. Vrahatis
  • George S. Androulakis
Abstract

This article focuses on gradient-based backpropagation algorithms that use either a common adaptive learning rate for all weights or an individual adaptive learning rate for each weight and apply the Goldstein/Armijo line search. The learning-rate adaptation is based on descent techniques and estimates of the local Lipschitz constant that are obtained without additional error function and gradient evaluations. The proposed algorithms improve the backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. Simulations are conducted to compare and evaluate the convergence behavior of these gradient-based training algorithms with several popular training methods.
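The abstract's key device, a learning rate derived from a local Lipschitz-constant estimate that costs no extra error-function or gradient evaluations, can be sketched compactly. A common estimate consistent with this description uses only the last two iterates, L_k = ||g_k - g_{k-1}|| / ||w_k - w_{k-1}||, with learning rate eta_k = 1/(2 L_k). The Python sketch below illustrates the common-learning-rate case; the function and parameter names are ours, and the Goldstein/Armijo line-search safeguard is omitted:

```python
import numpy as np

def lipschitz_adaptive_step(w, grad, w_prev, grad_prev, eta_max=1.0):
    """One descent step with a learning rate set from a local
    Lipschitz-constant estimate (hypothetical helper; names are ours).

    The local Lipschitz constant of the gradient is estimated from the
    two most recent iterates, so no additional error-function or
    gradient evaluations are needed:
        L_k = ||g_k - g_{k-1}|| / ||w_k - w_{k-1}||,  eta_k = 1/(2 L_k)
    """
    diff_w = np.linalg.norm(w - w_prev)
    diff_g = np.linalg.norm(grad - grad_prev)
    if diff_w == 0.0 or diff_g == 0.0:
        eta = eta_max                      # no curvature information yet
    else:
        eta = min(1.0 / (2.0 * diff_g / diff_w), eta_max)
    return w - eta * grad
```

A per-weight variant would apply the same estimate coordinate-wise, giving each weight its own adaptive rate.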

Related articles

A Novel Fast Backpropagation Learning Algorithm Using Parallel Tangent and Heuristic Line Search

In gradient-based learning algorithms, momentum usually improves the convergence rate and reduces the zigzagging phenomenon; however, it sometimes causes the convergence rate to decrease. The parallel tangent (ParTan) gradient is used as a deflecting method to improve convergence. From the implementation point of view, it is as simple as momentum. In fact, this method is...
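As a rough illustration of the deflection idea, the sketch below alternates ordinary gradient steps with an acceleration step along the direction from the iterate two steps back to the current one. The fixed step sizes stand in for the heuristic line search the title refers to, and all names are ours:

```python
import numpy as np

def partan_descent(grad_fn, w0, eta=0.1, accel=0.5, iters=100):
    """Gradient descent with parallel-tangent (ParTan) deflection
    (sketch only). After each gradient step, take an extra step along
    the direction from the iterate two steps back to the current one;
    practical ParTan would choose eta and accel by line search."""
    w_prev2 = w0.copy()
    w = w0 - eta * grad_fn(w0)                 # first plain gradient step
    for _ in range(iters):
        w_new = w - eta * grad_fn(w)           # ordinary gradient step
        w_new = w_new + accel * (w_new - w_prev2)  # ParTan acceleration
        w_prev2, w = w, w_new
    return w
```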

Improving the Convergence of the Backpropagation Algorithm Using Local Adaptive Techniques

Since the presentation of the backpropagation algorithm, a vast variety of improvements to the technique for training feedforward neural networks have been proposed. This article focuses on two classes of acceleration techniques; one is known as local adaptive techniques, which are based on weight-specific information only, such as the temporal behavior of the partial derivative of the current weight. The ...
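A representative local adaptive technique keeps one step size per weight and adapts it from the temporal behavior of the corresponding partial derivative: grow the step while the derivative keeps its sign, shrink it on a sign change. The Rprop-style sketch below is our choice of illustration; the truncated abstract does not name a specific rule:

```python
import numpy as np

def sign_adaptive_update(w, grad, grad_prev, step,
                         up=1.2, down=0.5,
                         step_min=1e-6, step_max=1.0):
    """Per-weight step-size adaptation driven by the temporal behavior
    of each partial derivative (Rprop-style sketch): grow the step
    while the sign of dE/dw persists, shrink it on a sign change."""
    same_sign = grad * grad_prev > 0
    opposite = grad * grad_prev < 0
    step = np.where(same_sign, np.minimum(step * up, step_max), step)
    step = np.where(opposite, np.maximum(step * down, step_min), step)
    w = w - np.sign(grad) * step      # move against the gradient sign
    return w, step
```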

An Improved Backpropagation Method with Adaptive Learning Rate

A method improving the convergence rate of the backpropagation algorithm is proposed. This method adapts the learning rate using the Barzilai and Borwein [IMA J. Numer. Anal., 8, 141–148, 1988] steplength update for gradient descent methods. The learning rate is different for each epoch and depends on the weights and gradient values of the previous one. Experimental results show that ...
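The Barzilai-Borwein steplength itself is standard: with s = w_k - w_{k-1} and y = g_k - g_{k-1}, it sets eta_k = (s^T s) / (s^T y), using only quantities already available from the previous epoch. A minimal sketch, with a helper name and fallback value of our choosing:

```python
import numpy as np

def barzilai_borwein_eta(w, grad, w_prev, grad_prev, eta_fallback=0.01):
    """Barzilai-Borwein steplength for gradient descent (sketch).

    eta_k = (s^T s) / (s^T y), where s = w_k - w_{k-1} and
    y = g_k - g_{k-1}; no extra error-function or gradient
    evaluations are required beyond those of the previous epoch.
    """
    s = w - w_prev
    y = grad - grad_prev
    sy = float(s @ y)
    if sy <= 0.0:                 # guard against non-positive curvature
        return eta_fallback
    return float(s @ s) / sy
```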

New Learning Automata Based Algorithms for Adaptation of Backpropagation Algorithm Parameters

One popular learning algorithm for feedforward neural networks is the backpropagation (BP) algorithm, which includes the parameters learning rate (eta), momentum factor (alpha), and steepness parameter (lambda). Appropriate selection of these parameters has a large effect on the convergence of the algorithm. Many techniques that adaptively adjust these parameters have been developed to increase...
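One way such a scheme can look, purely as a hedged sketch since the truncated abstract does not specify the automaton, is a linear reward-inaction automaton whose actions rescale the learning rate and whose action probabilities are reinforced whenever the training error drops (all names and constants below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

class EtaAutomaton:
    """Variable-structure learning automaton (linear reward-inaction
    scheme, sketch only). Each action multiplies the learning rate by
    a factor; actions that reduce the training error are reinforced.
    Usage per epoch: eta = automaton.choose(eta); train one epoch;
    automaton.update(error_decreased)."""

    def __init__(self, factors=(0.5, 1.0, 2.0), reward_rate=0.1):
        self.factors = factors
        self.p = np.full(len(factors), 1.0 / len(factors))  # action probs
        self.a = reward_rate
        self.last = None

    def choose(self, eta):
        self.last = rng.choice(len(self.factors), p=self.p)
        return eta * self.factors[self.last]

    def update(self, error_decreased):
        if error_decreased:        # L_R-I: reinforce only on reward
            for i in range(len(self.p)):
                if i == self.last:
                    self.p[i] += self.a * (1.0 - self.p[i])
                else:
                    self.p[i] *= 1.0 - self.a
```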

Accelerating Backpropagation through Dynamic Self-adaptation

Standard backpropagation and many procedures derived from it use the steepest-descent method to minimize a cost function. In this paper, we present a new algorithm, dynamic self-adaptation, to accelerate steepest descent as it is used in iterative procedures. The underlying idea is to take the learning rate of the previous step, to increase and decrease it slightly, to evaluate the cost...
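The described procedure translates directly into code: perturb the previous learning rate up and down by a factor, evaluate the cost at both trial points, and keep whichever rate did better. A minimal sketch, with names and the factor value assumed by us:

```python
import numpy as np

def self_adaptive_step(cost_fn, grad_fn, w, eta, xi=1.7):
    """Dynamic self-adaptation (sketch): try the previous learning
    rate scaled up and down by xi, evaluate the cost at both trial
    points, and keep the rate that yields the lower cost."""
    g = grad_fn(w)
    eta_up, eta_down = eta * xi, eta / xi
    w_up, w_down = w - eta_up * g, w - eta_down * g
    if cost_fn(w_up) < cost_fn(w_down):
        return w_up, eta_up
    return w_down, eta_down
```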


Journal:
  • Neural Computation

Volume 11, Issue 7

Pages: -

Publication date: 1999