Speeding up the Training of Lattice–Ladder Multilayer Perceptrons

Author

  • Dalius Navakauskas
Abstract

A lattice–ladder multilayer perceptron (LLMLP) is an appealing structure for advanced signal processing in the sense that it is nonlinear, possesses an infinite impulse response, and its stability is simple to monitor during training. However, even a moderate implementation of LLMLP training is hindered by the fact that a lot of storage and computational power must be allocated. In this paper we deal with the problem of the computational efficiency of LLMLP training algorithms that are based on the computation of gradients, e.g., backpropagation, conjugate gradient or Levenberg-Marquardt. The paper aims to explore the most computationally demanding calculation: the computation of gradients for the lattice (rotation) parameters. We find, and propose to use for the training of several LLMLP architectures, the exact-gradient computation that is simplest in terms of storage and the number of delay elements, assuming that the coefficients of the lattice–ladder filter are held stationary.
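To make the structure under discussion concrete, here is a minimal sketch of one common normalized lattice–ladder IIR filter formulation, in which each lattice stage applies a plane rotation by an angle theta_j and the output is a weighted sum of the backward signals. The function name, the rotation-angle parameterization and the ladder weights `v` are illustrative assumptions, not the paper's exact notation.

```python
import numpy as np

def lattice_ladder_filter(x, theta, v):
    """Order-M normalized lattice-ladder IIR filter (illustrative sketch).

    theta : rotation (lattice) parameters, shape (M,)   -- assumed notation
    v     : ladder (tap) weights, shape (M + 1,)        -- assumed notation
    """
    M = len(theta)
    b = np.zeros(M + 1)                  # delayed backward signals b_j(n-1)
    y = np.zeros(len(x))
    for n, x_n in enumerate(x):
        f = x_n                          # forward signal enters at stage M
        b_new = np.zeros(M + 1)
        for j in range(M, 0, -1):
            c, s = np.cos(theta[j - 1]), np.sin(theta[j - 1])
            # plane rotation at stage j:
            # [f_{j-1}(n); b_j(n)] = R(theta_j) [f_j(n); b_{j-1}(n-1)]
            f, b_new[j] = c * f - s * b[j - 1], s * f + c * b[j - 1]
        b_new[0] = f                     # b_0(n) = f_0(n) closes the recursion
        b = b_new                        # becomes the delayed state next sample
        y[n] = v @ b                     # ladder part: tap the backward signals
    return y
```

Because the lattice parameters enter only through sines and cosines, the implied reflection coefficients sin(theta_j) are automatically bounded by one, which is one reason stability monitoring during training is simple, as the abstract notes. Gradients with respect to each theta_j must be propagated through this same recursion, which is what makes them the most expensive part of gradient-based training.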


Similar resources

Quick Training Algorithm for Extra Reduced Size Lattice-Ladder Multilayer Perceptrons

A quick gradient training algorithm for a specific neural network structure called an extra reduced size lattice-ladder multilayer perceptron is introduced. The presented derivation of the algorithm utilizes the simplest way, recently found by the author, of exactly computing the gradients for the rotation parameters of the lattice-ladder filter. The developed neural network training algorithm is optimal in terms of mini...



Comparing Hybrid Systems to Design and Optimize Artificial Neural Networks

In this paper we conduct a comparative study of hybrid methods for optimizing multilayer perceptrons: a model that optimizes the architecture and initial weights of multilayer perceptrons; a parallel approach to optimize the architecture and initial weights of multilayer perceptrons; a method that searches for the parameters of the training algorithm; and an approach for cooperative co-evolut...


The Effect of Training Set Size for the Performance of Neural Networks of Classification

Even though multilayer perceptrons and radial basis function networks both belong to the class of artificial neural networks and are used for similar tasks, they have very different structures and training mechanisms. Consequently, some researchers have reported better performance with radial basis function networks, while others have reported different results with multilayer perceptrons. This paper compares the...


Improve an Efficiency of Feedforward Multilayer Perceptrons by Serial Training

The feedforward multilayer perceptron network is a widely used artificial neural network model, trained with the backpropagation algorithm on real-world data. There are two common ways to construct a feedforward multilayer perceptron network: either taking a large network and then pruning away the irrelevant nodes, or starting from a small network and then adding new relevant nodes (the constructive variant is sketched below). An Arti...
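The constructive alternative mentioned in the snippet above (start from a small network and add relevant nodes) can be sketched as a simple growth loop. The helpers `train_fn` and `val_error_fn`, and the plateau stopping rule, are hypothetical placeholders rather than the cited paper's method.

```python
def grow_mlp(train_fn, val_error_fn, max_hidden=50, patience=3):
    """Constructive 'start small, add nodes' strategy -- illustrative only.

    train_fn(h)     : hypothetical helper that trains an MLP with h hidden
                      units (e.g., by backpropagation) and returns the model
    val_error_fn(m) : hypothetical helper returning the validation error of m
    """
    best_err, best_model, stale = float("inf"), None, 0
    for h in range(1, max_hidden + 1):   # add one hidden node at a time
        model = train_fn(h)
        err = val_error_fn(model)
        if err < best_err:               # the new node helped: keep growing
            best_err, best_model, stale = err, model, 0
        else:                            # no improvement
            stale += 1
            if stale >= patience:        # stop once the error plateaus
                break
    return best_model
```

The pruning alternative runs in the opposite direction: start from a large network and repeatedly remove the node whose removal least degrades the validation error, stopping when any further removal hurts.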



Journal title:

Volume   Issue

Pages  -

Publication date: 2002