Convergence of Batch BP Algorithm with Penalty for FNN Training

Authors

  • Wei Wu
  • Hongmei Shao
  • Zhengxue Li
Abstract

Penalty methods are commonly used to improve the generalization performance of feedforward neural networks and to control the magnitude of the network weights. Weight boundedness and convergence results are presented for the batch BP algorithm with a penalty term for training feedforward neural networks with one hidden layer. A key point of the proofs is the monotonicity of the penalized error function during training. A relationship between the learning rate and the penalty parameter is proposed to guarantee convergence. The algorithm is applied to two classification problems to support our theoretical findings.
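
To fix ideas, below is a minimal sketch (not the authors' code) of batch BP with an L2-type penalty for a one-hidden-layer network: gradients are accumulated over the whole training set and the weights are updated once per pass, with the penalty gradient 2λW added. The sigmoid activation, the function names, and the values of `eta` and `lam` are illustrative assumptions; in the paper, convergence hinges on a specific relationship between the learning rate and the penalty parameter, which this sketch does not enforce.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_batch_bp_penalty(X, y, hidden=8, eta=0.1, lam=1e-4, epochs=1000, seed=0):
    """Batch BP minimizing E(W) = 0.5*||out - y||^2 + lam*(||V||^2 + ||w||^2)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V = rng.normal(scale=0.1, size=(d, hidden))   # input-to-hidden weights
    w = rng.normal(scale=0.1, size=hidden)        # hidden-to-output weights
    for _ in range(epochs):
        # Forward pass over the entire training set (batch mode).
        H = sigmoid(X @ V)             # hidden activations, shape (n, hidden)
        out = sigmoid(H @ w)           # network outputs, shape (n,)
        err = out - y
        # Backward pass: gradients of the penalized error function.
        delta_out = err * out * (1.0 - out)                # shape (n,)
        grad_w = H.T @ delta_out + 2.0 * lam * w
        delta_h = np.outer(delta_out, w) * H * (1.0 - H)   # shape (n, hidden)
        grad_V = X.T @ delta_h + 2.0 * lam * V
        # One update per epoch, after accumulating over all samples.
        w -= eta * grad_w
        V -= eta * grad_V
    return V, w
```

The single update per pass is what distinguishes batch BP from online BP, and it is what makes the monotonicity argument for the penalized error function tractable.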


Similar resources

A Modified Grey Wolf Optimizer by Individual Best Memory and Penalty Factor for Sonar and Radar Dataset Classification

Meta-heuristic algorithms (MAs) have been widely accepted in recent decades as effective ways to solve a variety of optimization problems. Grey Wolf Optimization (GWO) is a novel MA that has generated a great deal of research interest due to advantages such as simple implementation and powerful exploitation. This study proposes a novel GWO-based MA and two extra fea...
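
The snippet above is truncated, so only the baseline update can be illustrated. Below is a minimal sketch of standard GWO for a generic minimization objective; it omits the paper's individual-best-memory and penalty-factor modifications, and all parameter values are illustrative.

```python
import numpy as np

def gwo(objective, dim=5, wolves=20, iters=100, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(wolves, dim))
    for t in range(iters):
        fitness = np.array([objective(x) for x in X])
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / iters)          # exploration coefficient, decays 2 -> 0
        for i in range(wolves):
            new = np.zeros(dim)
            # Each wolf is pulled toward the three best wolves (alpha, beta, delta).
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                D = np.abs(C * leader - X[i])
                new += leader - A * D
            X[i] = np.clip(new / 3.0, lb, ub)  # average of the three pulls
    best = min(X, key=objective)
    return best, objective(best)

# Example: minimize the sphere function.
best, val = gwo(lambda x: float(np.sum(x * x)))
```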


Convergence Analysis of Batch Normalization for Deep Neural Nets

Batch normalization (BN) is so effective in accelerating the convergence of neural network training that it has become common practice. We propose a generalization of BN, the diminishing batch normalization (DBN) algorithm, and provide a convergence analysis showing that DBN converges to a stationary point with respect to the trainable parameters. We analyze a two-layer ...
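
For reference, here is a minimal sketch of the standard BN forward transform that DBN generalizes; the diminishing-moment schedule of DBN itself is not shown, and `gamma`/`beta` are the usual learnable scale and shift parameters.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize a batch of activations, then rescale and shift.

    x:     activations for one layer, shape (batch, features)
    gamma: learnable scale, shape (features,)
    beta:  learnable shift, shape (features,)
    """
    mean = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # zero-mean, unit-variance
    return gamma * x_hat + beta
```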


A Hybrid Differential Evolution and Back-Propagation Algorithm for Feedforward Neural Network Training

In this study, a hybrid differential evolution-back-propagation algorithm is proposed to optimize the weights of a feedforward neural network. The hybrid algorithm achieves faster convergence with higher accuracy. The proposed hybrid, combining differential evolution (DE) with the back-propagation (BP) algorithm, is referred to as the DE-BP algorithm and trains the weights of the feed-forward...
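
Below is a minimal sketch of one DE/rand/1/bin generation, the kind of global-search step such a DE-BP hybrid could alternate with BP updates; `F` and `CR` are the standard DE control parameters, and nothing here is taken from the paper itself.

```python
import numpy as np

def de_step(pop, fitness, objective, F=0.5, CR=0.9, rng=None):
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    for i in range(n):
        # Mutation: combine three distinct random individuals (none equal to i).
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        # Binomial crossover: mix mutant and target coordinate-wise.
        mask = rng.random(dim) < CR
        mask[rng.integers(dim)] = True     # ensure at least one mutant gene
        trial = np.where(mask, mutant, pop[i])
        # Greedy selection: keep the trial only if it is at least as good.
        f_trial = objective(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial
    return pop, fitness
```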


An Improved Hybrid Algorithm Based on PSO and BP for Feedforward Neural Networks

In this paper, an improved hybrid algorithm combining particle swarm optimization (PSO) with the back-propagation (BP) algorithm is proposed to train feedforward neural networks (FNNs). PSO is a global search algorithm, but its swarm easily loses diversity, which results in premature convergence. On the other hand, the BP algorithm is a gradient-descent-based method which has good local sea...
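
For illustration, here is a minimal sketch of the canonical PSO velocity-and-position update that such a hybrid could use for its global-search phase; `w`, `c1`, and `c2` are the usual inertia and acceleration coefficients, with illustrative values rather than values from the paper.

```python
import numpy as np

def pso_step(pos, vel, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    rng = rng or np.random.default_rng()
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    # Velocity: inertia + pull toward personal best + pull toward global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    return pos, vel
```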


Search Based Weighted Multi-Bit Flipping Algorithm for High-Performance Low-Complexity Decoding of LDPC Codes

In this paper, two new hybrid algorithms are proposed for decoding Low-Density Parity-Check (LDPC) codes. The original version of the proposed algorithms is named Search-Based Weighted Multi-Bit Flipping (SWMBF). The main idea of these algorithms is to flip multiple variable bits in each iteration, choosing the flips that lead to the syndrome vector with the least Hamming weight. To achieve this, the proposed algo...
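
A minimal sketch of syndrome-guided bit flipping is given below, only to fix ideas: unlike SWMBF, which flips several weighted bits per iteration, this toy version greedily flips the single bit participating in the most unsatisfied parity checks.

```python
import numpy as np

def bit_flip_decode(H, x, max_iters=50):
    """H: binary parity-check matrix (checks x bits); x: hard-decision bits."""
    x = x.copy()
    for _ in range(max_iters):
        syndrome = H @ x % 2
        if not syndrome.any():          # all checks satisfied: codeword found
            return x, True
        # Count the unsatisfied checks touching each bit; flip the worst bit.
        unsat = H.T @ syndrome          # per-bit count of failed checks
        x[np.argmax(unsat)] ^= 1
    return x, False
```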



Journal:

Volume:   Issue:

Pages:   -

Publication year: 2006