Why Neural Networks Are Computationally Efficient Approximators: An Explanation

Authors

  • Jaime Nava
  • Vladik Kreinovich
Abstract

Many real-life dependencies can be reasonably accurately described by linear functions. If we want a more accurate description, we need to take non-linear terms into account. To take non-linear terms into account, we can either explicitly add quadratic terms to the regression equation, or, alternatively, we can use a neural network with a non-linear activation function. At first glance, regression algorithms should work faster, but in practice, a neural network approximation often turns out to be more computationally efficient. In this paper, we provide a reasonable explanation for this empirical fact.

1 Formulation of the Problem

Practical need to find dependencies. In practice, it often happens that we know (or conjecture) that a quantity y depends on quantities x1, ..., xn, but we do not know the exact form of this dependence. In such situations, we must determine this dependence y = f(x1, ..., xn) experimentally. For that, in several (S) situations s = 1, ..., S, we measure the values of both the dependent variable y and of the independent variables xi. Then, we use the results (x1^(s), ..., xn^(s), y^(s)) of these measurements to find a function f(x1, ..., xn) which is consistent with all these measurement results, i.e., for which y^(s) ≈ f(x1^(s), ..., xn^(s)) for all s from 1 to S. (The equality is usually approximate, since the measurements are approximate and the value y is often only approximately determined by the values of the variables x1, ..., xn.)
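The two options the abstract contrasts can be sketched in a minimal numerical experiment. The toy dependence below is hypothetical (not from the paper): we fit it once by explicit quadratic regression solved via least squares, and once by a small one-hidden-layer tanh network trained with plain gradient descent.

```python
import numpy as np

# Hypothetical toy data: y depends nonlinearly on x1, x2.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 1.0 + 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 0.3 * X[:, 1] ** 2

# Option 1: explicit quadratic regression -- augment the inputs with all
# degree-2 terms x_i * x_j and solve the resulting least-squares problem.
def quadratic_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x1, x1 * x2, x2 * x2])

A = quadratic_features(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
quad_pred = A @ coef

# Option 2: a one-hidden-layer network with a tanh activation, trained by
# full-batch gradient descent on the mean squared error (a minimal sketch;
# the hidden width, learning rate, and iteration count are arbitrary choices).
H = 8                                   # hidden units
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    pred = h @ W2 + b2                  # linear output layer
    err = pred - y
    # Backpropagate the squared-error gradient through both layers.
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

nn_pred = np.tanh(X @ W1 + b1) @ W2 + b2
print("quadratic RMSE:", np.sqrt(np.mean((quad_pred - y) ** 2)))
print("network   RMSE:", np.sqrt(np.mean((nn_pred - y) ** 2)))
```

Since the toy dependence is itself quadratic, the explicit regression recovers it essentially exactly, while the network only approximates it; the paper's point concerns the computational cost of the two routes as the number of variables grows, not their accuracy on one small example.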


Similar resources

Wavelet Neural Networks Are Asymptotically Optimal Approximators for Functions of One Variable

Neural networks are universal approximators. For example, it has been proved (Hornik et al.) that for every ε > 0, an arbitrary continuous function on a compact set can be ε-approximated by a 3-layer neural network. This and other results prove that in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? In addition to neura...


STRUCTURAL DAMAGE DETECTION BY MODEL UPDATING METHOD BASED ON CASCADE FEED-FORWARD NEURAL NETWORK AS AN EFFICIENT APPROXIMATION MECHANISM

Vibration-based techniques of structural damage detection using the model updating method are computationally expensive for large-scale structures. In this study, after precisely locating the eventual damage of a structure using a modal strain energy based index (MSEBI), to efficiently reduce the computational cost of model updating during the optimization process of damage severity detection, the M...


High-accuracy value-function approximation with neural networks applied to the acrobot

Several reinforcement-learning techniques have already been applied to the Acrobot control problem, using linear function approximators to estimate the value function. In this paper, we present experimental results obtained by using a feedforward neural network instead. The learning algorithm used was model-based continuous TD(λ). It generated an efficient controller, producing a high-accuracy ...


Biologically Sound Neural Networks for Embedded Systems Using OpenCL

Artificial neural networks (ANNs) are general function approximators and noise resistant, and therefore popular in many applications. Researchers in the field of computational intelligence have shown that biologically sound spiking neural networks (SNNs) are comparable to, or even more powerful than, traditional ANNs [1]. However, such neural networks are usually computa...


Cascade-Correlation Neural Networks: A Survey

This paper is an overview of cascade-correlation neural networks, which form a specific class of neural network function approximators. They are based on a special architecture that autonomously adapts to the application and makes training much more efficient than the widely used backpropagation algorithm. This survey describes the cascade-correlation architecture variants and shows import...




Publication date: 2011