Optimal Universal Parallel Computer, Neural Networks, and Kolmogorov’s Theorem

Authors

  • David A. Sprecher
  • Andrew Bernat
  • Vladik Kreinovich
  • Luc Longpré
Abstract

How can we design the fastest parallel computer architecture that is universal, in the sense that it can perform arbitrary computations? To make computations faster, we must divide them into the simplest possible (and thus fastest possible) processing elements working in parallel. The simplest possible operation with real numbers x1, ..., xn is computing their linear combination. However, if we only have these linear processing elements, we can only compute linear functions. So, to make the computer universal, we also need some nonlinear processing elements that compute nonlinear functions y = f(x1, ..., xn). In general, the greater the number n of inputs, the more time it takes to process them and compute f. So, the simplest nonlinear processing element computes a function of one variable, f(x). To get a general computation, we must combine the resulting processing elements. The time of a parallel computation increases with the number of layers, so the fewer layers, the faster the computation. We show that:

  • with one or two layers, we do not get a universal computer;
  • for an appropriate three-layer architecture, we get a neural architecture that enables us to approximate an arbitrary nonlinear function; we also prove that three layers are not sufficient to exactly represent all nonlinear functions;
  • finally, with four layers, Kolmogorov’s superposition theorem enables us to exactly represent all nonlinear functions.

We also discuss whether these results remain valid if, in addition to computation time (as described by the number of layers), we take communication time into consideration.

1. AN INFORMAL INTRODUCTION: WHAT IS A UNIVERSAL OPTIMAL PARALLEL COMPUTER

Why parallel. Many real-life computational problems (weather prediction, etc.) take too long to compute. A well-known way to speed up computations is to perform them in parallel, by dividing them into smaller and thus faster subtasks that can be performed simultaneously rather than sequentially. The simpler these subtasks, the faster they are performed, and therefore the faster the resulting computation. The efficiency of parallel computations depends essentially on the architecture of the corresponding computer. In this paper, we want to decide which architecture is the most suitable.

Universal. In principle, we can choose a specific problem (e.g., weather prediction) and find an architecture that is best suited for this particular problem. However, finding such an architecture is in itself a difficult task. Moreover, there are many different computational...
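The three-layer architecture from the abstract (a layer of linear combinations, a layer of univariate nonlinearities applied to single numbers, and a final linear combination) can be sketched in a few lines. This is a minimal illustration only: the tanh nonlinearity, the random weights, and the network size are arbitrary choices for the sketch, not taken from the paper.

```python
import math
import random

def three_layer(x, W, b, c):
    """Evaluate a three-layer network on input vector x.

    W: list of weight rows (one per hidden unit), b: biases, c: output weights.
    """
    # Layer 1: each unit computes a linear combination of the inputs.
    s = [sum(wij * xj for wij, xj in zip(row, x)) + bi
         for row, bi in zip(W, b)]
    # Layer 2: a nonlinear function of ONE variable, applied to each number.
    h = [math.tanh(si) for si in s]
    # Layer 3: one final linear combination produces the output.
    return sum(ci * hi for ci, hi in zip(c, h))

# A small random instance: 2 inputs, 8 hidden units.
random.seed(0)
n, H = 2, 8
W = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(H)]
b = [random.uniform(-1, 1) for _ in range(H)]
c = [random.uniform(-1, 1) for _ in range(H)]
y = three_layer([0.3, -0.7], W, b, c)  # a single real output
```

Each hidden unit only ever applies its nonlinearity to one real number, which is exactly the "simplest nonlinear processing element" the abstract argues for.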

Similar articles

Stabilization of Nonlinear Control Systems through Using Zobov’s Theorem and Neural Networks

Zobov’s Theorem is one of the theorems that give conditions for the stability of a nonlinear system with a specific region of attraction. We have applied neural networks to approximate some of the functions in Zobov’s theorem in order to find a controller for a nonlinear controlled system whose control law is difficult to construct mathematically. Finally, the effectiveness and the applica...


Dynamic Sliding Mode Control of Nonlinear Systems Using Neural Networks

Dynamic sliding mode control (DSMC) of nonlinear systems using neural networks is proposed. In DSMC, chattering is removed by the integrator placed before the control input of the plant. However, in DSMC the augmented system is one dimension bigger than the actual system, i.e., the augmented system has more states than the actual system, and then to control such ...


Guaranteed Intervals for Kolmogorov’s Theorem (and Their Possible Relation to Neural Networks)

In 1987, R. Hecht-Nielsen noticed that a theorem that was proved by Kolmogorov in 1957 as a solution to one of Hilbert’s problems, actually shows that an arbitrary function f can be implemented by a 3-layer neural network with appropriate activation functions ψ and χ. The more accurately we implement these functions, the better approximation to f we get. Kolmogorov’s proof can be transformed in...
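For reference, Kolmogorov's superposition theorem can be stated as follows (this is the standard modern form; the notation here is conventional and the blurb's ψ and χ correspond to the inner and outer univariate functions, respectively). Every continuous function $f\colon [0,1]^n \to \mathbb{R}$ admits the exact representation

$$
f(x_1,\dots,x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left(\sum_{p=1}^{n} \psi_{pq}(x_p)\right),
$$

where all $\Phi_q$ and $\psi_{pq}$ are continuous functions of one variable. Note that the inner sums and outer sum are linear combinations, so the formula decomposes $f$ into exactly the alternating linear/univariate layers discussed in the abstract above.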


Joint influence of leakage delays and proportional delays on almost periodic solutions for FCNNs

This paper deals with fuzzy cellular neural networks (FCNNs) with leakage delays and proportional delays. Applying the differential inequality strategy, a fixed point theorem, and the almost periodic function principle, some sufficient criteria are obtained which ensure the existence and global attractivity of a unique almost periodic solution for fuzzy cellular neural networks with leakage delays and p...


A Novel Fast Kolmogorov's Spline Complex Network for Pattern Detection

In this paper, we present a new fast specific complex-valued neural network, the fast Kolmogorov’s Spline Complex Network (FKSCN), which might be advantageous especially in various tasks of pattern recognition. The proposed FKSCN uses cross correlation in the frequency domain between the input data and the input weights of neural networks. It is proved mathematically and practically that the nu...
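The frequency-domain cross correlation mentioned in this blurb rests on the correlation theorem: the circular cross correlation of two sequences equals the inverse DFT of conj(DFT(a)) · DFT(b). Below is a generic pure-Python sketch of that identity (with a naive O(N²) DFT for clarity); FKSCN's actual FFT-based scheme is not reproduced here.

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform, O(N^2): fine for illustration.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def circular_cross_correlation(a, b):
    # Correlation theorem: corr(a, b) = IDFT( conj(DFT(a)) * DFT(b) ).
    A, B = dft(a), dft(b)
    prod = [ak.conjugate() * bk for ak, bk in zip(A, B)]
    return [c.real for c in idft(prod)]

# Correlating a unit impulse against a shifted impulse recovers the shift.
r = circular_cross_correlation([1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0])
```

With an FFT in place of the naive DFT, this computes all N correlation lags in O(N log N) instead of O(N²), which is the speedup such frequency-domain networks exploit.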




Publication date: 2008