Deep, Narrow Sigmoid Belief Networks Are Universal Approximators

Authors

  • Ilya Sutskever
  • Geoffrey E. Hinton
Abstract

In this note, we show that exponentially deep belief networks can approximate any distribution over binary vectors to arbitrary accuracy, even when the width of each layer is limited to the dimensionality of the data. We further show that such networks can be greedily learned in an easy yet impractical way.
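The abstract considers sigmoid belief networks whose every layer is as wide as the data vector. As a minimal illustration of what such a model looks like generatively, the sketch below draws a binary sample by ancestral (top-down) sampling through a deep, narrow stack of sigmoid layers. The depth, width, and random weights here are illustrative assumptions, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_sbn(weights, biases, top_prior, rng):
    """Draw one binary visible vector by sampling each layer
    from Bernoulli units conditioned on the layer above."""
    # Sample the top layer from its (factorial) prior.
    h = (rng.random(top_prior.shape) < sigmoid(top_prior)).astype(float)
    for W, b in zip(weights, biases):
        p = sigmoid(W @ h + b)              # Bernoulli means for the next layer
        h = (rng.random(p.shape) < p).astype(float)
    return h                                # visible binary vector

n = 4                                       # layer width = data dimensionality
depth = 6                                   # illustrative depth
weights = [rng.normal(size=(n, n)) for _ in range(depth)]
biases = [rng.normal(size=n) for _ in range(depth)]
top_prior = np.zeros(n)

v = sample_sbn(weights, biases, top_prior, rng)
print(v)
```

Note that every layer has exactly `n` units, matching the paper's width restriction; the universality result concerns how many such layers suffice to approximate an arbitrary distribution over binary vectors.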

Similar Resources

Deep Narrow Boltzmann Machines are Universal Approximators

We show that deep narrow Boltzmann machines are universal approximators of probability distributions on the activities of their visible units, provided they have sufficiently many hidden layers, each containing the same number of units as the visible layer. Besides this existence statement, we provide upper and lower bounds on the sufficient number of layers and parameters. These bounds sh...

Deep Belief Networks Are Compact Universal Approximators

Deep Belief Networks (DBN) are generative models with many layers of hidden causal variables, recently introduced by Hinton et al. (2006), along with a greedy layer-wise unsupervised learning algorithm. Building on Le Roux and Bengio (2008) and Sutskever and Hinton (2008), we show that deep but narrow generative networks do not require more parameters than shallow ones to achieve universal appr...

The Expressive Power of Neural Networks: A View from the Width

The expressive power of neural networks is important for understanding deep learning. Most existing works consider this problem from the view of the depth of a network. In this paper, we study how width affects the expressiveness of neural networks. Classical results state that depth-bounded (e.g. depth-2) networks with suitable activation functions are universal approximators. We show a univer...

Neural networks with a continuous squashing function in the output are universal approximators

In 1989, Hornik as well as Funahashi established that multilayer feedforward networks without the squashing function in the output layer are universal approximators. This result has often been used improperly, because it has been applied to multilayer feedforward networks with the squashing function in the output layer. In this paper, we will prove that this kind of neural network is also unive...

Refinements of Universal Approximation Results for Deep Belief Networks and Restricted Boltzmann Machines

We improve recently published results about the resources of restricted Boltzmann machines (RBM) and deep belief networks (DBN) required to make them universal approximators. We show that any distribution p on the set {0,1}^n of binary vectors of length n can be arbitrarily well approximated by an RBM with k-1 hidden units, where k is the minimal number of pairs of binary vectors differing in only o...


Journal:
  • Neural Computation

Volume 20, Issue 11

Pages: -

Published: 2008