Search results for: perceptrons

Number of results: 1707

1997
P. Moerland, E. Fiesler, I. Saxena

All-optical multilayer perceptrons differ in various ways from the ideal neural network model. Examples are the use of non-ideal activation functions, which are truncated, asymmetric, and have a non-standard gain; the restriction of the network parameters to non-negative values; and the limited accuracy of the weights. In this paper, a backpropagation-based learning rule is presented that compensates...
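To make those constraints concrete, here is a minimal NumPy sketch, not the paper's actual learning rule: a tiny MLP trained with a truncated, asymmetric activation of non-standard gain, with weights kept non-negative and rounded to limited precision after each update. All names and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch only; the hyperparameters and the straight-through
# treatment of the clipping are assumptions, not the paper's rule.
import numpy as np

rng = np.random.default_rng(0)

def optical_sigmoid(x, gain=0.5, lo=0.1, hi=0.8):
    """Non-ideal activation: a sigmoid with non-standard gain, truncated
    to an asymmetric output range [lo, hi]."""
    return np.clip(1.0 / (1.0 + np.exp(-gain * x)), lo, hi)

def quantize(w, levels=64):
    """Limited weight accuracy: round weights to a fixed number of levels."""
    return np.round(w * levels) / levels

# One hidden layer; all weights kept non-negative after every update.
W1 = rng.uniform(0.0, 0.5, size=(2, 4))
W2 = rng.uniform(0.0, 0.5, size=(4, 1))
X = rng.uniform(0.0, 1.0, size=(32, 2))
y = np.where(X.sum(axis=1, keepdims=True) > 1.0, 0.8, 0.2)  # reachable targets

lr = 0.5
for _ in range(200):
    h = optical_sigmoid(X @ W1)
    out = optical_sigmoid(h @ W2)
    # Gradient of the squared error through the smooth part of the
    # activation (gain factor included); the clipping is ignored in the
    # backward pass, a straight-through-style approximation.
    d_out = (out - y) * 0.5 * out * (1.0 - out)
    d_h = (d_out @ W2.T) * 0.5 * h * (1.0 - h)
    W2 = np.maximum(0.0, quantize(W2 - lr * h.T @ d_out))
    W1 = np.maximum(0.0, quantize(W1 - lr * X.T @ d_h))
```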

This paper presents the results of Persian handwritten word recognition based on the mixture-of-experts (ME) technique. In the basic form of ME, the problem space is automatically divided into several subspaces for the experts, and the outputs of the experts are combined by a gating network. In our proposed model, we used a mixture of experts of multilayer perceptrons with a momentum term in the classification ...
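A minimal forward-pass sketch of the mixture-of-experts combination described above; the architecture sizes, names, and training details are assumptions, not taken from the paper.

```python
# Each expert is a small MLP; a gating network produces softmax weights
# over experts, and the mixture output is the gate-weighted sum.
import numpy as np

rng = np.random.default_rng(1)
n_experts, d_in, d_hidden, n_classes = 3, 64, 32, 10

def mlp_forward(x, W1, W2):
    return np.tanh(x @ W1) @ W2

experts = [(rng.normal(scale=0.1, size=(d_in, d_hidden)),
            rng.normal(scale=0.1, size=(d_hidden, n_classes)))
           for _ in range(n_experts)]
W_gate = rng.normal(scale=0.1, size=(d_in, n_experts))

def moe_forward(x):
    # Gating network: softmax over per-expert scores.
    g = np.exp(x @ W_gate)
    g /= g.sum(axis=1, keepdims=True)
    # Combine expert outputs weighted by the gate.
    ys = np.stack([mlp_forward(x, W1, W2) for W1, W2 in experts], axis=1)
    return (g[:, :, None] * ys).sum(axis=1)

x = rng.normal(size=(8, d_in))     # e.g. feature vectors of handwritten words
print(moe_forward(x).shape)        # (8, n_classes)
```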

Journal: Neurocomputing, 2007
Madalina Olteanu, Joseph Rynkiewicz

The BIC criterion is widely used by the neural-network community for model selection, although its convergence properties are not always theoretically established. In this paper we focus on estimating the number of components in a mixture of multilayer perceptrons and on proving the convergence of the BIC criterion in this framework. The penalized marginal-likelihood for mixture models and hidd...
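For reference, the BIC computation such model selection minimizes is BIC = -2·logL + d·log(n), with d the number of free parameters and n the sample size; the numbers below are placeholders, not results from the paper.

```python
# Selecting the number of mixture components by minimizing BIC over k.
import math

def bic(log_likelihood, n_params, n_samples):
    return -2.0 * log_likelihood + n_params * math.log(n_samples)

n = 1000
# Hypothetical fitted log-likelihoods and parameter counts for mixtures
# of k multilayer perceptrons:
fits = {1: (-1520.0, 41), 2: (-1300.0, 83), 3: (-1290.0, 125)}
scores = {k: bic(ll, d, n) for k, (ll, d) in fits.items()}
best_k = min(scores, key=scores.get)
print(scores, "-> selected k =", best_k)   # the extra fit of k=3 does not
                                           # pay for its extra parameters
```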

Journal: Neurocomputing, 2006
Joseph Rynkiewicz

This work concerns the estimation of multidimensional nonlinear regression models using multilayer perceptrons (MLPs). The main problem with such models is that we need to know the covariance matrix of the noise to get an optimal estimator. However, we show in this paper that if we choose as the cost function the logarithm of the determinant of the empirical error covariance matrix, then we get...
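A short sketch of the cost function the abstract describes, assuming the residuals are collected in an n × d matrix; function names and shapes are illustrative.

```python
# Minimize log det of the empirical covariance of the multidimensional
# residuals, instead of assuming a known noise covariance.
import numpy as np

def log_det_cost(y_true, y_pred):
    """log det( (1/n) * E^T E ), with E the n x d residual matrix."""
    resid = y_true - y_pred                  # shape (n, d)
    n = resid.shape[0]
    sigma = resid.T @ resid / n              # empirical error covariance
    sign, logdet = np.linalg.slogdet(sigma)  # numerically stable log det
    return logdet

# Toy usage with a random stand-in for the model output:
rng = np.random.default_rng(2)
y = rng.normal(size=(200, 3))
y_hat = y + rng.normal(scale=0.1, size=y.shape)
print(log_det_cost(y, y_hat))
```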

2013
Hannes Schulz, Kyunghyun Cho, Tapani Raiko, Sven Behnke

Supervised training of multi-layer perceptrons (MLPs) with only a few labeled examples is prone to overfitting. Pretraining an MLP with unlabeled samples of the input distribution may achieve better generalization. Usually, pretraining is done in a layer-wise, greedy fashion, which limits the complexity of the learnable features. To overcome this limitation, two-layer contractive encodings have bee...
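As background to the contractive encodings mentioned above, here is the generic single-layer contractive auto-encoder objective (reconstruction error plus the squared Frobenius norm of the encoder Jacobian); this is a sketch of the building block, not the paper's two-layer variant.

```python
import numpy as np

rng = np.random.default_rng(3)
d_in, d_h = 20, 8
W = rng.normal(scale=0.1, size=(d_in, d_h))
b = np.zeros(d_h)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_loss(x, lam=0.1):
    h = sigmoid(x @ W + b)              # encoder
    x_rec = h @ W.T                     # tied-weight decoder
    rec = np.mean((x - x_rec) ** 2)
    # Squared Frobenius norm of the encoder Jacobian; for a sigmoid
    # encoder J = diag(h*(1-h)) @ W.T, so it factorizes as below.
    jac = np.mean(((h * (1 - h)) ** 2) @ (W ** 2).sum(axis=0))
    return rec + lam * jac

x = rng.normal(size=(64, d_in))
print(contractive_loss(x))
```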

1995
Nikolai K. Vereshchagin

We prove that perceptrons separating Boolean matrices in which each row has a one from matrices in which many rows have no one must have either large total weight or large order. This result extends the one-in-a-box theorem of Minsky and Papert [13], which states that perceptrons of small order cannot decide whether each row of a given Boolean matrix has a one. As a consequence, we prove that AM ∩ co-AM ⊈ PP...
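For context, the one-in-a-box theorem referenced here is commonly stated as follows; this is recalled from the literature, not quoted from the paper, and the box size 4m² is the usual choice.

```latex
% One-in-a-box theorem (Minsky & Papert), usual form: partition the
% retina R into m disjoint "boxes" A_1, ..., A_m of 4m^2 points each;
% the predicate "every box contains a point of X" has order at least m.
\[
  \psi(X) \;=\; \bigwedge_{i=1}^{m} \bigl[\, X \cap A_i \neq \emptyset \,\bigr],
  \qquad |A_i| = 4m^{2}
  \;\;\Longrightarrow\;\;
  \operatorname{order}(\psi) \;\geq\; m .
\]
```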

2006
Maciej Grzenda, Bohdan Macukow

In many technical problems, the processes of interest could be modelled precisely if all the relevant information were available. On the other hand, detailed modelling is frequently not feasible due to the cost of acquiring the appropriate data. The paper discusses how self-organising maps and multilayer perceptrons can be used to develop a two-stage algorithm for the autonomous construction of pr...
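One plausible reading of such a two-stage scheme, sketched with assumed details since the abstract is truncated: a small self-organising map partitions the inputs, then a local model is fitted per SOM unit (a linear fit stands in for the multilayer perceptron to keep the sketch short).

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 5))
y = X[:, 0] ** 2 + X[:, 1]                 # toy regression target

# --- Stage 1: train a 1-D SOM with k units ------------------------------
k, lr, sigma = 4, 0.5, 1.0
codebook = rng.normal(size=(k, X.shape[1]))
for t in range(2000):
    x = X[rng.integers(len(X))]
    win = np.argmin(((codebook - x) ** 2).sum(axis=1))
    # Gaussian neighbourhood around the winning unit on the 1-D grid.
    nb = np.exp(-((np.arange(k) - win) ** 2) / (2 * sigma ** 2))
    codebook += lr * nb[:, None] * (x - codebook)
    lr *= 0.999

# --- Stage 2: one local model per SOM unit ------------------------------
assign = np.argmin(((X[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
experts = {u: np.linalg.lstsq(X[assign == u], y[assign == u], rcond=None)[0]
           for u in range(k) if (assign == u).any()}
print({u: int((assign == u).sum()) for u in experts})  # points per region
```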

Journal: IEEE Transactions on Neural Networks, 2000
Guang-Bin Huang, Yan Qiu Chen, Haroon Atique Babri

Multilayer perceptrons with hard-limiting (signum) activation functions can form complex decision regions. It is well known that a three-layer perceptron (two hidden layers) can form arbitrary disjoint decision regions and that a two-layer perceptron (one hidden layer) can form single convex decision regions. This paper further proves that single-hidden-layer feedforward neural networks (SLFNs) wit...
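A concrete instance of the convex-region claim: a one-hidden-layer network of signum units whose hidden units encode half-planes and whose output unit fires only on their intersection (here, the unit square). The specific weights are illustrative.

```python
import numpy as np

sgn = lambda z: np.where(z >= 0, 1.0, -1.0)

# Hidden layer: four half-planes x >= 0, x <= 1, y >= 0, y <= 1.
W = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([0.0, 1.0, 0.0, 1.0])

def inside_square(p):
    h = sgn(W @ p + b)            # +1 iff p is on the correct side
    # Output unit: thresholded so it fires only if all 4 hidden units = +1.
    return sgn(h.sum() - 3.5)     # +1 inside the square, -1 outside

print(inside_square(np.array([0.5, 0.5])))   # +1 (inside)
print(inside_square(np.array([2.0, 0.5])))   # -1 (outside)
```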

[Chart: number of search results per year]