Search results for: feedforward neural networks

Number of results: 638547

2000
Bao-Liang Lu Michinori Ichikawa

Various theoretical results show that learning in conventional feedforward neural networks such as multilayer perceptrons is NP-complete. In this paper we show that learning in min-max modular (M3) neural networks is tractable. The key to coping with NP-complete problems in M3 networks is to decompose a large-scale problem into a number of manageable, independent subproblems and to make the lea...
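A minimal sketch of the decomposition-and-combination idea described above, assuming the usual M3 formulation (positives split into I subsets, negatives into J subsets, one small module trained per (i, j) pair, combined by MIN over negative subsets and then MAX over positive subsets); the logistic modules, sizes, and data here are illustrative, not the paper's setup:

```python
# Hedged sketch of min-max modular (M3) combination. Each module only
# sees one small (pos_i, neg_j) subproblem, so all modules are cheap to
# train and fully independent of one another.
import numpy as np

def train_module(X_pos, X_neg, lr=0.1, epochs=200):
    """Train one tiny logistic-regression module on a (pos_i, neg_j) pair."""
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(len(X_pos)), np.zeros(len(X_neg))])
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y                        # gradient of the log-loss
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

def m3_predict(x, modules):
    """MAX over positive subsets of MIN over negative subsets."""
    scores = [[1.0 / (1.0 + np.exp(-(x @ w + b))) for (w, b) in row]
              for row in modules]
    return max(min(row) for row in scores)

# Usage: split the training sets and train I*J independent modules.
rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, (200, 2)); neg = rng.normal(-1.0, 1.0, (200, 2))
pos_subsets = np.array_split(pos, 2); neg_subsets = np.array_split(neg, 2)
modules = [[train_module(p, n) for n in neg_subsets] for p in pos_subsets]
print(m3_predict(np.array([1.5, 1.0]), modules))  # near 1 for a positive point
```

Because every subproblem is small and self-contained, the modules can be trained in parallel, which is what keeps the overall learning problem manageable.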

1998
Leandro Nunes de Castro Fernando José Von Zuben


2016
Oscar Fontenla-Romero Beatriz Pérez-Sánchez Bertha Guijarro-Berdiñas Diego Rego-Fernández

With the appearance of huge data sets, new challenges have arisen regarding the scalability and efficiency of Machine Learning algorithms, and both distributed computing and randomized algorithms have become effective ways to handle them. Taking advantage of these two approaches, a distributed learning algorithm for two-layer neural networks is proposed. Results demonstrate a similar accuracy whe...
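The general flavor of such a distributed, randomized approach can be sketched as follows (a hedged illustration, not the authors' exact algorithm): a two-layer network with a fixed random hidden layer whose output weights solve a regularized least-squares problem. The sufficient statistics H^T H and H^T y are additive over data shards, so each worker computes them locally and a coordinator simply sums them:

```python
# Hedged sketch: random hidden layer + distributed least squares for the
# output weights. All sizes, names, and the tanh feature map are illustrative.
import numpy as np

def local_stats(X_shard, y_shard, W_hidden, b_hidden):
    """One worker's contribution: hidden activations and their Gram sums."""
    H = np.tanh(X_shard @ W_hidden + b_hidden)   # random feature map
    return H.T @ H, H.T @ y_shard

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=1000)
n_hidden, lam = 50, 1e-2
W = rng.normal(size=(5, n_hidden)); b = rng.normal(size=n_hidden)

# Simulate four workers on disjoint shards and reduce their statistics.
A = np.zeros((n_hidden, n_hidden)); c = np.zeros(n_hidden)
for Xs, ys in zip(np.array_split(X, 4), np.array_split(y, 4)):
    A_i, c_i = local_stats(Xs, ys, W, b)
    A += A_i; c += c_i

beta = np.linalg.solve(A + lam * np.eye(n_hidden), c)  # output weights
pred = np.tanh(X @ W + b) @ beta
print("train MSE:", np.mean((pred - y) ** 2))
```

Only the small aggregated matrices cross the network, never the raw data, which is what makes the scheme scale to large data sets.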

1997
K. Y. Michael Wong

I consider layered neural networks in which the weights are trained by optimizing an arbitrary performance function with respect to a set of examples. Using the cavity method and many-body diagrammatic techniques, the evolution in the network can be described by an overlap and a noise parameter. Parameter pairs corresponding to various input conditions are found to collapse on a universal curve...
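For readers unfamiliar with the order parameters involved, the overlap in this literature is typically defined as below (our notation; the paper's exact parameterization and noise definition may differ):

```latex
% Standard overlap order parameter between the trained weight vector w
% and a reference (teacher) weight vector w*, both in R^N:
R \;=\; \frac{1}{N}\sum_{i=1}^{N} w_i\, w_i^{*}
```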

2015
Tze Yuang Chong Rafael E. Banchs Chng Eng Siong Haizhou Li

In this paper, we describe the use of feedforward neural networks to improve the term-distance term-occurrence (TDTO) language model, previously proposed in [1]−[3]. The main idea behind the TDTO model is to separately model the position and occurrence information of words in the history context, so as to better estimate n-gram probabilities. Neural networks have been shown to offer a bet...
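The feedforward ingredient can be illustrated with a standard Bengio-style n-gram neural language model (a hedged sketch of the general technique, not the TDTO model itself; all sizes and names are illustrative):

```python
# Hedged sketch of a feedforward n-gram LM: the n-1 history words are
# embedded, concatenated, passed through a hidden layer, and a softmax
# gives P(w_n | history). Weights here are random, i.e., untrained.
import numpy as np

rng = np.random.default_rng(0)
V, d, h, n = 1000, 32, 64, 4          # vocab, embedding, hidden, n-gram order
E = rng.normal(0, 0.1, (V, d))        # embedding table
W1 = rng.normal(0, 0.1, ((n - 1) * d, h)); b1 = np.zeros(h)
W2 = rng.normal(0, 0.1, (h, V));           b2 = np.zeros(V)

def ngram_probs(history_ids):
    """P(next word | n-1 history word ids), for every word in the vocab."""
    x = E[history_ids].reshape(-1)    # concatenate the history embeddings
    hidden = np.tanh(x @ W1 + b1)
    logits = hidden @ W2 + b2
    logits -= logits.max()            # numerical stability for the softmax
    p = np.exp(logits)
    return p / p.sum()

p = ngram_probs([12, 7, 301])         # a 3-word history for a 4-gram model
print(p.shape, p.sum())               # (1000,) and ~1.0
```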

2007
Michael C. Montgomery Jayakrishnan K. Eledath

The principle of maximum information preservation has been successfully used to derive learning algorithms for self-organizing neural networks. In this paper, we state and apply the corresponding principle for supervised networks: the principle of minimum information loss. We do not propose a new learning algorithm, but rather a pruning algorithm which works to achieve minimum information loss ...
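A hedged sketch of the pruning idea follows; the scoring rule below is our stand-in approximation, not the authors' exact information-loss criterion. Hidden units are ranked by how much their contribution to the output varies over a data sample, and the lowest-ranked units are dropped:

```python
# Hedged sketch: prune the hidden units whose removal (approximately)
# loses the least output information, here proxied by the variance of
# each unit's per-output contribution over a batch of inputs.
import numpy as np

def prune_hidden(X, W1, b1, W2, keep):
    """Keep the `keep` hidden units whose output contribution varies most."""
    H = np.tanh(X @ W1 + b1)                    # hidden activations
    contrib = H[:, :, None] * W2[None, :, :]    # per-unit output contributions
    score = contrib.var(axis=(0, 2))            # low variance ~ low information
    idx = np.argsort(score)[-keep:]
    return W1[:, idx], b1[idx], W2[idx, :]

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))
W1 = rng.normal(size=(8, 20)); b1 = rng.normal(size=20)
W2 = rng.normal(size=(20, 3))
W1p, b1p, W2p = prune_hidden(X, W1, b1, W2, keep=12)
print(W1p.shape, W2p.shape)   # (8, 12) (12, 3)
```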

1998
David Rios Insua

Feedforward neural networks (FFNN) with an unconstrained random number of hidden neurons define flexible non-parametric regression models. In Müller and Rios Insua (1998) we have argued that variable architecture models with a random-size hidden layer significantly reduce the posterior multimodality typical of posterior distributions in neural network models. In this chapter we review the model propo...
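A minimal formulation of the kind of variable-architecture model the abstract describes (our notation, assuming the standard setup in which the number of hidden neurons M is itself random):

```latex
% FFNN regression with a random number M of hidden units; the posterior
% then mixes over architectures (values of M) as well as over weights.
y_i \;=\; \beta_0 + \sum_{j=1}^{M} \beta_j\, \psi\!\left(\gamma_j^{\top} x_i\right) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0,\sigma^2),
\qquad M \sim p(M)
```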

[Chart: number of search results per year]