Search results for: a hidden layer with 24 nodes

Number of results: 15649710

2011
Rui Zhang, Yuan Lan, Guang-Bin Huang, Yeng Chai Soh

Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks (SLFNs) whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. An active topic in ELM research is how to automatically determine network architectures for given applications. In this paper, we propose an extreme learning machine with adaptive grow...
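
For orientation, the basic ELM training rule (hidden-layer parameters drawn at random and left untuned, output weights solved by least squares) can be sketched as below. This is a minimal sketch of plain fixed-size ELM, not the adaptive-growth variant proposed in the paper; the 24-node width simply mirrors the search query.

import numpy as np

def train_elm(X, Y, n_hidden=24, seed=0):
    """Minimal ELM: random, untuned hidden parameters; least-squares output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (never tuned)
    b = rng.standard_normal(n_hidden)                # random hidden biases (never tuned)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden-layer outputs (sigmoid)
    beta = np.linalg.pinv(H) @ Y                     # output weights via the Moore-Penrose pseudoinverse
    return W, b, beta

def predict_elm(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta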

Journal: IEEE Transactions on Neural Networks, 2006
Guang-Bin Huang, Lei Chen, Chee Kheong Siew

According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the networks are allowed to be adjusted. However, as observed in most neural network implementations, tuning all the parameters of the networks may make learning complicated and inefficie...
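
For reference, the SLFN output referred to here has the standard form used in this line of work (notation added here, not quoted from the paper):

f_n(\mathbf{x}) = \sum_{i=1}^{n} \beta_i \, G(\mathbf{a}_i, b_i, \mathbf{x}),
\qquad
G(\mathbf{a}_i, b_i, \mathbf{x}) =
\begin{cases}
g(\mathbf{a}_i \cdot \mathbf{x} + b_i) & \text{additive hidden node},\\
g\!\left(b_i \,\lVert \mathbf{x} - \mathbf{a}_i \rVert\right) & \text{RBF hidden node},
\end{cases}

where g is the activation function, (\mathbf{a}_i, b_i) are the hidden-node parameters, and \beta_i is the output weight of the i-th hidden node.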

2011
Behzad Behzadan

The mutual information neuro-evolutionary system (MINES) presents a novel self-governing approach to determining the optimal number of hidden-layer nodes and their connectivity in a three-layer feed-forward neural network, founded on a theoretical and practical basis. The system is a combination of a feed-forward neural network, the back-propagation algorithm, a genetic algorithm, mutual information and clustering....

1999
Karl-Heinz Temme, Ralph Heider, Claudio Moraga

Neuro-fuzzy modeling has been intensively studied since the early nineties. Recently a method has been disclosed that uses a classical feedforward neural network with just one hidden layer. Nodes of the hidden layer use the logistic function as their activation function, while the output node has a linear activation function. This paper introduces a generalization of the logistic function and eva...
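
The architecture described (logistic hidden nodes, one linear output node) can be sketched as follows; a plain logistic activation is assumed here, since the paper's generalized logistic function is not reproduced in the snippet:

import numpy as np

def forward(x, W1, b1, w2, b2):
    """One-hidden-layer network: logistic hidden nodes, single linear output node."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # logistic activations in the hidden layer
    return w2 @ h + b2                        # linear output node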

2016
Amit Deshpande, Sushrut Karmalkar

The universal approximation theorem for neural networks says that any reasonable function is well-approximated by a two-layer neural network with sigmoid gates, but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the bool...
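
The two-layer approximator the theorem refers to has the standard form below (stated here for reference; the paper's Boolean-cube analogue and its specific bounds are not reproduced):

f(\mathbf{x}) \approx \sum_{i=1}^{N} c_i \, \sigma(\mathbf{w}_i \cdot \mathbf{x} + b_i),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}},

where N, the number of hidden-layer nodes, is exactly the quantity the classical theorem leaves unbounded.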

2005
Mahmut Sinecen, Metehan Makinaci

The purpose of this paper is to assess the value of neural networks for classification of cancer and noncancer prostate cells. Gauss-Markov random field, Fourier entropy, and wavelet average deviation features are calculated from 80 noncancer and 80 cancer prostate cell nuclei. For classification, artificial neural network techniques, namely the multilayer perceptron, radial basis function and learni...

2012
Stavros P. Adam, George D. Magoulas, Michael N. Vrahatis

Designing a feed-forward neural network with optimal topology in terms of complexity (hidden-layer nodes and connections between nodes) and training performance has been a matter of considerable concern since the very beginning of neural network research. Typically, this issue is dealt with by pruning a fully interconnected network with “many” nodes in the hidden layers, eliminating “superfluo...
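
As a concrete, generic illustration of the pruning strategy mentioned (not the specific method proposed in this paper), one common heuristic removes hidden nodes whose outgoing weights are small in magnitude; W_hidden_out is an assumed (n_hidden, n_outputs) weight matrix:

import numpy as np

def prune_hidden_nodes(W_hidden_out, keep_fraction=0.5):
    """Magnitude-based pruning sketch: keep the hidden nodes whose outgoing
    weight vectors have the largest L2 norms; returns the indices to keep."""
    norms = np.linalg.norm(W_hidden_out, axis=1)       # one norm per hidden node
    n_keep = max(1, int(keep_fraction * len(norms)))
    return np.argsort(norms)[-n_keep:]                 # indices of the strongest nodes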

Thesis: Ministry of Science, Research and Technology - Razi University - Faculty of Science, 1391

1. To determine whether differences in birth body mass influenced growth performance in Pipistrellus kuhlii, we studied a total of 12 captive-born neonates. Bats were assigned to two body mass groups: light birth body mass (LBW: 0.89 ± 0.05, n=8) and heavy birth body mass (HBW: 1.35 ± 0.08, n=4). Heavier body mass at birth was associated with rapid postnatal growth (body mass and forearm length) ...

1988
Richard P. Lippmann

A nonlinearity is required before matched filtering in minimum error receivers when the additive noise present is impulsive and highly non-Gaussian. Experiments were performed to determine whether the correct clipping nonlinearity could be provided by a single-input single-output multi-layer perceptron trained with back-propagation. It was found that a multi-layer perceptron with one input ...
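
The receiver structure in question (a memoryless clipping nonlinearity applied sample by sample ahead of a matched filter) can be sketched as below; the soft limiter shown is a generic choice, not the particular nonlinearity learned by the perceptron in the paper:

import numpy as np

def soft_clip(x, limit=1.0):
    """Generic memoryless soft limiter (tanh); suppresses impulsive outliers."""
    return limit * np.tanh(x / limit)

def clipped_matched_filter(received, template):
    """Clip each received sample, then correlate with the known signal template."""
    return np.correlate(soft_clip(received), template, mode="valid")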

Journal: Neurocomputing, 1998
Guido Bugmann

Abstract: The performances of Normalised RBF (NRBF) nets and standard RBF nets are compared in simple classification and mapping problems. In Normalized RBF networks, the traditional roles of weights and activities in the hidden layer are switched. Hidden nodes perform a function similar to a Voronoi tessellation of the input space, and the output weights become the network's output over the pa...
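
For reference, the role switch described can be written out; with Gaussian hidden units \phi_j, the standard RBF and normalised RBF (NRBF) outputs are (standard notation, not quoted from the paper):

y_{\mathrm{RBF}}(\mathbf{x}) = \sum_j w_j \, \phi_j(\mathbf{x}),
\qquad
y_{\mathrm{NRBF}}(\mathbf{x}) = \frac{\sum_j w_j \, \phi_j(\mathbf{x})}{\sum_j \phi_j(\mathbf{x})},
\qquad
\phi_j(\mathbf{x}) = \exp\!\left(-\frac{\lVert \mathbf{x} - \boldsymbol{\mu}_j \rVert^2}{2\sigma^2}\right),

so that in the region where node j dominates, the NRBF output approaches w_j itself, which is the Voronoi-like behaviour described in the abstract.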
