Search results for: hidden layer

Number of results: 345063

2016
Frantisek Grézl, Martin Karafiát

Stacked-Bottle-Neck (SBN) feature extraction is a crucial part of modern automatic speech recognition (ASR) systems. The SBN network traditionally contains a hidden layer between the BN and output layers. Recently, we have observed that an SBN architecture without this hidden layer (i.e. direct BN-layer – output-layer connection) performs better for a single language but fails in scenarios wher...
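As a rough illustration of the architectural choice described above (layer sizes and values are illustrative assumptions, not the authors' configuration), the sketch below contrasts an SBN-style stack that keeps a hidden layer between the bottleneck (BN) layer and the output layer with one that connects the BN layer directly to the output layer:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 440))                    # stacked acoustic features (illustrative size)

# Bottleneck (BN) layer shared by both variants
W_bn = rng.normal(scale=0.01, size=(440, 80))
bn = relu(x @ W_bn)                              # 80-dimensional bottleneck features

# Variant A: hidden layer between the BN and output layers
W_hid = rng.normal(scale=0.01, size=(80, 1500))
W_out_a = rng.normal(scale=0.01, size=(1500, 3000))
y_a = softmax(relu(bn @ W_hid) @ W_out_a)

# Variant B: direct BN-layer -> output-layer connection
W_out_b = rng.normal(scale=0.01, size=(80, 3000))
y_b = softmax(bn @ W_out_b)
```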

2009
André Eugênio Lazzaretti, Fábio Alessandro Guerra, Leandro dos Santos

The identification of nonlinear systems by artificial neural networks has been successfully applied in many applications. In this context, the radial basis function neural network (RBF-NN) is a powerful approach for nonlinear identification. An RBF neural network has an input layer, a hidden layer and an output layer. The neurons in the hidden layer contain Gaussian transfer functions whose outp...
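A minimal sketch of the forward pass just described, assuming Gaussian hidden units with fixed centres and widths; all sizes and values below are illustrative, not taken from the paper:

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Input layer -> Gaussian hidden layer -> linear output layer."""
    # Squared distance from the input to each hidden-unit centre
    d2 = np.sum((centers - x) ** 2, axis=1)
    # Gaussian transfer functions of the hidden layer
    h = np.exp(-d2 / (2.0 * widths ** 2))
    # Linear combination at the output layer
    return h @ weights

rng = np.random.default_rng(0)
centers = rng.normal(size=(10, 3))   # 10 hidden units, 3-dimensional input
widths = np.full(10, 0.5)
weights = rng.normal(size=(10, 1))
print(rbf_forward(rng.normal(size=3), centers, widths, weights))
```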

2016
R. Ben Abdennour, M. Ltaïef

Neural networks are widely used in signal and image processing techniques for pattern recognition and template matching. In this work, neural networks are used for image compression. In order to improve the performance of the image compression algorithm, DWT is combined with NN to achieve better MSE and an increase in compression ratio greater than 100%. The NN architecture achieves a maximum of 98%...
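A minimal sketch of the kind of pipeline the snippet describes, assuming PyWavelets for the DWT step; the wavelet choice and the quantiser standing in for the neural compressor are assumptions, not the paper's configuration:

```python
import numpy as np
import pywt

def dwt_then_compress(image, compressor):
    """One-level 2D DWT, then a compressor (in the paper, an NN) applied per subband."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')   # approximation + detail subbands
    return [compressor(band) for band in (cA, cH, cV, cD)]

# Stand-in for the neural compressor: uniform quantisation of each subband
quantise = lambda band: np.round(band / 4.0).astype(np.int16)
codes = dwt_then_compress(np.random.rand(64, 64), quantise)
```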

2009
Sultan Noman Qasem, Siti Mariyam Hj. Shamsuddin

This study proposes RBF Network hybrid learning with Particle Swarm Optimization for better convergence, error rates and classification results. In conventional RBF Network structure, different layers perform different tasks. Hence, it is useful to split the optimization process of hidden layer and output layer of the network accordingly. RBF Network hybrid learning involves two phases. The fir...
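A rough sketch of the two-phase split described above: the hidden layer is set up first, then the output layer is fitted. For brevity the sketch picks centres from the training data and solves the output weights by least squares, whereas the study optimises the phases with Particle Swarm Optimization, so treat the optimisers here as placeholders:

```python
import numpy as np

def train_rbf_two_phase(X, y, n_hidden=10, width=1.0, rng=None):
    rng = rng or np.random.default_rng(0)
    # Phase 1: place the hidden-layer centres (here: random training points;
    # the study tunes this phase with Particle Swarm Optimization)
    centers = X[rng.choice(len(X), n_hidden, replace=False)]
    # Gaussian activations of the hidden layer
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    H = np.exp(-d2 / (2.0 * width ** 2))
    # Phase 2: fit the output-layer weights (here: linear least squares)
    W, *_ = np.linalg.lstsq(H, y, rcond=None)
    return centers, W

X, y = np.random.rand(100, 2), np.random.rand(100, 1)
centers, W = train_rbf_two_phase(X, y)
```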

Journal: Physical Review E, Statistical, Nonlinear, and Soft Matter Physics, 2002
Michal Rosen-Zvi, Andreas Engel, Ido Kanter

The generalization ability and storage capacity of a treelike two-layered neural network with a number of hidden units scaling as the input dimension is examined. The mapping from the input to the hidden layer is via Boolean functions; the mapping from the hidden layer to the output is done by a perceptron. The analysis is within the replica framework where an order parameter characterizing the...
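For readers unfamiliar with the architecture, the treelike (non-overlapping receptive field) two-layer machine sketched above can be written as follows; the notation is a generic rendering, not the paper's:

```latex
% Input split into K disjoint receptive fields x_1, ..., x_K (tree structure),
% with K scaling as the input dimension; hidden units are Boolean functions.
\tau_k = B_k(\mathbf{x}_k) \in \{-1,+1\}, \qquad k = 1,\dots,K
% Hidden-to-output mapping: a perceptron over the hidden units.
\sigma = \operatorname{sign}\!\Big(\sum_{k=1}^{K} w_k\,\tau_k\Big)
```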

Journal: CoRR, 2015
Guido Montúfar

We establish upper bounds for the minimal number of hidden units for which a binary stochastic feedforward network with sigmoid activation probabilities and a single hidden layer is a universal approximator of Markov kernels. We show that each possible probabilistic assignment of the states of n output units, given the states of k ≥ 1 input units, can be approximated arbitrarily well by a netwo...
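To make the approximated object concrete: a Markov kernel here is a conditional distribution over the n output bits given the k input bits, and a single-hidden-layer stochastic sigmoid network defines one by marginalising over its m hidden units. The rendering below is generic notation, not the paper's:

```latex
% Stochastic sigmoid unit: fires with probability
\Pr(h_j = 1 \mid \mathbf{x}) = \sigma\big(\mathbf{w}_j^{\top}\mathbf{x} + b_j\big),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}}
% Markov kernel realised by the network, from k input bits to n output bits:
P(\mathbf{y} \mid \mathbf{x}) \;=\; \sum_{\mathbf{h} \in \{0,1\}^{m}}
P(\mathbf{y} \mid \mathbf{h})\, P(\mathbf{h} \mid \mathbf{x})
```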

Journal: Computer and Information Science, 2011
Mazen Abu-Zaher

Steganography is the science of writing hidden messages in such a way that no one except the sender and the intended recipient can realize there is a hidden message. Compared with cryptography, steganography works in security as a supplement to cryptography, not a replacement for it. So to add another layer of protection we can encrypt the hidden message (Johnson & Jajodia, 1998). A survey of curr...
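A minimal sketch of the encrypt-then-hide layering the snippet argues for; the repeating-key XOR stands in for a real cipher and least-significant-bit embedding is just one illustrative carrier, neither being taken from the cited survey:

```python
import numpy as np

def toy_encrypt(message: bytes, key: bytes) -> bytes:
    # Placeholder cipher (repeating-key XOR); use a real cipher in practice.
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def embed_lsb(cover: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the payload bits in the least significant bits of an 8-bit cover image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = cover.flatten().copy()
    assert bits.size <= flat.size, "cover too small for the payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(cover.shape)

cover = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, toy_encrypt(b"meet at dawn", b"key"))
```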

Journal: CoRR, 2017
Thomas Epelbaum

[Figure from the paper: the ReLU activation function and its derivative, ReLU(x) and ReLU'(x), shown beside a fully connected feedforward network diagram with an input layer h^(0) (a bias unit plus inputs), hidden layers h^(1) through h^(h), and an output layer h^(N); panels labelled Input layer, Hidden layer 1, Hidden layer h, ...]
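Since the entry above is figure text rather than an abstract, here is a small sketch of what the figure depicts: the ReLU activation, its derivative, and a fully connected forward pass from the input layer h^(0) through the hidden layers to the output layer h^(N); layer sizes are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def relu_prime(x):
    # Derivative of ReLU: 1 for x > 0, 0 otherwise (value at x = 0 set to 0)
    return (x > 0).astype(x.dtype)

def forward(x, weights, biases):
    """h^(0) is the input; each subsequent layer applies ReLU(W h + b)."""
    h = x
    for W, b in zip(weights, biases):
        h = relu(W @ h + b)
    return h  # h^(N), the output layer

rng = np.random.default_rng(0)
sizes = [6, 7, 6, 5]   # input, two hidden layers, output (illustrative)
weights = [rng.normal(scale=0.1, size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.normal(size=6), weights, biases))
```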

1995
Rajesh P.N. Rao

We present a biologically-motivated approach to the problem of perception-based homing by an autonomous mobile robot. A three-layered self-organizing network is used to autonomously learn the desired mapping from perceptions to actions. The network, which bears some similarities to the structure of the mammalian cerebellum, is initially trained by teleoperating the robot on a small number of ho...

2007
Simon Osindero, Geoffrey E. Hinton

We describe an efficient learning procedure for multilayer generative models that combine the best aspects of Markov random fields and deep, directed belief nets. The generative models can be learned one layer at a time and when learning is complete they have a very fast inference procedure for computing a good approximation to the posterior distribution in all of the hidden layers. Each hidden...
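As context for the layer-at-a-time learning mentioned above, the sketch below shows a single contrastive-divergence (CD-1) weight update between a visible and a hidden layer; it is a generic illustration (biases omitted), not the exact procedure from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, lr=0.01, rng=np.random.default_rng(0)):
    """One CD-1 update of the weights between a visible layer v and a hidden layer h."""
    # Up pass: hidden probabilities and a binary sample given the data
    ph0 = sigmoid(v0 @ W)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Down-up pass: reconstruct the visible layer, then recompute hidden probabilities
    pv1 = sigmoid(h0 @ W.T)
    ph1 = sigmoid(pv1 @ W)
    # Update: data-driven statistics minus reconstruction-driven statistics
    return W + lr * (v0.T @ ph0 - pv1.T @ ph1) / v0.shape[0]

# After one layer is trained, its hidden activations serve as the data for the next layer.
```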
