Search results for: hidden layer

Number of results: 345,063

2005
Stefan Wermter Cornelius Weber Mark Elshaw Vittorio Gallese Friedemann Pulvermüller

In this paper we describe two models for neural grounding of robotic language processing in actions. These models are inspired by concepts of the mirror neuron system in order to produce learning by imitation by combining high-level vision, language and motor command inputs. The models learn to perform and recognise three behaviours, ‘go’, ‘pick’ and ‘lift’. The first single-layer model uses an...

2004
Zoran Obradović Rangarajan Srikumar

In a companion paper, a constructive approach to designing feedforward neural networks using genetic algorithms is proposed [7, 8]. The algorithm constructs networks of close-to-optimum size by growing hidden layer units in a problem-specific manner, and it has very good generalization properties. In this paper, in order to make the constructive design algorithm computationally efficient, a two sta...

2012
Chia-Ling Chang Chung-Sheng Liao

The present study focuses on the parameters of Artificial Neural Networks (ANNs). Sensitivity analysis is applied to assess the effect of ANN parameters on the prediction of the turbidity of raw water in a water treatment plant. The results show that the transfer function of the hidden layer is a critical parameter of the ANN. When the transfer function changes, the reliability of pre...

Journal: Pattern Recognition Letters, 2009
François Fleuret

We extend the standard boosting procedure to train a two-layer classifier dedicated to handwritten character recognition. The scheme we propose relies on a hidden layer which extracts feature vectors on a fixed number of points of interest, and an output layer which combines those feature vectors and the point of interest locations into a final classification decision. Our main contribution is ...

1991
Lai-Wan Chan

The internal representation of the training patterns of multi-layer perceptrons was examined, and we demonstrated that the connection weights between layers effectively transform the representation format of the information from one layer to another in a meaningful way. The internal code, which can be in analog or binary form, is found to be dependent on a number of factors, including ...

2005
Mahmut Sinecen Metehan Makinaci

The purpose of this paper is to assess the value of neural networks for the classification of cancer and noncancer prostate cells. Gauss Markov Random Field, Fourier entropy, and wavelet average deviation features are calculated from 80 noncancer and 80 cancer prostate cell nuclei. For classification, artificial neural network techniques which are multilayer perceptron, radial basis function and learni...

2017
Namhoon Lee Wongun Choi Paul Vernaza Christopher B. Choy Philip H. S. Torr Manmohan Chandraker

RNN Encoder 1 is responsible for encoding the past motion of the individual agent. The encoder is implemented with a temporal convolution layer (a.k.a. 1D convolution) followed by an RNN with GRU cells [1]. Before feeding the past trajectory (Xi) to the temporal convolution layer, we subtract the last state value (present location) from xi,t at all time steps for a translation invarianc...
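The preprocessing step described in the abstract above can be sketched in a few lines: subtracting the present location (the last time step) from every past state makes the encoder's input depend only on relative motion, not absolute position. This is an illustrative sketch, not the paper's code; the array name `past_traj` and the toy values are assumptions.

```python
import numpy as np

# Toy past trajectory of one agent: (T, 2) array of (x, y) positions.
past_traj = np.array([[1.0, 2.0],
                      [2.0, 2.5],
                      [3.0, 3.0]])

# Translation invariance: subtract the last state (present location) from
# every time step, so the current position becomes the origin.
relative = past_traj - past_traj[-1]
print(relative[-1])  # → [0. 0.]
```

Shifting the whole trajectory by the same offset now leaves `relative` unchanged, which is the invariance the encoder relies on.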

Journal: Journal of Chemical and Petroleum Engineering, 2014
Aliakbar Heydari Fazel Dolati Mojtaba Ahmadi Yasser Vasseghian

In this study, the activated sludge process for wastewater treatment in a refinery was investigated. For this purpose, a laboratory-scale rig was built. The effects of several parameters, such as temperature, residence time, LECA (filling-in percentage of the reactor by LECA), and UV radiation, on COD removal efficiency were experimentally examined. Maximum COD removal efficiency was obtained...

2011
Virendra P. Vishwakarma M. N. Gupta

For high-dimensional pattern recognition problems, the learning speed of gradient-based training algorithms (back-propagation) is generally very slow. Local minima, an improper learning rate, and over-fitting are some of the other issues. The extreme learning machine was proposed as a non-iterative learning algorithm for the single-hidden-layer feedforward neural network (SLFN) to overcome these issues. ...
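The non-iterative training the abstract above refers to can be sketched compactly: in an extreme learning machine the hidden-layer weights are drawn at random and frozen, and only the output weights are obtained, in closed form, from a single least-squares solve. This is a minimal sketch assuming a sigmoid hidden layer and a toy regression target; all names and sizes are illustrative, not from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    # Random, untrained hidden-layer weights and biases (the ELM idea).
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))  # hidden activations (sigmoid)
    # Output weights from one least-squares solve -- no iterative descent.
    beta = np.linalg.pinv(H) @ Y
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy regression problem.
X = rng.normal(size=(200, 3))
Y = (X[:, 0] ** 2 + X[:, 1]).reshape(-1, 1)
W, b, beta = elm_fit(X, Y)
pred = elm_predict(X, W, b, beta)
mse = float(np.mean((pred - Y) ** 2))
```

Because the only "training" is a pseudoinverse, fitting is a single linear-algebra call, which is why ELMs avoid the slow-convergence and learning-rate issues of back-propagation.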

2017
Alexandre Salle Aline Villavicencio

Increasing the capacity of recurrent neural networks (RNN) usually involves augmenting the size of the hidden layer, resulting in a significant increase of computational cost. An alternative is the recurrent neural tensor network (RNTN), which increases capacity by employing distinct hidden layer weights for each vocabulary word. However, memory usage scales linearly with vocabulary size, which...
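The memory trade-off named in the abstract above is easy to see with a back-of-the-envelope count: a plain RNN shares one hidden-to-hidden matrix across all inputs, while an RNTN keeps a distinct matrix per vocabulary word, so its recurrent parameter count scales linearly with vocabulary size. The sizes below are made up purely for illustration.

```python
# Illustrative parameter count, not taken from the paper.
vocab_size = 10_000
hidden = 256

rnn_params = hidden * hidden                # one shared recurrence matrix
rntn_params = vocab_size * hidden * hidden  # one matrix per vocabulary word

ratio = rntn_params // rnn_params
print(ratio)  # → 10000, i.e. linear blow-up with vocabulary size
```

This is the memory wall the abstract points at: at realistic vocabularies the per-word matrices dominate, motivating the compression the paper goes on to propose.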
