Search results for: hidden layer

Number of results: 345063

2017
Justin Johnson Bharath Hariharan Laurens van der Maaten Judy Hoffman Li Fei-Fei C. Lawrence Zitnick Ross Girshick

In all experiments our program generator is an LSTM sequence-to-sequence model [9]. It comprises two learned recurrent neural networks: the encoder receives the natural-language question as a sequence of words, and summarizes the question as a fixed-length vector; the decoder receives this fixed-length vector as input and produces the predicted program as a sequence of functions. The encoder and...
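The key property of the encoder described in this snippet is that it maps a variable-length word sequence to one fixed-length vector. A minimal sketch of that idea, with a plain tanh RNN standing in for the LSTM and all sizes, names, and weights purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 20, 8  # illustrative sizes, not from the paper

# Hypothetical parameters for a toy encoder.
E = rng.normal(scale=0.1, size=(VOCAB, HIDDEN))   # word embeddings
U = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))  # input-to-hidden weights
W = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))  # recurrent weights

def encode(token_ids):
    """Summarize a variable-length question as one fixed-length vector."""
    h = np.zeros(HIDDEN)
    for t in token_ids:
        h = np.tanh(E[t] @ U + h @ W)
    return h  # same dimensionality regardless of sequence length

short = encode([3, 7])
long_ = encode([3, 7, 1, 4, 9, 2])
print(short.shape, long_.shape)  # both (8,)
```

A decoder would then condition on this summary vector to emit the program one function token at a time.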

2017
Afshin Rahimi Trevor Cohn Timothy Baldwin

We propose a simple yet effective text-based user geolocation model based on a neural network with one hidden layer, which achieves state-of-the-art performance over three Twitter benchmark geolocation datasets, in addition to producing word and phrase embeddings in the hidden layer that we show to be useful for detecting dialectal terms. As part of our analysis of dialectal terms, we release DA...
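The architecture sketched in this abstract is a single-hidden-layer classifier whose hidden activations double as text embeddings. A rough forward-pass sketch, assuming a bag-of-words input and a softmax over regions (all sizes and weights hypothetical, untrained):

```python
import numpy as np

rng = np.random.default_rng(1)
VOCAB, HIDDEN, REGIONS = 1000, 16, 3  # illustrative sizes

# Hypothetical weights: bag-of-words -> hidden layer -> softmax over regions.
W1 = rng.normal(scale=0.05, size=(VOCAB, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.05, size=(HIDDEN, REGIONS))
b2 = np.zeros(REGIONS)

def hidden_embedding(bow):
    """The hidden layer representation, reusable as a text embedding."""
    return np.tanh(bow @ W1 + b1)

def predict_region(bow):
    h = hidden_embedding(bow)
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())  # numerically stable softmax
    return p / p.sum()

bow = np.zeros(VOCAB)
bow[[5, 42, 7]] = 1.0       # a toy "tweet" as word counts
probs = predict_region(bow)
print(probs.sum())          # probabilities sum to 1
```

Feeding a one-hot vector for a single word through `hidden_embedding` yields that word's row of `W1` (squashed by tanh), which is why the hidden layer can serve as a word-embedding table.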

A three-layer artificial neural network (ANN) model was developed to predict the remaining DO (dissolved oxygen) in water after DO removal with an enzymatic granular biocatalyst (GB), based on the experimental data obtained in a laboratory stirring batch study. The effects of operational parameters such as initial pH, initial glucose concentration and temperature on DO removal were investigated. On...

2010
Amit Choudhary Savita Ahlawat Rahul Rishi

The purpose of this work is to analyze the performance of the back-propagation feed-forward algorithm using various activation functions for the neurons of the hidden and output layers and varying the number of neurons in the hidden layer. For sample creation, 250 numerals were gathered from 35 people of different ages, including male and female. After binarization, these numerals were clubbed ...

1991
Keisuke Kameyama Yukio Kosugi

A three-layered neural network that optimally self-adjusts the number of hidden layer units is proposed. The network combines two techniques: 1) Unit fusion, which enables an efficient pruning of the redundant units. 2) Linear transformations applied to the chosen hidden layer unit pair output and a modified back-propagation training rule for gradual fusion to reduce pruning shocks. The netwo...

Journal: :Neural computation 2016
Namig J. Guliyev Vugar E. Ismailov

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this note, we consider constructive appr...
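The approximation property this note builds on can be demonstrated numerically. The sketch below fits a single hidden layer of sigmoidal units to a continuous function on [0, 1]; as an illustrative shortcut (not the constructive scheme of the paper), the hidden weights are fixed at random and only the output weights are solved by least squares:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Approximate f(x) = sin(2*pi*x) on [0, 1] with 50 sigmoidal hidden units.
N_HIDDEN = 50                                # illustrative; more units -> lower error
w = rng.normal(scale=10.0, size=N_HIDDEN)    # random input weights
b = rng.uniform(-10.0, 10.0, size=N_HIDDEN)  # random biases

x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x)

Phi = sigmoid(np.outer(x, w) + b)            # hidden-layer activations
c, *_ = np.linalg.lstsq(Phi, target, rcond=None)  # output weights
approx = Phi @ c

max_err = np.max(np.abs(approx - target))
print(max_err)  # decreases as N_HIDDEN grows
```

This illustrates the unconstrained case the note contrasts itself with: with enough freely chosen neurons the error can be driven arbitrarily low, whereas the paper's contribution is a constructive approximation under restrictions on the number of neurons.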

Journal: :IEEE transactions on neural networks 1993
Amir F. Atiya Yaser S. Abu-Mostafa

A method for the storage of analog vectors, i.e., vectors whose components are real-valued, is developed for the Hopfield continuous-time network. An important requirement is that each memory vector has to be an asymptotically stable (i.e. attractive) equilibrium of the network. Some of the limitations imposed by the continuous Hopfield model on the set of vectors that can be stored are pointed...

Journal: :CoRR 2016
Jacob S. Hunter Nathan O. Hodas

Deep nonlinear models pose a challenge for fitting parameters due to lack of knowledge of the hidden layer and the potentially non-affine relation of the initial and observed layers. In the present work we investigate the use of information theoretic measures such as mutual information and Kullback-Leibler (KL) divergence as objective functions for fitting such models without knowledge of the h...

2017
Milena Rabovsky Steven Stenberg Hansen James L. McClelland

Why do neural responses decrease with practice? We used a predictive neural network model of sentence processing (St. John & McClelland, 1990) to simulate neural responses during language understanding, and examined the model's correlate of neural responses (specifically, the N400 component), measured as stimulus-induced change in hidden layer activation, across training. N400 magnitude first in...

2015
Senjian An Farid Boussaïd Mohammed Bennamoun

This paper investigates how hidden layers of deep rectifier networks are capable of transforming two or more pattern sets to be linearly separable while preserving the distances with a guaranteed degree, and proves the universal classification power of such distance preserving rectifier networks. Through the nearly isometric nonlinear transformation in the hidden layers, the margin of the linea...

[Chart: number of search results per year; click to filter results by publication year]