Search results for: recurrent input

Number of results: 345825

2009
Gabriella Nyitrai László Héja Julianna Kardos

Here we address how dynamics of glutamatergic and GABAergic synaptic input to CA3 pyramidal cells contribute to spontaneous emergence and evolution of recurrent seizure-like events (SLEs) in juvenile (P10-13) rat hippocampal slices bathed in low-[Mg] artificial cerebrospinal fluid. In field potential recordings from CA3 pyramidal layer a short epoch of high frequency oscillation ...

1996
Ingrid Kirschning Hideto Tomabechi Jun-Ichi Aoe

We developed a method called Time-Slicing [1] for the analysis of the speech signal. It enables a neural network to recognize connected speech as it arrives, without having to fit the input signal into a fixed time format or label and segment it phoneme by phoneme. The neural network produces an immediate hypothesis of the recognized phoneme, and it is small enough to run even on a PC. To i...

2016
Pushpendre Rastogi Ryan Cotterell Jason Eisner

How should one apply deep learning to tasks such as morphological reinflection, which stochastically edit one string to get another? A recent approach to such sequence-to-sequence tasks is to compress the input string into a vector that is then used to generate the output string, using recurrent neural networks. In contrast, we propose to keep the traditional architecture, which uses a finite-s...
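The encoder-decoder approach this abstract contrasts against can be made concrete with a minimal sketch: an Elman-style RNN reads an input string one byte at a time and compresses it into a single fixed-size hidden vector. The sizes, random initialization, and byte-level vocabulary below are illustrative assumptions, not the model from the paper:

```python
import numpy as np

def encode_string(s, hidden_size=8, seed=0):
    """Compress a string into one fixed-size vector with a tiny
    (untrained, randomly initialized) Elman-style RNN encoder."""
    rng = np.random.default_rng(seed)
    vocab = 256  # byte-level one-hot input
    W_xh = rng.normal(scale=0.1, size=(hidden_size, vocab))
    W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
    h = np.zeros(hidden_size)
    for ch in s.encode("utf-8"):
        x = np.zeros(vocab)
        x[ch] = 1.0
        h = np.tanh(W_xh @ x + W_hh @ h)  # recurrent state update
    return h  # final hidden state: the fixed-size summary

vec = encode_string("reinflection")
print(vec.shape)  # (8,)
```

A decoder RNN would then condition on `vec` to emit the output string; the paper argues for keeping a finite-state architecture instead of relying on this single bottleneck vector.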

1994
Barry L. Kalman Stan C. Kwasny

TRAINREC is a system for training feedforward and recurrent neural networks that incorporates several ideas. It uses the conjugate-gradient method, which is demonstrably more efficient than traditional backward error propagation. We assume epoch-based training and derive a new error function having several desirable properties absent from the traditional sum-of-squares error function. We argue f...
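The efficiency claim for conjugate gradient is easiest to see on a quadratic cost, where linear CG converges in at most n steps while plain gradient descent can take many more. A minimal sketch on a toy 2x2 problem (my own illustration, not TRAINREC's implementation):

```python
import numpy as np

def cg_quadratic(A, b, x0, tol=1e-12):
    """Linear conjugate gradient for f(x) = 0.5 x'Ax - b'x, A symmetric
    positive definite. Search directions are A-conjugate, so exact
    arithmetic converges in at most len(b) iterations."""
    x = x0.astype(float)
    r = b - A @ x          # residual = negative gradient
    d = r.copy()
    for _ in range(len(b)):
        if r @ r < tol:
            break
        alpha = (r @ r) / (d @ A @ d)      # exact line search along d
        x = x + alpha * d
        r_new = r - alpha * (A @ d)
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves coefficient
        d = r_new + beta * d               # new A-conjugate direction
        r = r_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = cg_quadratic(A, b, np.zeros(2))  # x solves Ax = b
```

Network cost functions are not quadratic, so nonlinear CG with a line search is used in practice, but the conjugate-direction idea is the same.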

2015
Christian Rössert Paul Dean John Porrill

Models of the cerebellar microcircuit often assume that input signals from the mossy-fibers are expanded and recoded to provide a foundation from which the Purkinje cells can synthesize output filters to implement specific input-signal transformations. Details of this process are however unclear. While previous work has shown that recurrent granule cell inhibition could in principle generate a ...

2016
Sumit Chopra Michael Auli Alexander M. Rush

Abstractive Sentence Summarization generates a shorter version of a given sentence while attempting to preserve its meaning. We introduce a conditional recurrent neural network (RNN) which generates a summary of an input sentence. The conditioning is provided by a novel convolutional attention-based encoder which ensures that the decoder focuses on the appropriate input words at each step of ge...

Journal: Intell. Data Anal., 1998
Sylvian R. Ray William H. Hsu

We investigate a form of modular neural network for classification with (a) pre-separated input vectors entering its specialist (expert) networks, (b) specialist networks which are self-organized (radial-basis function or self-targeted feedforward type) and (c) which fuses (or integrates) the specialists with a single-layer net. When the modular architecture is applied to spatiotemporal sequence...

1994
Morten With Pedersen Lars Kai Hansen

Second order properties of cost functions for recurrent networks are investigated. We analyze a layered fully recurrent architecture; the virtue of this architecture is that it features the conventional feedforward architecture as a special case. A detailed description of recursive computation of the full Hessian of the network cost function is provided. We discuss the possibility of invoking s...
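The object being computed here, the full Hessian of a network cost, can be checked numerically. Below is a finite-difference stand-in for the recursive analytic computation the paper describes, on a hypothetical one-hidden-unit network with a sum-of-squares cost (both the network and the data are my own toy choices):

```python
import numpy as np

def cost(w, X, y):
    """Sum-of-squares cost of a tiny one-parameter-per-layer network."""
    w1, w2 = w
    pred = w2 * np.tanh(w1 * X)
    return 0.5 * np.sum((pred - y) ** 2)

def hessian_fd(f, w, eps=1e-4):
    """Full Hessian via central finite differences of f.
    O(n^2) evaluations; a numerical check, not an efficient method."""
    n = len(w)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            w_pp = w.copy(); w_pp[i] += eps; w_pp[j] += eps
            w_pm = w.copy(); w_pm[i] += eps; w_pm[j] -= eps
            w_mp = w.copy(); w_mp[i] -= eps; w_mp[j] += eps
            w_mm = w.copy(); w_mm[i] -= eps; w_mm[j] -= eps
            H[i, j] = (f(w_pp) - f(w_pm) - f(w_mp) + f(w_mm)) / (4 * eps**2)
    return H

X = np.array([0.5, -1.0, 2.0])
y = np.array([0.2, -0.4, 0.9])
w = np.array([0.3, 0.7])
H = hessian_fd(lambda v: cost(v, X, y), w)
# H is 2x2 and (up to finite-difference noise) symmetric
```

An analytic recursive computation, as in the paper, gives the same matrix at far lower cost for large networks; finite differences like these are mainly useful as a correctness check.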

2014
Benjamin Dummer Stefan Wieland Benjamin Lindner

A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells is in most cases far from ...
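The Poissonian approximation mentioned here is easy to state concretely: a homogeneous Poisson spike train has independent, exponentially distributed inter-spike intervals. A minimal generator (rate, duration, and seed are arbitrary illustrative choices):

```python
import numpy as np

def poisson_spike_train(rate_hz, duration_s, rng):
    """Homogeneous Poisson spike train: draw exponential inter-spike
    intervals with mean 1/rate until the duration is exceeded."""
    times = []
    t = rng.exponential(1.0 / rate_hz)
    while t < duration_s:
        times.append(t)
        t += rng.exponential(1.0 / rate_hz)
    return np.array(times)

rng = np.random.default_rng(42)
spikes = poisson_spike_train(rate_hz=10.0, duration_s=100.0, rng=rng)
rate = len(spikes) / 100.0  # empirical rate, close to 10 Hz
```

The paper's point is that real network output deviates from this idealization, so feeding such trains back as input misrepresents the recurrent statistics.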

Journal: IEEE Trans. Fuzzy Systems, 2002
Chia-Feng Juang

In this paper, a TSK-type recurrent fuzzy network (TRFN) structure is proposed. The proposal calls for a design of TRFN by either neural network or genetic algorithms depending on the learning environment. Set forth first is a recurrent fuzzy network which develops from a series of recurrent fuzzy if–then rules with TSK-type consequent parts. The recurrent property comes from feeding the intern...
