Search results for: recurrent input

Number of results: 345,825

Thesis: 1374 (Solar Hijri, ca. 1995)

This experimental study has been conducted to test the effect of oral presentation on the development of L2 learners' grammar. This oral presentation is not merely a deductive instruction of grammatical points; in this presentation, two of Krashen's hypotheses (the Input and Low Affective Filter Hypotheses), Stevick's viewpoints on grammar explanation and correction, and Widdowson's opinion on the limited use of L1...

1995
Salah El Hihi Yoshua Bengio

We have already shown that extracting long-term dependencies from sequential data is difficult, both for deterministic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input/output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain-specific a priori knowledge to give meaning to the hidden or...
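A minimal NumPy sketch (illustrative, not from the paper above) of why long-term dependencies are hard for recurrent networks: the gradient of the final hidden state with respect to earlier states is a product of per-step Jacobians, whose norm shrinks roughly geometrically. The hidden size, horizon, and weight scale are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
H, T = 32, 50                                         # hidden size, sequence length (illustrative)
W = rng.normal(scale=0.5 / np.sqrt(H), size=(H, H))   # recurrent weights, spectral radius ~0.5

h = np.zeros(H)
jacobians = []
for t in range(T):
    pre = W @ h + rng.normal(size=H)                  # random input drive
    h = np.tanh(pre)
    # Jacobian dh_t/dh_{t-1} = diag(1 - tanh^2(pre)) @ W
    jacobians.append(np.diag(1.0 - h**2) @ W)

# The norm of dh_T/dh_{T-lag} decays roughly geometrically with the lag,
# so inputs far in the past barely influence the gradient at the end.
J = np.eye(H)
for lag, Jt in enumerate(reversed(jacobians), start=1):
    J = J @ Jt
    if lag % 10 == 0:
        print(f"lag {lag:3d}: ||dh_T/dh_(T-lag)|| ~ {np.linalg.norm(J):.2e}")
```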

2016
Anton Ragni Edgar Dakin Xie Chen Mark J. F. Gales Kate Knill

In recent years there has been considerable interest in neural network based language models. These models typically consist of vocabulary-dependent input and output layers and one or more hidden layers. A standard problem with these networks is that large quantities of training data are needed to robustly estimate the model parameters. This poses a challenge when only limited data is available...
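A hedged PyTorch sketch (not the paper's model; all sizes are illustrative) showing why the vocabulary-dependent input and output layers dominate the parameter count, which is what makes robust estimation data-hungry:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Minimal feed-forward LM: vocab-dependent embedding and softmax layers."""
    def __init__(self, vocab_size=50_000, embed_dim=128, hidden_dim=256, context=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)   # vocab-dependent input layer
        self.hidden = nn.Sequential(
            nn.Linear(context * embed_dim, hidden_dim), nn.Tanh()
        )
        self.out = nn.Linear(hidden_dim, vocab_size)       # vocab-dependent output layer

    def forward(self, ctx_ids):                            # ctx_ids: (batch, context)
        e = self.embed(ctx_ids).flatten(1)
        return self.out(self.hidden(e))                    # unnormalized next-word scores

model = TinyLM()
vocab_params = sum(p.numel() for p in model.embed.parameters()) \
             + sum(p.numel() for p in model.out.parameters())
total_params = sum(p.numel() for p in model.parameters())
print(f"vocabulary-dependent share of parameters: {vocab_params / total_params:.1%}")
```

With these (assumed) sizes, well over 99% of the parameters sit in the two vocabulary-dependent layers, so shrinking or tying them is the natural lever when training data is limited.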

2013
Moritz Augustin Josef Ladenbauer Klaus Obermayer

Neural mass signals from in vivo recordings often show oscillations with frequencies ranging from <1 to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds....

Journal: Simulation, 2002
Si Jong Choi Tag Gon Kim

The authors consider identifying an unknown discrete event system (DES) as recognition of characteristic functions of a discrete event system specification (DEVS) model that validly represents the system. Such identification consists of two major steps: behavior learning using a specially designed neural network and extraction of a DEVS model from the learned neural network. This paper presents...

2010
Christian W. Rempis Frank Pasemann

Evolving recurrent neural networks for behavior control of robots equipped with larger sets of sensors and actuators is difficult due to the large search spaces that come with the larger number of input and output neurons. We propose constrained modularization as a novel technique to reduce the search space for such evolutions. Appropriate neural networks are divided manually into logically and...

1993
Yoshua Bengio Paolo Frasconi

Learning to recognize or predict sequences using long-term context has many applications. However, practical and theoretical problems are found in training recurrent neural networks to perform tasks in which input/output dependencies span long intervals. Starting from a mathematical analysis of the problem, we consider and compare alternative algorithms and architectures on tasks for which the ...
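A minimal sketch (illustrative, not the paper's exact benchmark) of the kind of task where input/output dependencies span long intervals: the target class is encoded only at the first time step, and the rest of the sequence is noise, so credit assignment must bridge the full interval T.

```python
import numpy as np

def make_long_lag_batch(batch=64, T=100, noise=0.2, seed=1):
    """Binary labels encoded at t=0; the remaining T-1 steps carry only noise."""
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=batch)                  # targets, read out at t = T
    x = noise * rng.normal(size=(batch, T, 1))
    x[:, 0, 0] = np.where(y == 1, 1.0, -1.0)            # the only informative step
    return x.astype(np.float32), y

x, y = make_long_lag_batch()
print(x.shape, y[:8])   # gradients from the loss at t=T must propagate back to t=0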

1999
Jochen J. Steil Helge Ritter

We analyse the stability of the input-output behaviour of a recurrent network. It is trained to implement an operator implicitly given by the chaotic dynamics of the Roessler attractor. Two of the attractor's coordinate functions are used as network input and the third defines the reference output. Using recently developed methods we show that the trained network is input-output stable and c...
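A hedged sketch of the data setup described above (the parameters a, b, c and the Euler step are common defaults, not taken from the paper): integrate the Roessler system, then use two coordinates as network input and the third as the reference output.

```python
import numpy as np

def roessler(n_steps=20_000, dt=0.01, a=0.2, b=0.2, c=5.7):
    """Forward-Euler integration of the Roessler system (standard parameters)."""
    xyz = np.empty((n_steps, 3))
    x, y, z = 0.1, 0.0, 0.0
    for i in range(n_steps):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xyz[i] = (x, y, z)
    return xyz

traj = roessler()
inputs, target = traj[:, :2], traj[:, 2]   # (x, y) -> network input, z -> reference output
print(inputs.shape, target.shape)
```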

Journal: Behavioural Brain Research, 1997
M E Hasselmo B P Wyble

Free recall and recognition are simulated in a network model of the hippocampal formation, incorporating simplified simulations of neurons, synaptic connections, and the effects of acetylcholine. Simulations focus on modeling the effects of the acetylcholine receptor blocker scopolamine on human memory. Systemic administration of scopolamine is modeled by blockade of the cellular effects of ace...

Journal: CoRR, 2017
Yoav Levine Or Sharir Amnon Shashua

The key attribute that drives the unprecedented success of modern Recurrent Neural Networks (RNNs) on learning tasks involving sequential data is their ever-improving ability to model intricate long-term temporal dependencies. However, an adequate measure of RNNs' long-term memory capacity is lacking, and thus formal understanding of their ability to correlate data throughout time is limited...
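One existing quantitative handle on recurrent memory (not the measure proposed in the paper above) is Jaeger's echo-state "memory capacity": how well a linear readout of the recurrent state can reconstruct inputs from k steps ago. A hedged NumPy sketch, with all sizes assumed:

```python
import numpy as np

rng = np.random.default_rng(2)
H, T = 100, 5000
W = rng.normal(size=(H, H))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # rescale to spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, size=H)

u = rng.uniform(-1, 1, size=T)                    # white-noise input stream
states = np.zeros((T, H))
h = np.zeros(H)
for t in range(T):
    h = np.tanh(W @ h + w_in * u[t])
    states[t] = h

warm = 100                                        # discard transient, >= largest lag
for k in (1, 5, 10, 20, 40):
    X, y = states[warm:], u[warm - k:T - k]       # reconstruct u[t-k] from state at t
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = np.corrcoef(X @ w, y)[0, 1]
    print(f"lag {k:2d}: squared correlation {r**2:.3f}")
```

The squared correlation decays with the lag k; summing it over all lags gives the network's total memory capacity.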

[Chart: number of search results per year]
