Search results for: forward and feed

Number of results: 16,842,896

Journal: Pattern Recognition Letters, 1997
Aarnoud Hoekstra, Robert P. W. Duin

In this article we will focus on how we can investigate (read: visualise) the clustering behaviour of neurons during training. This clustering property has already been investigated before, by Annema, Vogtländer and Schmidt. However, we will present a different approach to visualisation, illustrated by experiments performed on two-class problems. © 1997 Elsevier Science B.V.
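
As a rough illustration of the kind of visualisation the abstract describes (the sketch below is my own, not the authors' code), one can train a small feed-forward network with two hidden neurons on a two-class toy problem and snapshot the hidden activations during training; scatter-plotting each snapshot, coloured by class, shows the two classes clustering in hidden-unit space as training proceeds.

    # Sketch only: a tiny 2-hidden-unit network trained by gradient descent on
    # synthetic two-class data; the per-epoch hidden activations are what one
    # would plot to watch the clustering behaviour emerge.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(+1, 0.5, (100, 2))])
    y = np.repeat([0.0, 1.0], 100).reshape(-1, 1)

    W1 = rng.normal(0, 0.5, (2, 2)); b1 = np.zeros(2)  # 2 hidden units -> directly plottable
    W2 = rng.normal(0, 0.5, (2, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    snapshots = []
    for epoch in range(2000):
        H = sigmoid(X @ W1 + b1)              # hidden activations
        p = sigmoid(H @ W2 + b2)              # network output
        g = (p - y) / len(X)                  # cross-entropy gradient at the output
        gH = (g @ W2.T) * H * (1.0 - H)       # backpropagated to the hidden layer
        W2 -= 0.5 * H.T @ g;  b2 -= 0.5 * g.sum(0)
        W1 -= 0.5 * X.T @ gH; b1 -= 0.5 * gH.sum(0)
        if epoch % 500 == 0:
            snapshots.append(H.copy())        # cluster structure at this stage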

1990
Marco Budinich, Edoardo Milotti

The convex hull of any subset of vertices of an n-dimensional hypercube contains no other vertex of the hypercube. This result permits the application of some theorems of n-dimensional geometry to digital feed-forward neural networks. Also, the construction of the convex hull is proposed as an alternative to more traditional learning algorithms. Some preliminary simulation results are reported.
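
The stated property is easy to check computationally for small n. The sketch below (my own, assuming numpy and scipy are available) tests every vertex subset of the 3-cube: a vertex v lies in the convex hull of a subset S exactly when the linear program "find λ ≥ 0 with Σλ = 1 and Sᵀλ = v" is feasible.

    # Verify for n = 3 that the hull of any vertex subset contains no other vertex.
    import itertools
    import numpy as np
    from scipy.optimize import linprog

    verts = np.array(list(itertools.product([0, 1], repeat=3)), dtype=float)

    def in_hull(v, S):
        k = len(S)
        A_eq = np.vstack([S.T, np.ones(k)])   # S^T lambda = v  and  sum(lambda) = 1
        b_eq = np.append(v, 1.0)
        res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
        return res.status == 0                # feasible  =>  v in conv(S)

    for r in range(1, len(verts) + 1):
        for idx in itertools.combinations(range(len(verts)), r):
            S = verts[list(idx)]
            others = [v for i, v in enumerate(verts) if i not in idx]
            assert not any(in_hull(v, S) for v in others)
    print("no vertex of the 3-cube lies in the hull of any subset of the others")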

Journal: CoRR, 2015
Han Xiao, Xiaoyan Zhu

The outputs of a non-linear feed-forward neural network are positive, so they can be treated as probabilities once normalized to sum to one. If we take the Entropy-Based Principle into consideration, the outputs for each sample can be represented as the distribution of this sample over different clusters. The Entropy-Based Principle is the principle with which we can estimate the unknown distribution ...
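
A minimal sketch of the normalization step just described (my own illustration, not the paper's code): positive outputs divided by their sum give a per-sample distribution over clusters, and the entropy of that distribution measures how ambiguous the assignment is.

    import numpy as np

    def cluster_distribution(outputs):
        """outputs: (n_samples, n_clusters) array of positive network outputs."""
        return outputs / outputs.sum(axis=1, keepdims=True)

    def entropy(p, eps=1e-12):
        return -(p * np.log(p + eps)).sum(axis=1)

    raw = np.array([[0.90, 0.05, 0.05],   # confident assignment -> low entropy
                    [0.40, 0.35, 0.25]])  # ambiguous assignment -> high entropy
    print(entropy(cluster_distribution(raw)))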

2002
F.-J. Decker, A. Fisher, L. Hendrickson, K. E. Krauter, B. Murphy, S. Weathersby, U. Wienands

The PEP-II B-Factory achieved design performance in 2000. The tune shifts of the rings are already about twice the design value of 0.03. This requires constant adjustments from the operators during fills and top-offs. A tune feedback was envisioned first, but the wide, multi-peaked tune signals make it tricky even for a human to adjust the tunes correctly. Since the tunes are strongly correlated...

2012
Grégoire Montavon, Mikio L. Braun, Klaus-Robert Müller

The deep Boltzmann machine is a powerful model that extracts the hierarchical structure of observed data. While inference is typically slow due to its undirected nature, we argue that the emerging feature hierarchy is still explicit enough to be traversed in a feedforward fashion. The claim is corroborated by training a set of deep neural networks on real data and measuring the evolution of the...
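
Read feed-forwardly, such a traversal amounts to propagating mean activations bottom-up through the learned weight matrices instead of running full undirected inference. The sketch below is my own simplification of that idea (a single sigmoid pass per layer, ignoring the layer coupling a true DBM mean-field update would use):

    import numpy as np

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    def feedforward_traversal(v, weights, biases):
        """v: visible vector; weights/biases: one (W, b) pair per DBM layer."""
        h, features = v, []
        for W, b in zip(weights, biases):
            h = sigmoid(h @ W + b)   # mean activation of the next hidden layer
            features.append(h)       # one representation per level of the hierarchy
        return features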

Journal: CoRR, 2015
Han Xiao, Xiaoyan Zhu

The Margin-Based Principle was proposed long ago, and it has been proved that this principle can reduce the structural risk and improve performance in both theoretical and practical respects. Meanwhile, the feed-forward neural network is a traditional classifier that is currently very popular in its deeper architectures. However, the training algorithm of the feed-forward neural network is devel...
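
In its simplest form, the margin-based principle the abstract refers to penalises samples whose functional margin falls below one, as in the hinge loss max(0, 1 − y·f(x)) with labels y ∈ {−1, +1}. The sketch below is my own (a linear scorer stands in for the network, since the paper's algorithm is truncated above); it computes the loss and a subgradient:

    import numpy as np

    def hinge_loss_and_grad(w, X, y):
        """X: (n, d) inputs; y: (n,) labels in {-1, +1}; w: (d,) weights."""
        margins = y * (X @ w)                        # functional margins
        active = margins < 1                         # margin violators
        loss = np.maximum(0.0, 1.0 - margins).mean()
        grad = -(y[active, None] * X[active]).sum(0) / len(X)
        return loss, grad

    # one subgradient step:  w -= lr * grad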

2011
Sidney P. Kuo, Laurence O. Trussell

Inhibitory interneurons across diverse brain regions commonly exhibit spontaneous spiking activity, even in the absence of external stimuli. It is not well understood how stimulus-evoked inhibition can be distinguished from background inhibition arising from spontaneous firing. We found that noradrenaline simultaneously reduced spontaneous inhibitory inputs and enhanced evoked inhibitory curren...

2012
Bernard M. C. Stienen, Konrad Schindler, Beatrice de Gelder

Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contribution of feed-forward and feedback processing to the recognition of visual stimuli, in this case, of emotional body expressions. The aim of the present work is to shed light on how well feed-forward processing explains rapid categorization of this important class of stimuli. By means of par...

Journal: CoRR, 2015
K. Eswaran, Vishwajeet Singh

This paper introduces a new method which employs the concept of “Orientation Vectors” to train a feed-forward neural network. It is shown that this method is suitable for problems where large dimensions are involved and the clusters are characteristically sparse. For such cases, the new method does not become NP-hard as the problem size increases. We ‘derive’ the present technique by starting from Kolmo...

2017
Alexander Dekhtyar

Perceptrons. A perceptron is a linear classifier of the form $y = \operatorname{sign}\left(\sum_{i=1}^{d} w_i x_i + b\right)$, where the weights $w = (w_1, \ldots, w_d)$ are trained using stochastic gradient descent. A perceptron is guaranteed to converge to some hyperplane separating two classes if the two classes are linearly separable (i.e., if there exists at least one hyperplane such that all points from Class 1 are on one side of it a...
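
A minimal sketch of the training rule just described (the toy data is mine): cycle through the samples and update the weights on each mistake; when the classes are linearly separable, the loop terminates with a separating hyperplane.

    import numpy as np

    def train_perceptron(X, y, lr=1.0, epochs=100):
        """X: (n, d) inputs; y: (n,) labels in {-1, +1}."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            mistakes = 0
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                    w += lr * yi * xi
                    b += lr * yi
                    mistakes += 1
            if mistakes == 0:                # separating hyperplane found
                break
        return w, b

    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, 0.5]])
    y = np.array([1, 1, -1, -1])
    w, b = train_perceptron(X, y)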

[Chart: number of search results per publication year]