Search results for: hidden layer

Number of results: 345,063

Journal: Connect. Sci., 2007
Frédéric Dandurand, V. Berthiaume, Thomas R. Shultz

Cascade-correlation (cascor) networks grow by recruiting hidden units to adjust their computational power to the task being learned. The standard cascor algorithm recruits each hidden unit on a new layer, creating deep networks. In contrast, the flat cascor variant adds all recruited hidden units on a single hidden layer. Student-teacher network approximation tasks were used to investigate the ...
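
The recruitment loop this abstract describes is easy to sketch. Below is a minimal flat-cascor-style sketch in Python/NumPy, under simplifying assumptions: one output, tanh hidden units, and candidates drawn from a random pool and screened by correlation with the residual, rather than trained by cascor's correlation maximization.

```python
# Minimal sketch of flat cascade-correlation-style recruitment (assumptions:
# one output, tanh hidden units, candidate input weights chosen at random
# rather than trained by correlation maximization as in full cascor).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # training inputs
y = np.sin(X[:, 0]) * np.cos(X[:, 1])          # target function

H = np.ones((len(X), 1))                       # bias column; hidden outputs appended here
for _ in range(10):                            # recruit up to 10 hidden units
    w, *_ = np.linalg.lstsq(H, y, rcond=None)  # fit output weights by least squares
    residual = y - H @ w
    # candidate pool: keep the random unit most correlated with the residual
    cands = rng.normal(size=(2, 25))
    acts = np.tanh(X @ cands)
    best = np.argmax(np.abs((acts - acts.mean(0)).T @ (residual - residual.mean())))
    H = np.column_stack([H, acts[:, best]])    # flat variant: same hidden layer

w, *_ = np.linalg.lstsq(H, y, rcond=None)
print("final RMSE:", np.sqrt(np.mean((y - H @ w) ** 2)))
```

A deep-cascor variant would instead feed every previously recruited unit into each new candidate, producing the layered networks the abstract contrasts with the flat variant.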

2017
WenBo Xiao, Gina Nazario, HuaMing Wu, HuaMing Zhang, Feng Cheng

In this article, we introduce an artificial neural network (ANN) based computational model to predict the output power of three types of photovoltaic cells: mono-crystalline (mono-), multi-crystalline (multi-), and amorphous (amor-). The prediction results are very close to the experimental data and are also influenced by the number of hidden neurons. The order of the solar generati...
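
As a rough illustration of the hidden-neuron sensitivity this abstract mentions, the sketch below fits small regression ANNs of varying hidden-layer width; the toy PV model, the synthetic irradiance/temperature data, and sklearn's MLPRegressor are all assumptions, not the authors' setup.

```python
# Hedged sketch: how hidden-neuron count affects a small regression ANN.
# Synthetic irradiance/temperature data stands in for the paper's PV
# measurements; sklearn's MLPRegressor is an assumption, not the authors' tool.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform([100, 0], [1000, 45], size=(500, 2))   # irradiance W/m^2, temp C
power = 0.15 * X[:, 0] * (1 - 0.004 * (X[:, 1] - 25))  # toy PV output model
power += rng.normal(0, 2, size=500)                    # measurement noise

for n_hidden in (2, 8, 32):
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=5000,
                         random_state=0).fit(X, power)
    print(n_hidden, "hidden neurons, R^2 =", round(model.score(X, power), 4))
```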

2014
Weikuan Jia, Dean Zhao, Tian Shen, Chunyang Su, Chanli Hu, Yuyan Zhao

When confronting complex problems, the radial basis function (RBF) neural network has the advantages of adaptability and self-learning, but it is difficult to determine the number of hidden-layer neurons, and the ability to learn the weights from the hidden layer to the output layer is weak; these deficiencies easily lead to reduced learning ability and recognition precision. Aiming at this probl...
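
For context, a minimal RBF network can be assembled in a few lines; the sketch below uses Gaussian basis functions, k-means centers, and least-squares output weights, with the hidden-neuron count n_centers left as the hand-tuned quantity the abstract identifies as problematic. All of these choices are illustrative assumptions.

```python
# Minimal RBF network sketch (assumptions: Gaussian basis, centers from
# k-means, output weights by least squares). The number of hidden neurons
# n_centers is the hand-tuned quantity the paper identifies as problematic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sinc(X[:, 0])

n_centers = 12
centers = KMeans(n_clusters=n_centers, n_init=10,
                 random_state=0).fit(X).cluster_centers_
width = np.ptp(centers) / np.sqrt(2 * n_centers)       # common width heuristic

def rbf_features(X):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d / width) ** 2)                   # Gaussian activations

w, *_ = np.linalg.lstsq(rbf_features(X), y, rcond=None)
pred = rbf_features(X) @ w
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```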

2001
Derong Liu, Tsu-Shuan Chang, Yi Zhang

We develop, in this brief, a new constructive learning algorithm for feedforward neural networks. We employ an incremental training procedure where training patterns are learned one by one. Our algorithm starts with a single training pattern and a single hidden-layer neuron. During the course of neural network training, when the algorithm gets stuck in a local minimum, we will attempt to escape...
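
A toy version of such a constructive trainer, under my own assumptions (tanh hidden units, plain gradient descent, and a loss-plateau test standing in for the paper's local-minimum detection), looks like this:

```python
# Sketch of a constructive trainer in the paper's spirit (assumptions
# throughout): start with one tanh hidden neuron, run plain gradient
# descent, and recruit a fresh neuron whenever the loss stops improving.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(100, 1))
y = np.sin(3 * X[:, 0])

W1, b1 = rng.normal(size=(1, 1)), np.zeros(1)   # start: one hidden neuron
w2 = rng.normal(size=1)
lr, prev_loss = 0.05, np.inf
for step in range(3000):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    pred = h @ w2
    err = pred - y
    loss = np.mean(err ** 2)
    if step % 500 == 499:
        if prev_loss - loss < 1e-4:             # stuck: recruit a neuron
            W1 = np.column_stack([W1, rng.normal(size=(1, 1))])
            b1 = np.append(b1, 0.0)
            w2 = np.append(w2, 0.0)             # new unit starts inert
            continue
        prev_loss = loss
    g2 = h.T @ err * (2 / len(X))               # output-weight gradient
    gh = np.outer(err, w2) * (1 - h ** 2)       # backprop through tanh
    W1 -= lr * X.T @ gh * (2 / len(X))
    b1 -= lr * gh.mean(0) * 2
    w2 -= lr * g2
print("hidden neurons:", W1.shape[1], "loss:", round(loss, 5))
```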

Journal: Neural Networks: The Official Journal of the International Neural Network Society, 2003
Zhaozhi Zhang, Xiaomin Ma, Yixian Yang

This paper investigates an important problem concerning the complexity of three-layer binary neural networks (BNNs) with one hidden layer. The neuron in the studied BNNs employs a hard-limiter activation function with only integer weights and an integer threshold. The studies are focused on implementations of arbitrary Boolean functions which map from {0, 1}^n into {0, 1}. A deterministic algori...
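
A concrete instance of such a network, hard-limiter neurons with integer weights and integer thresholds, is sketched below; wiring it to realize XOR is my example, not the paper's general construction for arbitrary Boolean functions.

```python
# Three-layer BNN sketch: hard-limiter neurons with integer weights and
# integer thresholds, wired here to realize XOR (an illustrative example,
# not the paper's construction, which handles arbitrary Boolean functions).
import itertools

def neuron(inputs, weights, threshold):
    # hard limiter: fires iff the integer weighted sum reaches the threshold
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def xor_bnn(x1, x2):
    h_or  = neuron((x1, x2), (1, 1), 1)        # hidden unit: OR
    h_and = neuron((x1, x2), (1, 1), 2)        # hidden unit: AND
    return neuron((h_or, h_and), (1, -1), 1)   # output: OR and not AND

for x1, x2 in itertools.product((0, 1), repeat=2):
    print(x1, x2, "->", xor_bnn(x1, x2))
```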

Journal: CoRR, 2014
Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio

While depth tends to improve network performance, it also makes gradient-based training more difficult, since deeper networks tend to be more non-linear. The recently proposed knowledge distillation approach is aimed at obtaining small and fast-to-execute models, and it has shown that a student network can imitate the soft output of a larger teacher network or ensemble of networks. In this pa...
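
The soft-output imitation this abstract refers to is usually implemented as a temperature-softened distillation loss; a sketch follows, with T and alpha as illustrative assumptions (FitNets additionally adds intermediate "hint" losses, not shown here).

```python
# Sketch of the soft-target distillation loss the abstract refers to
# (Hinton-style temperature softening; T=4 and alpha=0.9 are illustrative
# assumptions, not values from the paper).
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.9):
    soft_t = softmax(teacher_logits, T)                       # softened teacher output
    soft_s = softmax(student_logits, T)
    kl = np.sum(soft_t * (np.log(soft_t) - np.log(soft_s)))   # imitate teacher
    ce = -np.log(softmax(student_logits)[label])              # fit hard label
    return alpha * (T ** 2) * kl + (1 - alpha) * ce

teacher = np.array([4.0, 1.0, -2.0])
student = np.array([2.5, 0.5, -1.0])
print("loss:", distillation_loss(student, teacher, label=0))
```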

2010
Felix Pasila

Inverse kinematics analysis plays an important role in developing a robot manipulator. However, it is not easy to derive the inverse kinematics equations of a robot manipulator, especially one with many degrees of freedom. This paper describes an application of an artificial neural network to modeling the inverse kinematics equations of a robot manipulator. In this case, the robo...
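
A minimal version of this data-driven approach: sample joint angles of a toy two-link planar arm, compute forward kinematics, and train an MLP to invert the map. The arm, the single elbow branch (so the inverse is single-valued), and sklearn's MLPRegressor are all assumptions; the paper's manipulator and network differ.

```python
# Sketch of ANN-based inverse kinematics for a toy two-link planar arm
# (assumptions: link lengths, one elbow branch, sklearn MLP).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
l1, l2 = 1.0, 0.7
theta = rng.uniform([0, 0.2], [np.pi / 2, np.pi - 0.2], size=(2000, 2))
x = l1 * np.cos(theta[:, 0]) + l2 * np.cos(theta[:, 0] + theta[:, 1])
y = l1 * np.sin(theta[:, 0]) + l2 * np.sin(theta[:, 0] + theta[:, 1])
P = np.column_stack([x, y])                       # end-effector positions

ik = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                  random_state=0).fit(P, theta)   # learn position -> angles
pred = ik.predict(P[:5])
print("true angles:", np.round(theta[:5], 3))
print("predicted:  ", np.round(pred, 3))
```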

2016
Amit Deshpande, Sushrut Karmalkar

The universal approximation theorem for neural networks says that any reasonable function is well approximated by a two-layer neural network with sigmoid gates, but it does not provide good bounds on the number of hidden-layer nodes or the weights. However, robust concepts often have small neural networks in practice. We show an efficient analog of the universal approximation theorem on the boole...
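
For reference, the representation the theorem guarantees is a single hidden layer of k sigmoid gates; the bounds on k and on the weights are exactly what the classical statement leaves open:

```latex
% The form guaranteed by the universal approximation theorem: a single
% hidden layer of k sigmoid gates. Bounds on k and on the weights are
% what the theorem leaves open.
\[
  f(x) \;\approx\; \sum_{i=1}^{k} a_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
  \qquad \sigma(t) = \frac{1}{1 + e^{-t}}
\]
```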

2000
Eiji Mizutani, Stuart E. Dreyfus, Kenichi Nishio

The well-known backpropagation (BP) derivative computation process for multilayer perceptron (MLP) learning can be viewed as a simplified version of the Kelley-Bryson gradient formula in classical discrete-time optimal control theory [1]. We detail the derivation in the spirit of dynamic programming, showing how these formulas can serve to implement more elaborate learning whereby teacher signals ca...
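
The correspondence this abstract draws can be made explicit by writing the BP recursion in costate form: the layer-l sensitivity plays the role of the adjoint variable in the discrete-time optimal control formulation (standard notation assumed: z pre-activations, a activations, E the training error):

```latex
% Backprop as the costate (adjoint) recursion of the Kelley-Bryson formula:
% the stage-l sensitivity delta is the costate of discrete-time optimal control.
\[
  \delta^{(L)} = \nabla_{a^{(L)}} E \odot f'\!\big(z^{(L)}\big), \qquad
  \delta^{(l)} = \Big( W^{(l+1)\top} \delta^{(l+1)} \Big) \odot f'\!\big(z^{(l)}\big),
  \qquad
  \frac{\partial E}{\partial W^{(l)}} = \delta^{(l)}\, a^{(l-1)\top}
\]
```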

1994
Jan Matti Lange, Hans-Michael Voigt

With this paper we propose a learning architecture for growing complex artificial neural networks. The complexity of the growing network is adapted automatically to the complexity of the task. The algorithm generates a feedforward network bottom-up by cyclically inserting cascaded hidden layers. Inputs of a hidden-layer unit are locally restricted with respect to the input space by...

[Chart: number of search results per year]