Search results for: neural optimization

Number of results: 606,462

1997
Thibault Langlois

Gradient-based methods are often used for optimization. They form the basis of several neural network training algorithms, including backpropagation. They are known to be slow, however. Several techniques exist for the acceleration of gradient-based optimization, but very few of them are applicable to stochastic or real-time optimization. This paper proposes a new step size adaptation technique...
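The snippet cuts off before the proposed technique itself; as a generic, hedged illustration of what "step size adaptation" for gradient descent can look like (a delta-bar-delta-style per-coordinate scheme — not the paper's algorithm; all names and constants below are assumptions):

```python
import numpy as np

def adaptive_gd(grad, x0, eta0=0.1, up=1.2, down=0.5, steps=100):
    """Gradient descent with per-coordinate step-size adaptation:
    grow the step when successive gradients agree in sign,
    shrink it when they disagree.  Illustrative sketch only."""
    x = np.asarray(x0, dtype=float)
    eta = np.full_like(x, eta0)        # one step size per coordinate
    prev_g = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x)
        agree = np.sign(g) == np.sign(prev_g)
        eta = np.where(agree, eta * up, eta * down)
        x -= eta * g
        prev_g = g
    return x

# minimize f(x) = sum(x**2); its gradient is 2*x
x_min = adaptive_gd(lambda x: 2 * x, [3.0, -4.0])
```

The sign-agreement test is a cheap proxy for curvature: oscillating gradients signal an overshooting step, monotone gradients signal room to accelerate.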

Mehrdad Kaveh, Ali Khosravi, Mohammad Saadi Mesgari

Today, global positioning systems (GPS) do not work well in buildings and in dense urban areas where there is no line of sight between the user and the satellites. Hence, local positioning systems (LPS) have seen considerable use in recent years. The main purpose of this research is to provide a four-layer artificial neural network based on a nonlinear system solver (NLANN) for local pos...

Journal: :IEEE transactions on neural networks and learning systems 2016
Zhigang Zeng Andrzej Cichocki Long Cheng Youshen Xia Xiaolin Hu

Recurrent neural networks, as dynamical systems, are usually used as models for solving computationally intensive problems. Because of their inherent nature of parallel and distributed information processing, recurrent neural networks are promising computational models for real-time applications. Constrained optimization problems arise in a wide variety of scientific and engineering application...
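A minimal sketch of the idea behind such "optimization neural networks": the network is a dynamical system whose state settles onto the solution of a constrained problem. Here, projected gradient dynamics for a box-constrained quadratic program, integrated with Euler steps (an illustration under assumed dynamics, not any of the surveyed models):

```python
import numpy as np

def rnn_qp_solver(Q, c, lo, hi, dt=0.01, steps=5000):
    """Continuous-time 'optimization neural network' for
        min 0.5 x'Qx + c'x   s.t.  lo <= x <= hi,
    integrated with Euler steps; the clip is the projection
    onto the feasible box.  Illustrative sketch only."""
    x = np.zeros_like(c, dtype=float)
    for _ in range(steps):
        # one neuron-state update: gradient step, then project
        x = np.clip(x - dt * (Q @ x + c), lo, hi)
    return x

Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -6.0])
# unconstrained minimizer is (1, 3); the box [0, 2] clips x2 to 2
x_star = rnn_qp_solver(Q, c, lo=0.0, hi=2.0)
```

The appeal noted in the abstract is that each state update is local and parallel across coordinates, which is what makes hardware or real-time realizations plausible.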

2016
Yanxiang Geng Liyi Zhang Yunshan Sun Yao Zhang Nan Yang Jiawei Wu

A blind equalization algorithm combining an ant-colony-optimized neural network is proposed. Good initial weights for the neural network are provided by the randomness, ergodicity, and positive feedback of the ant colony algorithm. A combination of optimal weights is then found through the BP algorithm, which offers fast local search. Thus blind equalization ...

2010
H. Safikhani A. Nourbakhsh A. Khalkhali N. Nariman-Zadeh

Modeling and multi-objective optimization of centrifugal pumps is performed in three steps. In the first step, η and NPSHr in a set of centrifugal pumps are numerically investigated using the commercial software NUMECA. In the second step, two metamodels based on the evolved group method of data handling (GMDH) type neural networks are obtained for modeling η and NPSHr with respect to geometrica...

2015
James Brofos Rui Shu Matthew Jin

Bayesian optimization has emerged as a powerful new technique for interpolating and optimizing a wide range of functions that are expensive to compute. The primary tool of Bayesian optimization is the Gaussian process, which permits one to define a prior belief that is then transformed into a posterior through sequential sampling of points. Unfortunately, Gaussian process interpolation suff...
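A toy sketch of the Gaussian-process machinery the abstract refers to: a zero-mean GP posterior under an RBF kernel, plus one acquisition step that picks the next sample point via a lower confidence bound (the kernel, its length scale, the bound's weight, and the toy objective are all assumptions, not from the paper):

```python
import numpy as np

def rbf(a, b, length=0.5):
    """Squared-exponential (RBF) kernel for 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP (sketch)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    mean = Ks.T @ np.linalg.solve(K, y_train)
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mean, np.clip(np.diag(cov), 0.0, None)

f = lambda x: np.sin(3 * x) + 0.5 * x       # "expensive" black box (toy)
x_obs = np.array([0.0, 1.0, 2.0])
y_obs = f(x_obs)
grid = np.linspace(0.0, 2.0, 201)
mu, var = gp_posterior(x_obs, y_obs, grid)
# acquisition: sample where mean - 2*std is smallest (favors low
# predicted value and high uncertainty)
x_next = grid[np.argmin(mu - 2.0 * np.sqrt(var))]
```

The cubic cost of the `solve` over the training kernel matrix is exactly the scaling pain point that abstracts like this one typically go on to address.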

Journal: :IEEE transactions on neural networks 1997
Hyun Myung Jong-Hwan Kim

In this paper, a time-varying two-phase (TVTP) optimization neural network is proposed based on the two-phase neural network and the time-varying programming neural network. The proposed TVTP algorithm gives exact feasible solutions with a finite penalty parameter when the problem is a constrained time-varying optimization. It can be applied to system identification and control where it has som...

2004
Yuehui Chen Jiwen Dong Yong Zhang

The paper presents a local linear wavelet neural network. The difference between this network and the original wavelet neural network (WNN) is that the connection weights between the hidden layer and output layer of the original WNN are replaced by a local linear model. A simple and fast training algorithm, particle swarm optimization (PSO), is also introduced for training the local linear wavelet neural ...
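The abstract names PSO but the snippet is cut off; below is a plain PSO loop fitting the weights of a toy linear model, as a hedged stand-in for weight training (the swarm hyperparameters and the toy loss are assumptions, not the paper's wavelet-network setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain particle swarm optimization: each particle is attracted
    toward its personal best and the global best.  Sketch only."""
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(loss, 1, x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(loss, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# fit weights of a toy model y = w0*x + w1 to data generated by y = 2x + 1
xs = np.arange(5.0)
w_opt, err = pso(lambda w: np.sum((w[0] * xs + w[1] - (2 * xs + 1)) ** 2),
                 dim=2)
```

PSO needs only loss evaluations, no gradients, which is why it pairs naturally with nonstandard architectures like the local linear WNN.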

2004
David Suter Xiaolan Deng

2 Neural Nets, Statistical Physics and Optimization It has recently been shown that neural net models can be based upon the principles of statistical physics; in particular, spin models have been applied to associative memories and optimization networks. The simulation of such neural networks promises to provide valuable insights into human and machine vision. However, for a large array, the calculations involved become p...

Journal: :IEEE transactions on neural networks 1998
Youshen Xia Jun Wang

In this paper, we present a general methodology for designing optimization neural networks. We prove that the neural networks constructed by using the proposed method are guaranteed to be globally convergent to solutions of problems with bounded or unbounded solution sets, in contrast with the gradient methods whose convergence is not guaranteed. We show that the proposed method contains both t...
