Search results for: feed forward neural networks
Number of results: 795,219
A functional equivalence of feed-forward networks has been proposed to reduce the search space of learning algorithms. A novel genetic learning algorithm for RBF networks and perceptrons with one hidden layer that makes use of this theoretical property is proposed. Experimental results show that our procedure outperforms the standard genetic learning. Key-Words: Feedforward neural networks, gen...
Recent studies have shown the classification and prediction power of neural networks. It has been demonstrated that a NN can approximate any continuous function. Neural networks have been successfully used for forecasting financial data series. Classical methods used for time series prediction, such as Box-Jenkins or ARIMA, assume that there is a linear relationship between inputs and ou...
Deep feed-forward convolutional neural networks (CNNs) have become ubiquitous in virtually all machine learning and computer vision challenges; however, advancements in CNNs have arguably reached an engineering saturation point where incremental novelty results in minor performance gains. Although there is evidence that object classification has reached human levels on narrowly defined tasks, f...
Predicting corporate bankruptcy using artificial neural networks (ANN) in Tehran Stock Exchange (TSE)
The main purpose of this paper is the prediction of TSE corporate financial bankruptcy using artificial neural networks. The mean values of key ratios reported in past bankruptcy studies were selected as neural network inputs (working capital to total assets, net income to total assets, total debt to total assets, current assets to current liabilities, quick assets to current liabilities). The neu...
CODEQ is a new, population-based meta-heuristic algorithm that is a hybrid of concepts from chaotic search, opposition-based learning, differential evolution and quantum mechanics. CODEQ has successfully been used to solve different types of problems (e.g. constrained, integer-programming, engineering) with excellent results. In this paper, CODEQ is used to train feed-forward neural networks. T...
We perform a stationary state replica analysis for a layered network of Ising spin neurons, with recurrent Hebbian interactions within each layer, in combination with strictly feedforward Hebbian interactions between successive layers. This model interpolates between the fully recurrent and symmetric attractor network studied by Amit et al., and the strictly feed-forward attractor network studie...
The view of artificial neural networks as adaptive systems has led to the development of ad-hoc generic procedures known as learning rules. The first of these is the Perceptron Rule (Rosenblatt, 1962), useful for single-layer feed-forward networks and linearly separable problems. Its simplicity and beauty, and the existence of a convergence theorem, made it a basic departure point in neural lea...
In this paper, we adapt the classical learning algorithm for feed-forward neural networks when monotonicity is required in the input-output mapping. Such requirements arise, for instance, when prior knowledge of the process being observed is available. Monotonicity can be imposed by the addition of suitable penalization terms to the error function. The objective function, however, depends nonlin...
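The penalization idea in the abstract above can be illustrated with a minimal sketch: a one-hidden-layer network whose objective adds a penalty wherever the output decreases as the input increases. All names, sizes, and the finite-difference penalty form are illustrative assumptions, not the paper's actual formulation.

```python
import math
import random

# Illustrative one-input, one-output network with a tanh hidden layer.
# H, the weight initialization, and the penalty shape are all assumptions.
random.seed(0)
H = 8
W1 = [random.gauss(0, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.gauss(0, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    # Feed-forward pass through the single hidden layer.
    h = [math.tanh(W1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2

def monotonicity_penalty(xs, eps=1e-3, lam=1.0):
    # Finite-difference check: penalize points where the output decreases
    # as the input increases (we want a monotonically increasing mapping).
    total = 0.0
    for x in xs:
        d = forward(x + eps) - forward(x)
        if d < 0:
            total += d * d
    return lam * total

def objective(xs, ys, lam=1.0):
    # Squared error plus the monotonicity penalization term.
    mse = sum((forward(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    return mse + monotonicity_penalty(xs, lam=lam)
```

Any optimizer, gradient-based or population-based, can then minimize `objective` over the weights; the penalty vanishes once the fitted mapping is monotone on the training inputs.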
Figure 2: Scatter plot of testing results vs. training results for 32-24-10 networks, late stopping. Open circles: = 0; filled circles: = 10⁻³.
Supervised Artificial Neural Networks (ANN) are information processing systems that adapt their functionality as a result of exposure to input-output examples. To this end, there exist generic procedures and techniques, known as learning rules. The most widely used in the neural network context rely on derivative information, and are typically associated with the Multilayer Perceptron (MLP). Ot...