Search results for: fuzzy approximators

Number of results: 90193

1992
Vladik Kreinovich Ongard Sirisaengtaksin Sergio Cabrera

Neural networks are universal approximators. For example, it has been proved (Hornik et al.) that for every ε > 0, an arbitrary continuous function on a compact set can be ε-approximated by a 3-layer neural network. This and other results prove that in principle, any function (e.g., any control) can be implemented by an appropriate neural network. But why neural networks? In addition to neura...
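The ε-approximation claim above can be made concrete for one input dimension: any piecewise-linear interpolant of a continuous function is exactly representable by a one-hidden-layer ReLU network, and refining the knot grid drives the error below any ε. This is a generic illustration, not code from the cited paper; the function `pwl_relu_net` and its parameters are my own naming.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwl_relu_net(f, a, b, n):
    """Build a one-hidden-layer ReLU network whose output is the
    piecewise-linear interpolant of f at n+1 equally spaced knots on [a, b].
    Hidden unit i computes relu(x - knots[i]); the output layer sums them
    with weights equal to the slope changes of the interpolant."""
    knots = np.linspace(a, b, n + 1)
    vals = f(knots)
    slopes = np.diff(vals) / np.diff(knots)
    # First weight sets the initial slope; later weights add slope changes.
    w_out = np.concatenate(([slopes[0]], np.diff(slopes)))

    def net(x):
        h = relu(np.subtract.outer(x, knots[:-1]))  # hidden activations
        return vals[0] + h @ w_out

    return net

# Approximate sin on [0, pi] with 64 hidden units.
net = pwl_relu_net(np.sin, 0.0, np.pi, 64)
xs = np.linspace(0.0, np.pi, 1000)
err = float(np.max(np.abs(net(xs) - np.sin(xs))))
```

With 64 hidden units the maximum error is bounded by the standard interpolation estimate h²·max|f''|/8 ≈ 3×10⁻⁴, so doubling the width roughly quarters the error.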

Journal: Industrial Engineering
Mahdi Khashei, Isfahan University of Technology; Mehdi Bijari, Isfahan University of Technology

Artificial neural networks (ANNs) are flexible computing frameworks and universal approximators that can be applied to a wide range of time series forecasting problems with a high degree of accuracy. However, despite all the advantages cited for artificial neural networks, they have a data limitation and need a large amount of historical data in order to yield accurate results. Therefore, the...

1997
F.

Different predictors and their approximators in nonlinear prediction regression models are studied. The minimal value of the mean squared error (MSE) is derived. Some approximate formulae for the MSE of ordinary and weighted least squares predictors are given.

2016
Vasile Georgescu

Nature-inspired metaheuristics for optimization have proven successful, due to their fine balance between exploration and exploitation of a search space. This balance can be further refined by hybridization. In this paper, we conduct experiments with some of the most promising nature-inspired metaheuristics, for assessing their performance when using them to replace backpropagation as a learnin...

Journal: Reliable Computing 2012
Coen C. de Visser Erik-Jan van Kampen Qiping Chu J. A. Mulder

In science and engineering there often is a need for the approximation of scattered multi-dimensional data. A class of powerful scattered data approximators are the multivariate simplex B-splines. Multivariate simplex B-splines consist of Bernstein basis polynomials that are defined on a geometrical structure called a triangulation. Multivariate simplex B-splines have a number of advantages ove...

2006
Salvatore D’Angelo Edmondo Minisci

This work concerns the application of multi-objective evolutionary optimization by approximation functions to 2D aerodynamic design. The new general concept of evolution control is used to enrich on-line the database of correct solutions, which is the basis of the learning procedure for the approximators. Essentially, given an initial very poor model approximation, which means small size o...

2017
Vivek Veeriah Harm van Seijen Richard S. Sutton

Multi-step methods are important in reinforcement learning (RL). Eligibility traces, the usual way of handling them, work well with linear function approximators. Recently, van Seijen (2016) introduced a delayed learning approach, without eligibility traces, for handling the multi-step λ-return with nonlinear function approximators. However, this was limited to action-value methods. In thi...
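The multi-step λ-return mentioned above has a standard recursive form, G_t^λ = r_t + γ[(1−λ)V(s_{t+1}) + λ G_{t+1}^λ], which can be computed offline with a single backward pass over an episode. This is a textbook sketch, not the delayed-learning algorithm of the cited paper; the function name and argument layout are my own.

```python
import numpy as np

def lambda_return(rewards, values, gamma, lam):
    """Offline λ-return for every step of one episode.

    rewards[t] is the reward received after step t; values[t] is the function
    approximator's estimate V(s_t), with values[T] the terminal value (0).
    Uses the recursion G_t = r_t + gamma * ((1-lam) * V(s_{t+1}) + lam * G_{t+1}).
    """
    T = len(rewards)
    G = np.zeros(T)
    g = values[T]  # terminal state value, typically 0
    for t in reversed(range(T)):
        g = rewards[t] + gamma * ((1.0 - lam) * values[t + 1] + lam * g)
        G[t] = g
    return G
```

At λ = 1 this reduces to the Monte Carlo return; at λ = 0 it is the one-step TD target r_t + γV(s_{t+1}), with intermediate λ interpolating between the two.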

Journal: MCSS 2002
Mark French Csaba Szepesvári Eric Rogers

We consider the adaptive tracking problem for a chain of integrators, where the uncertainty is static and functional. The uncertainty is specified by L2/L∞ or weighted L2/L∞ norm bounds. We analyse a standard Lyapunov-based adaptive design which utilises a function approximator to induce a parametric uncertainty, on which the adaptive design is completed. Performance is measured by a modified LQ...

2017
Suraj Srinivas

The Jacobian of a neural network, or the derivative of the output with respect to the input, is a versatile object with many applications. In this paper we discuss methods to use this object efficiently for knowledge transfer. We first show that matching Jacobians is a special form of distillation, where noise is added to the input. We then show experimentally that we can perform better distill...

2004
Marco Wiering

Although tabular reinforcement learning (RL) methods have been proved to converge to an optimal policy, the combination of particular conventional reinforcement learning techniques with function approximators can lead to divergence. In this paper we show why off-policy RL methods combined with linear function approximators can lead to divergence. Furthermore, we analyze two different types of u...
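The divergence phenomenon described above can be reproduced in a few lines with the classic two-state construction in the spirit of Tsitsiklis and Van Roy (1997): a single off-policy transition from a state with feature 1 to a state with feature 2, reward 0, under linear V(s) = w·φ(s). This is an illustrative example, not the specific analysis of the cited paper; the step size and discount are arbitrary choices.

```python
# Off-policy TD(0) with linear function approximation on one transition:
# phi(s1) = 1, phi(s2) = 2, reward 0. The update is
#   w <- w + alpha * (0 + gamma * 2w - w) * 1 = w * (1 + alpha * (2*gamma - 1)),
# so |w| grows without bound whenever gamma > 1/2.

alpha, gamma = 0.1, 0.99  # assumed values for illustration
w = 1.0
history = [w]
for _ in range(200):
    td_error = 0.0 + gamma * (2.0 * w) - 1.0 * w  # target uses phi(s2) = 2
    w += alpha * td_error * 1.0                   # gradient w.r.t. w at phi(s1) = 1
    history.append(w)
```

Each update multiplies w by 1 + α(2γ − 1) ≈ 1.098 here, so the weight grows geometrically instead of converging, even though the same update with on-policy sampling would be stable.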

Chart: number of search results per year
