Search results for: fuzzy approximators
Number of results: 90193
Approximate computing recognizes that many applications can tolerate inexactness. These applications, which range from multimedia processing to machine learning, operate on inherently noisy and imprecise data. As a result, we can trade off some loss in output value integrity for improved processor performance and energy efficiency. In this paper, we introduce load value approximation. In modern ...
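As a rough illustration of the load value approximation idea in the abstract above, the sketch below predicts a load's value from a short per-PC history instead of stalling on a miss. The table layout, the averaging policy, and the class and method names are illustrative assumptions, not the paper's actual hardware design.

    # Minimal sketch: on a cache miss, return a predicted value instead of stalling.
    from collections import defaultdict, deque

    class LoadValueApproximator:
        def __init__(self, history_len=4):
            # Per-load-instruction (PC-indexed) history of recently observed values.
            self.history = defaultdict(lambda: deque(maxlen=history_len))

        def approximate(self, pc):
            """Predicted value for the load at `pc`, used when the access misses."""
            h = self.history[pc]
            return sum(h) / len(h) if h else 0.0   # simple running-average predictor

        def train(self, pc, true_value):
            """Update the predictor once the real value arrives from memory."""
            self.history[pc].append(true_value)

    # Slowly drifting values keep the approximation error small.
    lva = LoadValueApproximator()
    for v in [10.0, 10.5, 11.0, 11.2]:
        lva.train(pc=0x400a10, true_value=v)
    print(lva.approximate(pc=0x400a10))            # ~10.7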
This paper explores the problem of finding a real-time optimal trajectory for unmanned air vehicles (UAVs) in order to minimize their probability of detection by an opponent's multiple radar detection systems. The problem is handled using the Nonlinear Trajectory Generation (NTG) method developed by Milam et al. The paper presents a formulation of the trajectory generation task as an optimal control ...
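The sketch below is not the NTG method itself (which relies on B-spline parameterizations and differential flatness); it is a heavily simplified stand-in that optimizes discretized waypoints directly, using the common 1/r^4 radar-range proxy for detection probability. The radar positions, endpoints, and cost weights are made-up illustration values.

    import numpy as np
    from scipy.optimize import minimize

    radars = np.array([[3.0, 2.0], [7.0, 6.0]])   # assumed radar locations
    start, goal = np.array([0.0, 0.0]), np.array([10.0, 8.0])
    N = 20                                         # number of interior waypoints

    def cost(flat_pts):
        pts = np.vstack([start, flat_pts.reshape(N, 2), goal])
        # Detection proxy: received radar power falls off as 1/r^4.
        d = np.linalg.norm(pts[:, None, :] - radars[None, :, :], axis=2)
        detection = np.sum(1.0 / d**4)
        # Regularize squared segment lengths so the path stays smooth and short.
        length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)**2)
        return detection + 0.05 * length

    # Initial guess: straight line between the endpoints.
    x0 = np.linspace(start, goal, N + 2)[1:-1].ravel()
    res = minimize(cost, x0, method="L-BFGS-B")
    path = np.vstack([start, res.x.reshape(N, 2), goal])
    print(path[:3])   # the optimized path bends away from the radar sites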
According to conventional neural network theories, the capability of single-hidden-layer feedforward neural networks (SLFNs) stems from the parameters of the weighted connections and hidden nodes. SLFNs are universal approximators provided that at least the parameters of the network, including the hidden-node parameters and output weights, are adjustable. Unlike the above neural network theories, this paper indicates that in o...
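A minimal sketch of the setting the abstract contrasts with classical theory: a single-hidden-layer network whose hidden-node parameters are fixed at random, so that only the output weights are solved for (the extreme-learning-machine view). The target function, layer width, and noise level are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)   # noisy 1-D target

    n_hidden = 50
    W = rng.standard_normal((1, n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)        # random hidden biases (never trained)

    H = np.tanh(X @ W + b)                   # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights by least squares

    X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
    y_hat = np.tanh(X_test @ W + b) @ beta
    print(np.c_[np.sin(X_test[:, 0]), y_hat])      # predictions track sin(x)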
In this paper we discuss the potential of using artificial neural networks as smooth priors in classical methods for inverse problems for PDEs. Exploiting the fact that neural networks are global and smooth function approximators, the idea is that they could act as attractive priors for the coefficients to be estimated from noisy data. We illustrate these capabilities in the context of the Poisson equation and show that the network approach offers robustness with respect to noisy, incomplete data and mesh ge...
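The sketch below is a rough, simplified illustration of that idea: a 1-D Poisson problem -(a(x)u'(x))' = f with homogeneous Dirichlet conditions, where the unknown coefficient a(x) is parameterized by a tiny neural network acting as a smooth prior and fitted to noisy observations of u. The grid, network size, noise level, and optimizer are assumptions for illustration, not the paper's actual setup.

    import numpy as np
    from scipy.optimize import minimize

    n = 60
    x = np.linspace(0.0, 1.0, n + 2)          # grid including boundaries
    h = x[1] - x[0]
    xm = 0.5 * (x[:-1] + x[1:])               # midpoints where a(x) is sampled
    f = np.ones(n)                            # constant source term

    def solve_poisson(a_mid):
        """Finite-difference solve of -(a u')' = f with u(0) = u(1) = 0."""
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = (a_mid[i] + a_mid[i + 1]) / h**2
            if i > 0:
                A[i, i - 1] = -a_mid[i] / h**2
            if i < n - 1:
                A[i, i + 1] = -a_mid[i + 1] / h**2
        return np.linalg.solve(A, f)

    def a_net(theta, xs):
        """Tiny 1-8-1 tanh network; a softplus output keeps the coefficient positive."""
        w1, b1, w2, b2 = theta[:8], theta[8:16], theta[16:24], theta[24]
        hidden = np.tanh(np.outer(xs, w1) + b1)
        # Small floor keeps the forward problem well-posed during optimization.
        return 0.05 + np.logaddexp(0.0, hidden @ w2 + b2)

    # Synthetic ground truth and noisy observations.
    a_true = 1.0 + 0.5 * np.sin(np.pi * xm)
    rng = np.random.default_rng(1)
    u_obs = solve_poisson(a_true) + 0.001 * rng.standard_normal(n)

    def misfit(theta):
        return np.sum((solve_poisson(a_net(theta, xm)) - u_obs) ** 2)

    theta0 = 0.1 * rng.standard_normal(25)
    res = minimize(misfit, theta0, method="L-BFGS-B")   # gradients by finite differences
    print(misfit(res.x), "<", misfit(theta0))           # the data misfit decreases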
Neural networks and the Kriging method are compared for constructing fitness approximation models in evolutionary optimization algorithms. The two models are applied in an identical framework to the optimization of a number of well-known test functions. In addition, two different ways of training the approximators are evaluated: in one setting the models are built off-line using data from previ...
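As a hedged sketch of the framework described above, the code below fits a Gaussian-kernel regressor off-line as a stand-in for the Kriging or neural-network surrogate, then runs a simple (mu, lambda) evolution strategy against the surrogate on the sphere test function. The test function, kernel width, and population sizes are illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    true_fitness = lambda X: np.sum(X**2, axis=1)          # sphere test function

    # Off-line surrogate training data.
    X_train = rng.uniform(-5, 5, size=(80, 2))
    y_train = true_fitness(X_train)

    def kernel(A, B, gamma=0.1):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
        return np.exp(-gamma * d2)

    alpha = np.linalg.solve(kernel(X_train, X_train) + 1e-6 * np.eye(80), y_train)
    surrogate = lambda X: kernel(X, X_train) @ alpha        # cheap fitness estimate

    # (mu, lambda) evolution strategy run entirely on the surrogate.
    mu, lam, sigma = 5, 25, 1.0
    pop = rng.uniform(-5, 5, size=(lam, 2))
    for _ in range(30):
        parents = pop[np.argsort(surrogate(pop))[:mu]]      # select on surrogate fitness
        idx = rng.integers(0, mu, size=lam)
        pop = parents[idx] + sigma * rng.standard_normal((lam, 2))
        sigma *= 0.95                                       # simple step-size decay

    best = pop[np.argmin(surrogate(pop))]
    print(best, true_fitness(best[None, :]))                # near the optimum at (0, 0)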
This paper proposes a new method to reduce training time for neural nets used as function approximators. The method relies on geometrical control of Multilayer Perceptrons (MLPs). A geometrical initialization first gives better starting points for the learning process. A geometrical parametrization then achieves more stable convergence. During the learning process, a dynamic geometrical con...
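The paper's exact geometrical construction is not reproduced here; the sketch below only conveys the general idea of a geometrical initialization: each hidden unit's hyperplane w·x + b = 0 is placed through a point sampled inside the data's bounding box, with a random unit-norm orientation, so that the units' transition regions cover the input domain from the start. The function name and sampling scheme are assumptions.

    import numpy as np

    def geometric_init(X, n_hidden, rng=np.random.default_rng(0)):
        """Return (W, b) for the first MLP layer, hyperplanes spread over the data."""
        d = X.shape[1]
        lo, hi = X.min(axis=0), X.max(axis=0)
        W = rng.standard_normal((n_hidden, d))
        W /= np.linalg.norm(W, axis=1, keepdims=True)      # unit normal directions
        anchors = rng.uniform(lo, hi, size=(n_hidden, d))  # points inside the data box
        b = -np.sum(W * anchors, axis=1)                   # hyperplane passes through anchor
        return W, b

    # Usage: every hidden unit is near its transition somewhere in the data region.
    X = np.random.default_rng(1).uniform(-2, 8, size=(500, 3))
    W, b = geometric_init(X, n_hidden=16)
    preact = X @ W.T + b
    print(np.mean(np.abs(preact) < 1.0))   # a sizeable fraction of units are non-saturated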
Monotonicity is a constraint which arises in many application domains. We present a machine learning model, the monotonic network, for which monotonicity can be enforced exactly, i.e., by virtue of functional form. A straightforward method for implementing and training a monotonic network is described. Monotonic networks are proven to be universal approximators of continuous, differentiable mon...
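A compact sketch of the min-max construction commonly used for monotonic networks: the output is a maximum over groups of minima of linear units whose weights are constrained to be nonnegative (here via an exponential reparameterization), so the response is nondecreasing in every input purely by functional form. The group counts and the absence of training code are simplifications.

    import numpy as np

    rng = np.random.default_rng(0)
    n_groups, n_units, d = 3, 4, 2
    Z = rng.standard_normal((n_groups, n_units, d))   # unconstrained parameters
    B = rng.standard_normal((n_groups, n_units))

    def monotonic_net(x):
        W = np.exp(Z)                                  # nonnegative weights => monotone
        planes = np.einsum("gud,d->gu", W, x) + B      # all linear units
        return np.max(np.min(planes, axis=1))          # min within groups, max across groups

    x = np.array([0.3, -1.2])
    print(monotonic_net(x) <= monotonic_net(x + np.array([0.5, 0.0])))   # True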
This paper proposes a least-squares temporal difference (LSTD) algorithm based on the extreme learning machine, which uses a single-hidden-layer feedforward network to approximate the value function. While LSTD is typically combined with local function approximators, the proposed approach uses a global approximator that allows better scalability properties. The results of the experiments carried out o...
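A minimal sketch combining the two ingredients named in the abstract: LSTD's closed-form solve of A·theta = b for the value-function weights, with features produced by a random single-hidden-layer network (the ELM-style global approximator). The toy random-walk chain, feature width, and regularization are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    gamma, n_hidden = 0.95, 20
    W = rng.standard_normal((1, n_hidden))            # random, untrained hidden layer
    c = rng.standard_normal(n_hidden)

    def phi(s):
        """ELM-style features of a scalar state."""
        return np.tanh(np.atleast_2d(s).T @ W + c).ravel()

    # Transitions from a random walk on states 0..10; reward on reaching state 10.
    A = np.zeros((n_hidden, n_hidden))
    b = np.zeros(n_hidden)
    s = 5
    for _ in range(5000):
        s_next = min(max(s + rng.choice([-1, 1]), 0), 10)
        r = 1.0 if s_next == 10 else 0.0
        f = phi(s / 10.0)
        f_next = np.zeros(n_hidden) if s_next == 10 else phi(s_next / 10.0)
        A += np.outer(f, f - gamma * f_next)          # accumulate LSTD statistics
        b += f * r
        s = 5 if s_next == 10 else s_next             # restart the episode at the goal

    theta = np.linalg.solve(A + 1e-3 * np.eye(n_hidden), b)
    values = [phi(s / 10.0) @ theta for s in range(10)]
    print(np.round(values, 2))                        # estimates roughly increase toward the goal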