Combining evolutionary and stochastic gradient techniques for system identification
Authors
Abstract
Similar resources
Stochastic System Identification by Evolutionary Algorithms
For system identification, the ordinary differential equation (ODE) model is popular for its accuracy and effectiveness. Consequently, the ODE model is extended to the stochastic differential equation (SDE) model to handle the stochastic case intuitively. However, the stochastic integral poses a rigid barrier. We simply transform the SDEs into their corresponding stochastic difference e...
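The idea in this snippet can be sketched minimally: discretize an SDE into a stochastic difference equation and fit its parameters with an evolutionary search. The model form, the (1+λ) evolution strategy, the moment-matching loss, and all constants below are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

# Stochastic difference equation (Euler discretization of an SDE, illustrative):
#   x[k+1] = x[k] + a*x[k]*dt + b*sqrt(dt)*w[k],  w[k] ~ N(0, 1)

def simulate(a, b, n=2000, dt=0.01, x0=1.0, seed=0):
    rng = random.Random(seed)
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] + a * xs[-1] * dt + b * math.sqrt(dt) * rng.gauss(0, 1))
    return xs

def loss(params, data, dt=0.01):
    # Match the mean and variance of one-step increments (a crude moment fit).
    a, b = params
    res = [data[i + 1] - data[i] - a * data[i] * dt for i in range(len(data) - 1)]
    m = sum(res) / len(res)
    v = sum((r - m) ** 2 for r in res) / len(res)
    return m * m + (v - b * b * dt) ** 2

def evolve(data, gens=150, lam=8, sigma=0.2, seed=1):
    # Tiny (1+lambda) evolution strategy: keep the best point, mutate around it.
    rng = random.Random(seed)
    best = (0.0, 0.5)
    best_loss = loss(best, data)
    for _ in range(gens):
        for _ in range(lam):
            cand = (best[0] + sigma * rng.gauss(0, 1),
                    abs(best[1] + sigma * rng.gauss(0, 1)))  # keep diffusion >= 0
            l = loss(cand, data)
            if l < best_loss:
                best, best_loss = cand, l
        sigma *= 0.98  # slowly shrink the mutation radius
    return best

data = simulate(a=-0.5, b=0.3)
a_hat, b_hat = evolve(data)
```

The evolutionary search needs only loss evaluations, so it sidesteps the stochastic integral entirely, which is the appeal the abstract hints at.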
Combining Trust Region Techniques and Rosenbrock Methods for Gradient Systems
Rosenbrock methods are popular for solving stiff initial value problems for ordinary differential equations. One advantage is that there is no need to solve a nonlinear equation at every iteration, as compared with other implicit methods such as backward differentiation formulas and implicit Runge-Kutta methods. In this paper, we introduce some trust region techniques to control the time step in the...
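The advantage the snippet describes can be shown with the simplest member of the family, a one-stage Rosenbrock (linearly implicit Euler) step for a scalar ODE x' = f(x): x_{n+1} = x_n + h·f(x_n)/(1 − h·f'(x_n)). Only a linear solve is needed per step, unlike backward Euler's nonlinear solve. The fixed step size below is an illustrative assumption; the paper's trust-region step control is not reproduced.

```python
def rosenbrock_euler(f, dfdx, x0, h, n):
    """One-stage Rosenbrock method: linearly implicit Euler steps."""
    x = x0
    for _ in range(n):
        x += h * f(x) / (1.0 - h * dfdx(x))  # linear solve only, no Newton iteration
    return x

# Stiff test problem x' = -50*x: explicit Euler with h = 0.1 diverges
# (amplification factor |1 - 50*0.1| = 4), while this step stays stable.
x_end = rosenbrock_euler(lambda x: -50.0 * x, lambda x: -50.0, x0=1.0, h=0.1, n=20)
```

Here each step multiplies x by 1/(1 + 50h) = 1/6, so the numerical solution decays monotonically to zero just like the exact solution, despite the large step.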
Optimal Quantization: Evolutionary Algorithm vs Stochastic Gradient
We propose a new method based on evolutionary optimization for obtaining an optimal L-quantizer of a multidimensional random variable. First, we briefly recall the main results about quantization. Then, we present the classical gradient-based approach (detailed in [2] and [7] for p = 2) used up to now to find a "local" optimal L-quantizer. Then, we give an algorithm that per...
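The classical stochastic-gradient approach the snippet contrasts against is CLVQ-style competitive learning for p = 2: each sample pulls its nearest codepoint toward itself with a decreasing step, converging to a locally optimal quantizer. The grid size, step schedule, and 1-D Gaussian target below are illustrative assumptions.

```python
import random

def clvq(samples, n_points=4, seed=0):
    """Stochastic-gradient (CLVQ-style) quantizer for a 1-D sample stream."""
    rng = random.Random(seed)
    grid = [rng.gauss(0.0, 1.0) for _ in range(n_points)]  # random initial codebook
    for t, x in enumerate(samples, start=1):
        # Winner: the codepoint nearest to the sample (quadratic distortion).
        i = min(range(n_points), key=lambda j: (grid[j] - x) ** 2)
        # Pull the winner toward the sample with a decreasing gain.
        grid[i] += (x - grid[i]) / (t ** 0.5 + 10.0)
    return sorted(grid)

rng = random.Random(42)
samples = [rng.gauss(0.0, 1.0) for _ in range(20000)]
grid = clvq(samples)
```

Because each update uses one sample and one winner, the method is cheap per step but only finds a local optimum, which is the limitation the evolutionary alternative targets.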
Identification of Multiple Input-multiple Output Non-linear System Cement Rotary Kiln using Stochastic Gradient-based Rough-neural Network
Because of the interactions among the variables of a multiple-input multiple-output (MIMO) nonlinear system, its identification is a difficult task, particularly in the presence of uncertainties. The cement rotary kiln (CRK) is a MIMO nonlinear system in a cement plant with a complicated mechanism and uncertain disturbances. The identification of the CRK is very important for different pur...
Stochastic Proximal Gradient Descent with Acceleration Techniques
Proximal gradient descent (PGD) and stochastic proximal gradient descent (SPGD) are popular methods for solving regularized risk minimization problems in machine learning and statistics. In this paper, we propose and analyze an accelerated variant of these methods in the mini-batch setting. This method incorporates two acceleration techniques: one is Nesterov’s acceleration method, and the othe...
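Nesterov's acceleration, the one technique the snippet names before being cut off, can be sketched with a deterministic, full-gradient FISTA-style loop for ℓ1-regularized least squares, min_w 0.5·||Xw − y||² + λ·||w||₁. This only illustrates the Nesterov part; the paper's stochastic mini-batch variant and its second acceleration technique are not reproduced. The data, step size, and λ are illustrative.

```python
import math

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, applied componentwise.
    return [math.copysign(max(abs(vi) - t, 0.0), vi) for vi in v]

def grad(X, y, w):
    # Gradient of the smooth part 0.5*||Xw - y||^2, i.e. X^T (Xw - y).
    r = [sum(xij * wj for xij, wj in zip(row, w)) - yi for row, yi in zip(X, y)]
    return [sum(X[i][j] * r[i] for i in range(len(X))) for j in range(len(w))]

def fista(X, y, lam=0.01, step=0.1, iters=300):
    w = [0.0] * len(X[0])
    z, t = w[:], 1.0  # z: Nesterov extrapolated point, t: momentum scalar
    for _ in range(iters):
        g = grad(X, y, z)
        # Proximal gradient step taken from the extrapolated point.
        w_new = soft_threshold([zi - step * gi for zi, gi in zip(z, g)], step * lam)
        t_new = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        # Nesterov extrapolation: overshoot along the last displacement.
        z = [wn + ((t - 1.0) / t_new) * (wn - wo) for wn, wo in zip(w_new, w)]
        w, t = w_new, t_new
    return w

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, 0.0, 1.0]  # generated by w_true = [1, 0]
w_hat = fista(X, y)
```

The step size 0.1 is below 1/L for this X (the largest eigenvalue of XᵀX is 3), which is the standard condition for the accelerated O(1/k²) rate.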
Journal
Journal title: Journal of Computational and Applied Mathematics
Year: 2009
ISSN: 0377-0427
DOI: 10.1016/j.cam.2008.07.014