Search results for: stochastic optimization
Number of results: 429,961
This paper deals with day-ahead programming under uncertainties in microgrids (MGs). A two-stage stochastic programming approach with fixed recourse was adopted. The studied MG was considered in grid-connected mode with the capability of exchanging power with the upstream network. Uncertain electricity market prices, unpredictable load demand, and uncertain wind and solar power values, du...
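As a rough illustration of the two-stage structure with fixed recourse mentioned above, the sketch below solves a tiny scenario-based deterministic equivalent: a first-stage day-ahead purchase plus per-scenario recourse purchases. All prices, demand scenarios, and probabilities are illustrative assumptions, not values from the paper.

# A minimal sketch of a two-stage stochastic program with fixed recourse,
# solved as a scenario-based deterministic equivalent LP.
import numpy as np
from scipy.optimize import linprog

c_da = 30.0                                # day-ahead price (first-stage cost), assumed
c_rt = 50.0                                # real-time balancing price (recourse cost), assumed
demand = np.array([80.0, 100.0, 120.0])    # demand scenarios, assumed
prob = np.array([0.3, 0.4, 0.3])           # scenario probabilities, assumed

n_s = len(demand)
# Decision vector: [x, y_1, ..., y_S] = day-ahead purchase + recourse per scenario
cost = np.concatenate(([c_da], c_rt * prob))

# Coverage constraints: x + y_s >= d_s  <=>  -x - y_s <= -d_s
A_ub = np.zeros((n_s, 1 + n_s))
A_ub[:, 0] = -1.0
A_ub[np.arange(n_s), 1 + np.arange(n_s)] = -1.0
b_ub = -demand

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n_s))
print("day-ahead purchase:", res.x[0], "expected cost:", res.fun)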
The paper studies stochastic optimization (programming) problems with compound functions containing expectations and extreme values of other random functions as arguments. Compound functions arise in various applications. A typical example is a variance function of nonlinear outcomes. Other examples include stochastic minimax problems, econometric models with latent variables, multi-level and m...
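One way to write the generic compound structure, sketched here in standard notation rather than the paper's: an inner expectation appears as an argument of an outer random function, with the variance of a nonlinear outcome as a special case.

F(x) = \mathbb{E}_{\omega}\!\left[\, f\bigl(x,\ \mathbb{E}_{\theta}[g(x,\theta)],\ \omega\bigr) \right],
\qquad
V(x) = \mathbb{E}_{\omega}\!\left[ \bigl(g(x,\omega) - \mathbb{E}_{\theta}[g(x,\theta)]\bigr)^{2} \right].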
Many microeconomic and engineering problems can be formulated as stochastic optimization problems that are modelled by Itô evolution systems and by cost functionals expressed as stochastic integrals. Our paper studies some optimization problems constrained by stochastic evolution systems, giving original results on stochastic first integrals, adjoint stochastic processes and a version of simpli...
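A generic instance of this class of problems, written in standard control-theoretic notation (the paper's exact formulation may differ): an Itô state equation constrains the control u, and the cost is an expected integral functional.

dX_t = b(X_t, u_t)\,dt + \sigma(X_t, u_t)\,dW_t, \qquad X_0 = x_0,
\qquad
J(u) = \mathbb{E}\!\left[ \int_0^T \ell(X_t, u_t)\,dt + \Phi(X_T) \right] \ \to\ \min_u.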
In this paper, a stochastic cell formation problem is studied within a queuing theory framework, taking reliability into account. Since the cell formation problem is NP-hard, two algorithms based on genetic and modified particle swarm optimization (MPSO) algorithms are developed to solve it. To generate initial solutions for these algorithms, a new heuristic method is developed, which always cre...
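For orientation, the sketch below shows the basic mechanics of a standard particle swarm optimization loop on a toy continuous objective. The paper's MPSO operates on a combinatorial cell-formation encoding, so this is only an illustration of the underlying update rule, with all parameters assumed.

# A minimal sketch of a standard PSO loop on a noisy toy objective.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # toy stochastic objective: sphere function plus small noise
    return np.sum(x**2) + rng.normal(scale=0.01)

dim, n_particles, iters = 5, 20, 100
w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights, assumed

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best point found:", gbest)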
Stochastic global optimization methods solve a global optimization problem by incorporating probabilistic (stochastic) elements, either in the problem data (the objective function, the constraints, etc.), in the algorithm itself, or in both. Global optimization is an important part of applied mathematics and computer science. The importance of global optimization is primar...
Zeroth-order (derivative-free) optimization attracts a lot of attention in machine learning because explicit gradient calculations may be computationally expensive or infeasible. To handle problems that are large scale in both volume and dimension, asynchronous doubly stochastic zeroth-order algorithms were recently proposed. The convergence rate of existing asynchronous doubly stochastic zeroth-order ...
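To make the "doubly stochastic zeroth-order" idea concrete, the sketch below runs a simplified serial version: each step samples a random data point (one source of stochasticity) and a random direction for a two-point finite-difference gradient estimate (the other). The asynchronous algorithms in the snippet above run many such updates in parallel; the data, step sizes, and smoothing parameter here are assumptions for illustration only.

# A minimal sketch of a doubly stochastic zeroth-order SGD step.
import numpy as np

rng = np.random.default_rng(1)

# toy least-squares data (assumed)
A = rng.normal(size=(200, 10))
b = A @ rng.normal(size=10) + 0.1 * rng.normal(size=200)

def loss_i(x, i):
    # per-sample loss, accessed only through function values (no gradients)
    return 0.5 * (A[i] @ x - b[i]) ** 2

x = np.zeros(10)
mu, lr = 1e-4, 0.01   # smoothing radius and step size, assumed

for t in range(10000):
    i = rng.integers(len(b))              # random sample
    u = rng.normal(size=x.shape)          # random direction
    u /= np.linalg.norm(u)
    # two-point zeroth-order gradient estimate along direction u
    g = (loss_i(x + mu * u, i) - loss_i(x - mu * u, i)) / (2 * mu) * u
    x -= lr * g

print("final full-data loss:", 0.5 * np.mean((A @ x - b) ** 2))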