Search results for: variable stepsize implementation
In this article, block BS methods are considered for the numerical solution of Volterra integro-differential equations (VIDEs). Convergence and stability properties are analyzed. A new Matlab code for the solution of VIDEs, called VIDEBS, is presented. Numerical results using a variable stepsize implementation show the effectiveness of the proposed code.
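The excerpt above does not reproduce the VIDEBS code itself, so as a hedged illustration of the common core of any variable stepsize implementation, the Python sketch below shows a standard elementary stepsize controller: after each step, the next stepsize is chosen from the local error estimate, the tolerance, and the method order. The function name and constants are illustrative assumptions, not taken from VIDEBS.

```python
def adapt_step(h, err, tol, order, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Elementary stepsize controller: rescale h so that the local error
    estimate `err` is driven toward the tolerance `tol` for a method of
    the given order."""
    if err == 0.0:
        return h * fac_max
    factor = fac * (tol / err) ** (1.0 / (order + 1))
    return h * min(fac_max, max(fac_min, factor))

# A step whose error estimate exceeds the tolerance is retried with a smaller h.
print(adapt_step(h=0.1, err=3e-5, tol=1e-6, order=4))
```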
There is considerable evidence suggesting that for Hamiltonian systems of ordinary differential equations it is better to use numerical integrators that preserve the symplectic property of the flow of the system, at least for long-time integrations. We present what we believe is a practical way of doing symplectic integration with variable stepsize. Another idea, orthogonal to variable stepsize, ...
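As context for why varying the stepsize of a symplectic integrator is delicate, here is a minimal Stormer-Verlet (leapfrog) step in Python: it is symplectic for a fixed stepsize, and naive error-based rescaling of h is known to degrade its long-time energy behaviour, which is the difficulty the abstract's "practical way" addresses. This is a generic textbook scheme under that assumption, not the method proposed in the paper.

```python
import numpy as np

def leapfrog_step(q, p, h, grad_V):
    """One Stormer-Verlet (leapfrog) step for H(q, p) = p^2/2 + V(q).
    The map is symplectic when h is held constant; changing h naively from
    step to step is what spoils the long-time behaviour."""
    p_half = p - 0.5 * h * grad_V(q)
    q_new = q + h * p_half
    p_new = p_half - 0.5 * h * grad_V(q_new)
    return q_new, p_new

# Harmonic oscillator: V(q) = q^2/2, so grad_V(q) = q.
q, p = np.array([1.0]), np.array([0.0])
for _ in range(10_000):
    q, p = leapfrog_step(q, p, 0.01, lambda q: q)
print(0.5 * (p @ p + q @ q))   # total energy stays close to the initial value 0.5
```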
The stability of variable stepsize LMS (VSLMS) algorithms with uncorrelated stationary Gaussian data is studied. It is found that when the stepsize is determined by the past data, the boundedness of the stepsize by the usual stability condition of fixed stepsize LMS is sufficient for the stability of VSLMS. When the stepsize is also related to the current data, the above constraint is no longer sufficient ...
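To make the VSLMS setting concrete, the sketch below runs an LMS system-identification loop whose stepsize is adapted from past squared errors (a Kwong-Johnston-style rule) and clipped to a bound playing the role of the fixed-stepsize stability condition, i.e. the "past data" case the abstract calls sufficient for stability. The filter length, signals, and constants are assumptions for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samples = 4, 5000
w_true = np.array([0.5, -0.3, 0.2, 0.1])          # unknown system (assumed for the demo)
x = rng.standard_normal(n_samples)
d = np.convolve(x, w_true)[:n_samples] + 0.01 * rng.standard_normal(n_samples)

w = np.zeros(n_taps)
mu, mu_max = 0.01, 0.05        # mu_max stands in for the fixed-stepsize stability bound
alpha, gamma = 0.97, 0.005
for n in range(n_taps - 1, n_samples):
    u = x[n - n_taps + 1:n + 1][::-1]              # regressor [x[n], ..., x[n-3]]
    e = d[n] - w @ u
    w = w + mu * e * u                             # LMS update with the current stepsize
    mu = min(mu_max, alpha * mu + gamma * e**2)    # stepsize driven by past errors, kept bounded
print(w)                                           # approaches w_true
```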
Second derivative general linear methods (SGLMs) have already been implemented in a variable stepsize environment using the Nordsieck technique. In this paper, we introduce SGLMs directly on a nonuniform grid. By deriving the order conditions of the proposed methods of order $p$ and stage order $q=p$, some explicit examples of these methods up to order four are given. Numerical experiments show the efficiency of the methods in solving nonstiff problems and confirm the...
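The SGLM order conditions are not reconstructed here, but the Nordsieck technique the abstract contrasts with has a simple core that can be illustrated: the state is stored as scaled derivatives, so when the stepsize changes from h to r*h the vector is rescaled componentwise by powers of r. A minimal sketch, assuming a generic Nordsieck vector:

```python
import numpy as np

def rescale_nordsieck(z, r):
    """Stepsize change in a Nordsieck representation: component k stores
    h**k * y^(k)(t) / k!, so replacing h by r*h multiplies it by r**k."""
    return z * r ** np.arange(len(z))

z = np.array([1.0, 0.5, 0.125, 0.02])   # hypothetical Nordsieck vector (length = order + 1)
print(rescale_nordsieck(z, 0.5))        # the same state expressed for a halved stepsize
```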
This paper considers the development of a two-point predictor-corrector block method for solving delay differential equations. The formulae are represented in divided difference form and the algorithm is implemented in a variable stepsize, variable order technique. The block method produces two new values at a single integration step. Numerical results are compared with existing methods and it is ...
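The two-point block formulae themselves are not given in the excerpt, so the following is only a generic predictor-corrector step for a constant-delay DDE y'(t) = f(t, y(t), y(t - tau)): an Euler predictor followed by a trapezoidal corrector, with a fixed stepsize chosen to divide the delay so the delayed value is an earlier grid point. The variable stepsize, variable order machinery of the paper is not reproduced here.

```python
def solve_dde_pece(f, phi, tau, t_end, m=20):
    """Predict-evaluate-correct-evaluate scheme for y'(t) = f(t, y(t), y(t - tau))
    with constant delay tau. The stepsize h = tau/m divides the delay, so the
    delayed value is a stored grid value (the history phi is used for t <= 0)."""
    h = tau / m
    n_steps = int(round(t_end / h))
    ts = [i * h for i in range(n_steps + 1)]
    ys = [phi(0.0)]

    def delayed(i):                 # y(t_i - tau) from the grid or from the history
        j = i - m
        return ys[j] if j >= 0 else phi(ts[i] - tau)

    for i in range(n_steps):
        t, y = ts[i], ys[i]
        fp = f(t, y, delayed(i))
        y_pred = y + h * fp                                 # predictor (Euler)
        fc = f(t + h, y_pred, delayed(i + 1))
        ys.append(y + 0.5 * h * (fp + fc))                  # corrector (trapezoidal)
    return ts, ys

# Example: y'(t) = -y(t - 1), with y(t) = 1 for t <= 0.
ts, ys = solve_dde_pece(lambda t, y, yd: -yd, lambda t: 1.0, tau=1.0, t_end=5.0)
print(ys[-1])
```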
Motivated by machine learning problems over large data sets and distributed optimization over networks, we develop and analyze a new method, called the incremental Newton method, for minimizing the sum of a large number of strongly convex functions. We show that our method is globally convergent for a variable stepsize rule. We further show that under a gradient growth condition, the convergence rate is ...
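As a rough illustration of the incremental Newton idea (cycling through the component functions, accumulating their curvature, and taking scaled steps under a variable stepsize rule), here is a simplified Python loop on a sum of strongly convex quadratics. It is a sketch under those assumptions, not the authors' exact update or analysis.

```python
import numpy as np

def incremental_newton(grads, hesss, x0, n_passes=20, stepsize=lambda k: 1.0 / (1 + 0.1 * k)):
    """Illustrative incremental Newton loop for minimizing sum_i f_i(x):
    components are visited cyclically, a running sum of component Hessians
    scales each gradient step, and the stepsize follows a variable rule."""
    x = x0.copy()
    H = np.zeros((len(x0), len(x0)))
    k = 0
    for _ in range(n_passes):
        for g, h in zip(grads, hesss):
            H += h(x)                                   # accumulate curvature information
            x = x - stepsize(k) * np.linalg.solve(H, g(x))
            k += 1
    return x

# Example: f_i(x) = 0.5*(a_i @ x - b_i)**2 + 0.05*||x||^2, each strongly convex.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
grads = [lambda x, a=a_i, bi=b_i: a * (a @ x - bi) + 0.1 * x for a_i, b_i in zip(A, b)]
hesss = [lambda x, a=a_i: np.outer(a, a) + 0.1 * np.eye(3) for a_i in A]
print(incremental_newton(grads, hesss, np.zeros(3)))    # final iterate after 20 cyclic passes
```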