Search results for: variable stepsize

Number of results: 259826

2003
Roman Kozlov Anne Kværnø Brynjulf Owren

Splitting methods are frequently used in solving stiff differential equations; it is common to split the system of equations into a stiff and a nonstiff part. The classical theory for the local order of consistency is valid only for stepsizes that are smaller than those one would typically prefer to use in the integration. Error control and stepsize selection devices based on classical local or...
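As a loose illustration of the splitting-plus-stepsize-control setting above, here is a minimal Strang-splitting integrator with step-doubling error control on a made-up stiff/nonstiff test problem; the matrix, the nonstiff term, the tolerance, and the controller constants are illustrative assumptions, not the authors' scheme.

```python
import numpy as np

# Toy split system y' = A y + g(y): the stiff linear part A is integrated exactly,
# the nonstiff part g by a few explicit Euler substeps (both choices are illustrative).
A = np.array([[-1000.0, 0.0], [0.0, -0.5]])          # stiff, diagonal for simplicity
g = lambda y: np.array([np.sin(y[1]), 0.1 * y[0]])   # mild nonstiff coupling

def stiff_flow(y, h):
    """Exact flow of y' = A y over a step h (A is diagonal here)."""
    return np.exp(np.diag(A) * h) * y

def nonstiff_flow(y, h, substeps=4):
    """Approximate flow of y' = g(y) with explicit Euler substeps."""
    for _ in range(substeps):
        y = y + (h / substeps) * g(y)
    return y

def strang_step(y, h):
    """One Strang splitting step: half stiff, full nonstiff, half stiff."""
    y = stiff_flow(y, h / 2)
    y = nonstiff_flow(y, h)
    return stiff_flow(y, h / 2)

def integrate(y, t_end, h=0.1, tol=1e-5):
    """Crude step-doubling error estimate driving the stepsize controller."""
    t = 0.0
    while t < t_end:
        h = min(h, t_end - t)
        one_step = strang_step(y, h)
        two_steps = strang_step(strang_step(y, h / 2), h / 2)
        err = np.linalg.norm(one_step - two_steps)
        if err < tol:                                # accept the step
            y, t = two_steps, t + h
        # order-2 method: adjust h by (tol/err)^(1/3), clipped to [0.2, 2.0]
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3)))
    return y

print(integrate(np.array([1.0, 1.0]), 1.0))
```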

2004
Lijun Qian Xiangfang Li Zoran Gajic

In this paper, a new adaptive step-size discrete power control scheme is proposed, based on a review of the practical power control schemes implemented in the IS-95, CDMA2000 and WCDMA wireless systems and of the power control algorithms proposed in the literature. The transmit power is controlled in the discrete power domain while using the idea of the most popular power control algorithm (e.g. DCPC) in p...
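The following toy simulation sketches the general flavour of closed-loop, discrete-step power control with an adapting step size; the channel model, thresholds, and step-adaptation rule are invented for illustration and do not reproduce the scheme proposed in the paper or the standardized IS-95/CDMA2000/WCDMA loops.

```python
import numpy as np

# Illustrative closed-loop power control in the dB domain (not the paper's scheme).
# The receiver compares the measured SIR to a target and sends 1-bit up/down
# commands; the transmitter changes its power by a discrete step whose size adapts
# to the command history (larger steps while commands keep pointing the same way).

def simulate(target_sir_db=7.0, n_slots=60, seed=0):
    rng = np.random.default_rng(seed)
    p_db, step_db, last_cmd = 0.0, 1.0, 0        # transmit power, step size, last command
    gain_db = -70.0 + rng.normal(0, 2, n_slots)  # toy slowly varying channel gain
    noise_db = -80.0
    history = []
    for k in range(n_slots):
        sir_db = p_db + gain_db[k] - noise_db      # measured SIR this slot
        cmd = 1 if sir_db < target_sir_db else -1  # 1-bit feedback: up or down
        if cmd == last_cmd:
            step_db = min(2.0, step_db * 2)        # same direction: grow the step
        else:
            step_db = max(0.25, step_db / 2)       # direction flipped: shrink it
        p_db = np.clip(p_db + cmd * step_db, -10.0, 30.0)
        last_cmd = cmd
        history.append((k, sir_db, p_db, step_db))
    return history

for k, sir, p, s in simulate()[:10]:
    print(f"slot {k:2d}  SIR {sir:6.2f} dB  power {p:6.2f} dBm  step {s:.2f} dB")
```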

Journal: :SIAM Journal on Optimization 2016
Yunlong He Renato D. C. Monteiro

This article proposes a new algorithm for solving a class of composite convex-concave saddle-point problems. The new algorithm is a special instance of the hybrid proximal extragradient framework in which a variant of Nesterov's accelerated method is used to approximately solve the prox subproblems. One of the advantages of the new method is that it works for any constant choice of proximal stepsize. More...
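To fix ideas about constant-stepsize iterations for convex-concave saddle points, here is a plain (non-accelerated) extragradient loop on a toy bilinear problem; it is not the accelerated HPE method of the paper, and the matrix, stepsize factor, and iteration count are arbitrary choices.

```python
import numpy as np

# Extragradient iteration with a constant stepsize on the toy bilinear problem
# min_x max_y x^T A y; it only illustrates the "constant stepsize" ingredient.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
x, y = np.ones(5), np.ones(5)
L = np.linalg.norm(A, 2)      # Lipschitz constant of the gradient map
step = 0.9 / L                # constant stepsize below 1/L

for k in range(2000):
    # prediction step at the current point
    xh = x - step * (A @ y)
    yh = y + step * (A.T @ x)
    # correction step using gradients at the predicted point
    x = x - step * (A @ yh)
    y = y + step * (A.T @ xh)

# at the saddle point of the unconstrained bilinear problem both residuals vanish
print("residuals:", np.linalg.norm(A @ y), np.linalg.norm(A.T @ x))
```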

Journal: :CoRR 2016
Soham De Abhay Kumar Yadav David W. Jacobs Tom Goldstein

Classical stochastic gradient methods for optimization rely on noisy gradient approximations that become progressively less accurate as iterates approach a solution. The large noise and small signal in the resulting gradients make it difficult to use them for adaptive stepsize selection and automatic stopping. We propose alternative “big batch” SGD schemes that adaptively grow the batch size o...
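A rough sketch of the "grow the batch when noise drowns the signal" idea on a synthetic least-squares problem; the variance test, growth factor, and learning rate are illustrative guesses, not the authors' exact rule.

```python
import numpy as np

# Estimate the per-sample gradient variance on the current batch and enlarge the
# batch whenever the noise estimate exceeds the squared gradient norm.
rng = np.random.default_rng(0)
n, d = 10_000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

def per_sample_grads(w, idx):
    """Least-squares per-sample gradients for the rows in idx."""
    r = X[idx] @ w - y[idx]
    return X[idx] * r[:, None]                # shape (batch, d)

w, batch, lr = np.zeros(d), 32, 0.01
for it in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    G = per_sample_grads(w, idx)
    g = G.mean(axis=0)
    var = G.var(axis=0).sum()                 # total per-sample gradient variance
    if var / batch > np.dot(g, g):            # noise dominates signal: grow the batch
        batch = min(n, batch * 2)
    w -= lr * g

print("final batch size:", batch, " parameter error:", np.linalg.norm(w - w_true))
```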

Journal: :MCSS 1989
William A. Sethares Brian D. O. Anderson C. Richard Johnson

This paper presents a unified framework for the analysis of several discrete-time adaptive parameter estimation algorithms, including RML with nonvanishing stepsize, several ARMAX identifiers, the Landau-style output error algorithms, and certain others for which no stability proof has yet appeared. A general algorithmic form is defined, incorporating a linear time-varying regressor filter a...
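For concreteness, here is a generic constant-stepsize gradient identifier whose update direction uses a first-order filtered copy of the regressor, applied to a toy ARX model; this is only a stand-in for the general "regressor filter plus update law" form analyzed in the paper, with made-up plant, filter pole, and stepsize.

```python
import numpy as np

# Constant ("nonvanishing") stepsize identifier with a filtered regressor.
rng = np.random.default_rng(2)
theta_true = np.array([0.8, -0.3, 0.5])          # unknown ARX parameters
theta = np.zeros(3)                              # estimate
mu, alpha = 0.05, 0.7                            # stepsize, regressor-filter pole
y1 = y2 = 0.0                                    # past outputs y(k-1), y(k-2)
phi_f = np.zeros(3)                              # filtered regressor state

for k in range(5000):
    u = rng.normal()                             # persistently exciting input
    phi = np.array([y1, y2, u])                  # regressor
    y = theta_true @ phi + 0.01 * rng.normal()   # plant output with small noise
    phi_f = alpha * phi_f + (1 - alpha) * phi    # first-order regressor filter
    e = y - theta @ phi                          # prediction error
    theta = theta + mu * e * phi_f               # constant-stepsize update
    y1, y2 = y, y1                               # shift output history

print("estimate:", np.round(theta, 3), " true:", theta_true)
```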

Journal: :Optimization Methods and Software 2017
O. Kolossoski Renato D. C. Monteiro

This paper describes an accelerated HPE-type method based on general Bregman distances for solving monotone saddle-point (SP) problems. The algorithm is a special instance of a non-Euclidean hybrid proximal extragradient framework introduced by Svaiter and Solodov [28], where the prox sub-inclusions are solved using an accelerated gradient method. It generalizes the accelerated HPE algorithm pre...
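The non-Euclidean ingredient in isolation: a Bregman proximal step with the entropy distance on the probability simplex, which admits the closed multiplicative form used by mirror-descent-type methods. The cost vector and stepsize below are arbitrary, and this is not the accelerated HPE algorithm itself.

```python
import numpy as np

def entropic_prox(x, g, step):
    """argmin_u <step*g, u> + KL(u, x) over the simplex, in closed form."""
    u = x * np.exp(-step * g)
    return u / u.sum()

# Example: repeated Bregman prox steps drive a point on the simplex toward the
# minimizer of a linear cost (mass concentrates on the cheapest coordinate).
rng = np.random.default_rng(3)
c = rng.normal(size=6)                 # linear cost vector
x = np.full(6, 1 / 6)                  # start at the uniform distribution
for k in range(200):
    x = entropic_prox(x, c, step=0.5)

print("mass on the best coordinate:", x[np.argmin(c)])
```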

2017
Lili Pan Shenglong Zhou Naihua Xiu Houduo Qi

The iterative hard thresholding (IHT) algorithm is a popular greedy-type method in (linear and nonlinear) compressed sensing and sparse optimization problems. In this paper, we give an improved iterative hard thresholding algorithm for solving the nonnegative sparsity optimization (NSO) problem by employing an Armijo-type stepsize rule, which automatically adjusts the stepsize and support set and lead...
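A minimal nonnegative iterative-hard-thresholding loop with Armijo-style backtracking on a synthetic sparse least-squares instance; the backtracking constants and the acceptance test are illustrative and may differ from the rule proposed in the paper.

```python
import numpy as np

# Nonnegative IHT with a backtracked stepsize on a toy compressed-sensing problem.
rng = np.random.default_rng(4)
m, n, s = 40, 100, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.uniform(1, 2, s)
b = A @ x_true

def proj_sparse_nonneg(z, s):
    """Keep the s largest nonnegative entries, zero out everything else."""
    z = np.maximum(z, 0.0)
    keep = np.argsort(z)[-s:]
    out = np.zeros_like(z)
    out[keep] = z[keep]
    return out

f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
x = np.zeros(n)
for it in range(100):
    g = A.T @ (A @ x - b)
    t = 1.0
    while True:                                  # Armijo-style backtracking on t
        x_new = proj_sparse_nonneg(x - t * g, s)
        decrease = 1e-4 / (2 * t) * np.linalg.norm(x_new - x) ** 2
        if f(x_new) <= f(x) - decrease or t < 1e-8:
            break
        t *= 0.5
    x = x_new

print("support recovered:", set(np.nonzero(x)[0]) == set(np.nonzero(x_true)[0]))
```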

Journal: :Computer Physics Communications 2004
Jörg Wensch Markus Däne Wolfram Hergert Arthur Ernst

In solid-state physics, the solution of the Dirac and Schrödinger equations by operator splitting methods leads to differential equations with oscillating solutions in the radial direction. For standard time integrators like Runge–Kutta or multistep methods, the stepsize is restricted approximately by the length of the period. In contrast, the recently developed Magnus methods allow stepsizes that...
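The stepsize contrast can be seen on a toy oscillatory system y' = A(t) y: a second-order Magnus ("exponential midpoint") step stays bounded with steps spanning more than an oscillation period, while classical RK4 blows up. The test matrix below is invented and is not the radial Dirac/Schrödinger problem treated in the paper; SciPy is assumed available for the matrix exponential.

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is installed

omega = 200.0                   # fast oscillation frequency (toy value)
A = lambda t: np.array([[0.0, 1.0],
                        [-omega**2 * (1 + 0.1 * np.sin(t)), 0.0]])

def magnus_step(y, t, h):
    """Second-order Magnus (exponential midpoint) rule: y <- exp(h*A(t+h/2)) y."""
    return expm(h * A(t + h / 2)) @ y

def rk4_step(y, t, h):
    """Classical explicit RK4 step for y' = A(t) y."""
    k1 = A(t) @ y
    k2 = A(t + h / 2) @ (y + h / 2 * k1)
    k3 = A(t + h / 2) @ (y + h / 2 * k2)
    k4 = A(t + h) @ (y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(stepper, h, t_end=1.0):
    y, t = np.array([1.0, 0.0]), 0.0
    while t < t_end - 1e-12:
        y = stepper(y, t, min(h, t_end - t))
        t += h
    return y

h = 0.05                         # larger than the oscillation period 2*pi/omega
print("Magnus :", integrate(magnus_step, h))   # stays bounded
print("RK4    :", integrate(rk4_step, h))      # grows explosively at this stepsize
```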

1999
Justin A. Boyan

TD(λ) is a popular family of algorithms for approximate policy evaluation in large MDPs. TD(λ) works by incrementally updating the value function after each observed transition. It has two major drawbacks: it makes inefficient use of data, and it requires the user to manually tune a stepsize schedule for good performance. For the case of linear value function approximations and λ = 0, the Least-Squ...
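A bare-bones TD(λ) policy-evaluation loop with eligibility traces on the classic five-state random walk, showing the hand-tuned stepsize schedule that least-squares methods such as LSTD are designed to remove; all constants here are illustrative.

```python
import numpy as np

# Tabular TD(lambda) on a 5-state random walk: exit left gives reward 0, exit
# right gives reward 1; the true value of state i is (i+1)/6.
rng = np.random.default_rng(5)
n_states, lam, gamma = 5, 0.8, 1.0
phi = np.eye(n_states)                           # one-hot (tabular) features
w = np.zeros(n_states)

for episode in range(2000):
    alpha = 0.2 / (1 + episode / 100)            # manually tuned stepsize schedule
    z = np.zeros(n_states)                       # eligibility trace
    s = n_states // 2                            # start in the middle state
    while True:
        s_next = s + rng.choice([-1, 1])         # unbiased random walk
        done = s_next < 0 or s_next >= n_states
        r = 1.0 if s_next >= n_states else 0.0
        v_next = 0.0 if done else w @ phi[s_next]
        delta = r + gamma * v_next - w @ phi[s]  # TD error
        z = gamma * lam * z + phi[s]             # accumulating trace
        w = w + alpha * delta * z                # incremental TD(lambda) update
        if done:
            break
        s = s_next

print("estimated values:", np.round(w, 2), " (true values: (i+1)/6)")
```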

2015
Tom Goldstein Min Li Xiaoming Yuan

The alternating direction method of multipliers (ADMM) is an important tool for solving complex optimization problems, but it involves minimization sub-steps that are often difficult to solve efficiently. The Primal-Dual Hybrid Gradient (PDHG) method is a powerful alternative that often has simpler sub-steps than ADMM, thus producing lower complexity solvers. Despite the flexibility of this met...
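A vanilla fixed-stepsize PDHG (Chambolle-Pock style) loop on a 1-D total-variation denoising toy, included only to fix notation for the primal and dual sub-steps; the adaptive-stepsize strategy studied in the paper is not implemented here, and the problem data and regularization weight are synthetic choices.

```python
import numpy as np

# PDHG for min_x 0.5*||x - b||^2 + lam*||D x||_1, written as a saddle-point
# problem over the dual variable y with |y| <= lam (D = forward differences).
rng = np.random.default_rng(6)
n, lam = 100, 0.2
signal = np.repeat([0.0, 1.0, 0.3], [30, 40, 30])      # piecewise-constant truth
b = signal + 0.1 * rng.normal(size=n)                   # noisy observation

D = np.diff(np.eye(n), axis=0)                          # forward-difference operator
tau = sigma = 0.25                                      # fixed steps: tau*sigma*||D||^2 < 1

x = b.copy()
x_bar = x.copy()
y = np.zeros(n - 1)
for it in range(500):
    y = np.clip(y + sigma * D @ x_bar, -lam, lam)       # dual ascent + projection
    x_new = (x - tau * D.T @ y + tau * b) / (1 + tau)   # primal prox step
    x_bar = 2 * x_new - x                               # over-relaxation
    x = x_new

print("||b - truth|| =", round(float(np.linalg.norm(b - signal)), 3),
      " ||x - truth|| =", round(float(np.linalg.norm(x - signal)), 3))
```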

Chart: number of search results per year
