Search results for: sufficient descent directions

Number of results: 286567

2013
Yatao Bian Xiong Li Mingqi Cao Yuncai Liu

Parallel coordinate descent algorithms have emerged with the growing demand for large-scale optimization. In general, previous algorithms are usually limited by divergence under a high degree of parallelism (DOP), or require data pre-processing to avoid divergence. To better exploit parallelism, we propose a coordinate-descent-based parallel algorithm that needs no data pre-processing, termed Bundl...
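The abstract is truncated before the algorithm's details, so as background only, here is a minimal sketch of synchronous parallel coordinate descent on a least-squares problem, the setting in which a high DOP can cause divergence; the quadratic test problem, the block size `dop`, and the exact coordinate-wise steps are illustrative assumptions, not the proposed method.

```python
import numpy as np

# Synchronous parallel coordinate descent on f(x) = 0.5*||A x - b||^2.
# All coordinates in a block are updated from the same iterate, which is
# what can cause divergence when the degree of parallelism (dop) is high.

def parallel_cd(A, b, dop=4, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A ** 2).sum(axis=0)                    # per-coordinate curvature ||a_j||^2
    for _ in range(iters):
        block = rng.choice(n, size=min(dop, n), replace=False)
        grad = A.T @ (A @ x - b)                     # gradient at the shared iterate
        x[block] -= grad[block] / col_sq[block]      # exact 1-D minimization per coordinate
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((100, 20))
    b = rng.standard_normal(100)
    x = parallel_cd(A, b, dop=4)
    print("residual:", np.linalg.norm(A @ x - b))
```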

2002
Rongxing Li Fei Ma Fengliang Xu Larry H. Matthies Clark F. Olson Raymond E. Arvidson

The planned 2003 Mars Exploration Rover (MER) Mission and follow-on surface activities associated with landed missions will focus on long-distance roving and sample return, which require detailed knowledge of vehicle locations in both local and global reference systems. In this paper we argue that this rover localization should be known to within 0.1% of the distance traversed for local coo...

Journal: :Computational Optimization and Applications 2021

This work presents a convergence rate analysis of stochastic variants of a broad class of direct-search methods of directional type. It introduces an algorithm designed to optimize differentiable objective functions f whose values can only be computed through a stochastically noisy blackbox. The proposed algorithm (SDDS) accepts new iterates by imposing a sufficient decrease condition on so-called probabilistic est...
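As a hedged illustration of the general mechanism described here (a directional direct search that accepts trial points only under a sufficient decrease test on noisy function values), the sketch below uses a coordinate positive spanning set and a forcing function rho(alpha) = c*alpha^2; the noise model, step-size updates, and test function are assumptions, not the SDDS algorithm itself.

```python
import numpy as np

# Directional direct search with a sufficient decrease test on noisy
# function estimates: a step along +/- coordinate directions is accepted
# only if the estimated decrease exceeds a forcing function rho(alpha).

def noisy_f(x, rng, sigma=0.01):
    return np.sum(x ** 2) + sigma * rng.standard_normal()   # noisy blackbox (assumed)

def direct_search(x0, iters=100, alpha=1.0, c=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    n = x.size
    dirs = np.vstack([np.eye(n), -np.eye(n)])        # positive spanning set of directions
    for _ in range(iters):
        fx = noisy_f(x, rng)
        rho = c * alpha ** 2                         # forcing function rho(alpha)
        for d in dirs:
            trial = x + alpha * d
            if noisy_f(trial, rng) < fx - rho:       # sufficient decrease test
                x = trial
                alpha *= 2.0                         # successful poll: expand step
                break
        else:
            alpha *= 0.5                             # unsuccessful poll: contract step
    return x

print(direct_search(np.array([3.0, -2.0])))
```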

Journal: :Automatica 1968
Bernard Pagurek C. Murray Woodside

This paper extends the conjugate gradient minimization method of Fletcher and Reeves to optimal control problems. The technique is directly applicable only to unconstrained problems; if terminal conditions and inequality constraints are present, the problem must be converted to an unconstrained form, e.g., by penalty functions. Only the gradient trajectory, its norm, and one additional...
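For reference, a minimal finite-dimensional sketch of the Fletcher-Reeves recursion that the paper extends to control trajectories; the quadratic test problem and the simple backtracking line search are assumptions made only to keep the example self-contained.

```python
import numpy as np

# Fletcher-Reeves nonlinear conjugate gradient on a smooth, unconstrained
# function. Constrained optimal control problems would first be converted
# to this form, e.g. by penalty functions, as the abstract notes.

def fletcher_reeves(f, grad, x0, iters=100, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        # backtracking (Armijo) line search along d -- an assumption here
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * g.dot(d) and t > 1e-12:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = g_new.dot(g_new) / g.dot(g)           # Fletcher-Reeves coefficient
        d = -g_new + beta * d                        # new conjugate direction
        x, g = x_new, g_new
    return x

# quadratic test problem (assumed for illustration)
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
print(fletcher_reeves(f, grad, [4.0, -3.0]))
```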

Journal: :Journal of Mathematical Physics 2022

We derive one- and two-dimensional models for classical electromagnetism by making use of Hadamard's method of descent. Low-dimensional electromagnetism is conceived as a specialization of the higher-dimensional one, in which the fields are uniform along the additional spatial directions. This strategy yields two independent electromagnetisms in two coordinates and four in one coordinate.
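As a hedged illustration of the descent idea in its simplest textbook form (not the paper's derivation): if every field component is assumed independent of z, the source-free Maxwell curl equations split into two uncoupled subsystems in the remaining two coordinates, which is one way to read "two independent electromagnetisms".

```latex
% Assume \partial_z \mathbf{E} = \partial_z \mathbf{B} = 0 in the
% source-free Maxwell equations; the curl equations then decouple:
\begin{aligned}
% subsystem in (E_z, B_x, B_y):
\partial_t E_z &= c^2\left(\partial_x B_y - \partial_y B_x\right), &
\partial_t B_x &= -\partial_y E_z, &
\partial_t B_y &= \partial_x E_z,\\[4pt]
% subsystem in (B_z, E_x, E_y):
\partial_t B_z &= \partial_y E_x - \partial_x E_y, &
\partial_t E_x &= c^2\,\partial_y B_z, &
\partial_t E_y &= -c^2\,\partial_x B_z.
\end{aligned}
```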

We find a criterion for a morphism of coalgebras over a Barr-exact category to be an effective descent morphism, and we determine (effective) descent morphisms for coalgebras over toposes in some cases. Also, we study some exactness properties of endofunctors of arbitrary categories in connection with natural transformations between them, as well as those of the functors that these transformations induce between co...
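For background, the standard notion in play here (a general definition, not the paper's coalgebra-specific criterion): a morphism p in a category with pullbacks is an effective descent morphism when pulling back along p gives an equivalence onto descent data.

```latex
% Standard definition, stated for a category \mathcal{C} with pullbacks
% (background only; the paper's criterion for coalgebras over a
% Barr-exact category is more specific):
p \colon E \to B \ \text{is an effective descent morphism} \iff
p^{*} \colon (\mathcal{C} \downarrow B) \longrightarrow \mathrm{Des}(p)
\ \text{is an equivalence of categories,}
```

where Des(p) denotes the category of descent data associated with p.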

2016
Elad Richardson Rom Herskovitz Boris Ginsburg Michael Zibulevsky

We present SEBOOST, a technique for boosting the performance of existing stochastic optimization methods. SEBOOST applies a secondary optimization process in the subspace spanned by the last steps and descent directions. The method was inspired by the SESOP optimization method and has been adapted for stochastic learning. It can be applied on top of any existing optimization method with no...
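A toy sketch of the core idea (periodically running a secondary optimization restricted to the subspace spanned by the most recent steps, on top of a plain SGD baseline); the least-squares objective, subspace size, and exact subspace solve are assumptions for illustration, not the SEBOOST procedure itself.

```python
import numpy as np

# Toy "subspace boosting": run plain SGD, and every few iterations solve a
# small secondary problem restricted to the span of the most recent steps.
# For a least-squares objective the subspace step can be solved exactly.

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 30))
b = rng.standard_normal(200)
f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2

x = np.zeros(30)
lr, boost_every, steps = 1e-3, 20, []
for it in range(1, 401):
    i = rng.integers(0, 200, size=10)                # minibatch of rows
    g = A[i].T @ (A[i] @ x - b[i])                   # stochastic gradient
    step = -lr * g
    x += step
    steps.append(step)
    if it % boost_every == 0:                        # secondary subspace step
        P = np.stack(steps[-boost_every:], axis=1)   # columns span the recent steps
        AP = A @ P
        alpha, *_ = np.linalg.lstsq(AP, b - A @ x, rcond=None)
        x += P @ alpha                               # best correction inside the subspace
print("final objective:", f(x))
```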

2013
Sergey V. Dolgov Dmitry V. Savostyanov

We introduce a family of numerical algorithms for the solution of linear systems in higher dimensions, with the matrix and right-hand side given and the solution sought in the tensor train format. The proposed methods are rank-adaptive and follow the alternating directions framework, but in contrast to ALS methods, in each iteration a tensor subspace is enlarged by a set of vectors chosen similar...
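As a heavily simplified stand-in for the alternating-directions framework in a tensor format (not the rank-adaptive tensor-train algorithm described here), the sketch below fits a rank-1 Kronecker ansatz x = kron(u, v) to a Kronecker-structured system by alternating small least-squares solves for u and v; the problem sizes, the Kronecker structure, and the rank-1 restriction are assumptions.

```python
import numpy as np

# Alternating optimization of a rank-1 Kronecker ansatz x = kron(u, v) for
# A x = b with A = kron(A1, A2): fix v and solve a small least-squares
# problem for u, then fix u and solve for v, and repeat. A toy stand-in
# for ALS-type solvers in tensor formats.

rng = np.random.default_rng(0)
n1, n2 = 8, 6
A1, A2 = rng.standard_normal((n1, n1)), rng.standard_normal((n2, n2))
u_true, v_true = rng.standard_normal(n1), rng.standard_normal(n2)
b = np.kron(A1 @ u_true, A2 @ v_true)            # exact rank-1 right-hand side
B = b.reshape(n1, n2)                            # matricized right-hand side

u, v = rng.standard_normal(n1), rng.standard_normal(n2)
for _ in range(30):
    w = A2 @ v                                   # minimize ||(A1 u) w^T - B||_F over u
    u = np.linalg.solve((w @ w) * (A1.T @ A1), A1.T @ B @ w)
    z = A1 @ u                                   # minimize ||z (A2 v)^T - B||_F over v
    v = np.linalg.solve((z @ z) * (A2.T @ A2), A2.T @ B.T @ z)
print("residual:", np.linalg.norm(np.kron(A1 @ u, A2 @ v) - b))
```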

2013
Yangyang Shi Mei-Yuh Hwang Kaisheng Yao Martha Larson

Recurrent neural network based language models (RNNLM) have been demonstrated to outperform traditional n-gram language models in automatic speech recognition. However, the superior performance is obtained at the cost of expensive model training. In this paper, we propose a sentence-independent subsampling stochastic gradient descent algorithm (SIS-SGD) to speed up the training of RNNLM using p...
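A generic sketch of the subsampling idea (each epoch visits only a random fraction of the training examples, trading some accuracy for training speed); the logistic-regression model and the 20% subsampling rate are stand-in assumptions, not the SIS-SGD algorithm or an RNNLM.

```python
import numpy as np

# Subsampled SGD: in every epoch only a random fraction of the training
# examples is visited, which trades a little accuracy for faster training.
# A tiny logistic regression stands in for the (much larger) RNNLM here.

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 20))
w_true = rng.standard_normal(20)
y = (X @ w_true + 0.1 * rng.standard_normal(5000) > 0).astype(float)

def train(subsample=0.2, epochs=5, lr=0.1):
    w = np.zeros(20)
    n = len(y)
    for _ in range(epochs):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        for i in idx:                                # plain SGD over the subsample
            z = np.clip(X[i] @ w, -30.0, 30.0)       # avoid overflow in exp
            p = 1.0 / (1.0 + np.exp(-z))
            w -= lr * (p - y[i]) * X[i]              # logistic-loss gradient step
    return w

w = train()
acc = np.mean(((X @ w) > 0) == (y > 0.5))
print("train accuracy with 20% subsampling:", acc)
```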

2016
Daniel Jiwoong Im Michael Tao Kristin Branson

The success of deep neural networks hinges on our ability to accurately and efficiently optimize high-dimensional, non-convex functions. In this paper, we empirically investigate the loss functions of state-of-the-art networks, and how commonly used stochastic gradient descent variants optimize these loss functions. To do this, we visualize the loss functions by projecting them down to low-dimens...
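A minimal sketch of one common projection scheme such studies use: evaluating the loss along the straight line between an initial and a final parameter vector; the tiny logistic-regression loss, the crude gradient-descent "training", and the interpolation range are assumptions, not the paper's visualization pipeline.

```python
import numpy as np

# One-dimensional loss-surface visualization: evaluate the loss along the
# straight line theta(a) = (1 - a) * theta_init + a * theta_final, a common
# way of projecting a high-dimensional loss onto a low-dimensional slice.

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 10))
y = (X @ rng.standard_normal(10) > 0).astype(float)

def loss(w):
    z = np.clip(X @ w, -30.0, 30.0)                  # avoid overflow in exp
    p = 1.0 / (1.0 + np.exp(-z))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

theta_init = rng.standard_normal(10)
theta_final = theta_init.copy()
for _ in range(200):                                 # crude gradient-descent "training"
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ theta_final, -30.0, 30.0)))
    theta_final -= 0.5 * X.T @ (p - y) / len(y)

for a in np.linspace(-0.5, 1.5, 9):                  # also look slightly past both ends
    theta = (1 - a) * theta_init + a * theta_final
    print(f"alpha={a:+.2f}  loss={loss(theta):.4f}")
```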

Chart: number of search results per year