Search results for: sufficient descent condition
Number of results: 490700
Abstract We consider a homogeneous differential operator $\mathcal{A}$ and show an improved version of Murat's condition for $\mathcal{A}$-quasiaffine functions, provided that the operator satisfies the constant rank condition. As a consequence, we obtain that affinity along the characteristic cone implies $\mathcal{A}$-quasiaffinity if $\mathcal{A}$ admits a first o...
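For reference (standard notation, not taken from the abstract above): writing $\mathcal{A}u = \sum_{|\alpha|=k} A_\alpha \partial^\alpha u$ with symbol $\mathbb{A}(\xi) = \sum_{|\alpha|=k} A_\alpha \xi^\alpha$, the constant rank condition requires $\operatorname{rank}\mathbb{A}(\xi)$ to be the same for all $\xi \in \mathbb{R}^n \setminus \{0\}$, and the characteristic cone is $\Lambda_{\mathcal{A}} = \bigcup_{\xi \neq 0} \ker \mathbb{A}(\xi)$.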
Surrogate functions are often employed to reduce the number of objective function evaluations in continuous optimization. However, their effects have seldom been investigated theoretically. This paper analyzes the effect of surrogate functions in the information-geometric optimization (IGO) framework, which includes as an algorithm instance a variant of the covariance matrix adaptation evolution strategy, a widely used solver...
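As a rough illustration of how a surrogate can cut true objective evaluations in an evolution strategy (a toy sketch, not the IGO algorithm analyzed in the paper; the quadratic-in-norm surrogate and the pre-screening rule are assumptions):

import numpy as np

def surrogate_assisted_es(f, x0, sigma=0.3, pop=20, keep=5, iters=50, seed=0):
    """Toy ES: a cheap surrogate pre-ranks candidates and only the most
    promising ones are evaluated with the true objective f."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    archive_X, archive_y = [x.copy()], [f(x)]
    for _ in range(iters):
        cands = x + sigma * rng.standard_normal((pop, x.size))
        # Fit a cheap regressor on archived true evaluations (assumption:
        # linear features plus squared norm; any surrogate model would do).
        X = np.array(archive_X)
        Phi = np.hstack([X, np.sum(X**2, axis=1, keepdims=True),
                         np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(Phi, np.array(archive_y), rcond=None)
        feats = np.hstack([cands, np.sum(cands**2, axis=1, keepdims=True),
                           np.ones((pop, 1))])
        pred = feats @ w
        # True objective only on the 'keep' best-looking candidates.
        best_idx = np.argsort(pred)[:keep]
        true_vals = np.array([f(c) for c in cands[best_idx]])
        archive_X.extend(cands[best_idx]); archive_y.extend(true_vals)
        x = cands[best_idx][np.argmin(true_vals)]
    return x

# Example: minimize a shifted sphere with few true evaluations per iteration.
print(surrogate_assisted_es(lambda z: np.sum((z - 1.0)**2), np.zeros(3)))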
In this paper, a new three-term conjugate gradient algorithm is proposed to solve unconstrained optimization problems, including regression problems. We minimize the distance, in the Frobenius norm, between the search direction matrix and the self-scaling memoryless BFGS matrix to determine the search direction, which has the same advantages as the quasi-Newton method. At the same time, a random parameter is used so that the direction satisfies the sufficient descent condition....
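For context (standard notation, not the specific formulas of this paper): with $g_k = \nabla f(x_k)$ and $y_{k-1} = g_k - g_{k-1}$, a generic three-term direction and the sufficient descent condition read $d_k = -g_k + \beta_k d_{k-1} + \theta_k y_{k-1}$ and $d_k^\top g_k \le -c\,\|g_k\|^2$ for some constant $c > 0$ and all $k$.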
There have been some conjugate gradient methods with strong convergence but numerical instability, and conversely. Improving these methods is an interesting way to produce new methods with both strong convergence and numerical stability. In this paper, a hybrid method is introduced based on the Fletcher conjugate descent formula (CD) and the Liu-Storey formula (LS), with good numerical results. The new directions satisfy the sufficient descent property, independent of the line...
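The two classical parameters being hybridized are, in standard notation (the exact hybridization rule of the paper is not shown in the snippet): $\beta_k^{CD} = \dfrac{\|g_k\|^2}{-d_{k-1}^\top g_{k-1}}$ and $\beta_k^{LS} = \dfrac{g_k^\top (g_k - g_{k-1})}{-d_{k-1}^\top g_{k-1}}$, with the direction $d_k = -g_k + \beta_k d_{k-1}$ for some combination $\beta_k$ of $\beta_k^{CD}$ and $\beta_k^{LS}$.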
In this paper we obtain a necessary and sufficient condition for the set of ω_0-nearest points (ω_0-farthest points) to be non-empty or a singleton set in normed linear spaces. We also find a necessary and sufficient condition for a uniquely remotal set to be a singleton set.
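For reference, the underlying unweighted notions are standard (the ω_0 versions in the abstract are presumably weighted analogues): for a bounded set $W$ in a normed space $X$ and $x \in X$, the farthest-point set is $F(x, W) = \{w \in W : \|x - w\| = \sup_{y \in W} \|x - y\|\}$; $W$ is remotal if $F(x, W) \neq \emptyset$ for every $x$, and uniquely remotal if $F(x, W)$ is always a singleton. The nearest-point set is defined analogously with $\inf$ in place of $\sup$.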
Abstract Stochastic Gradient Descent (SGD) is widely used in machine learning problems to efficiently perform empirical risk minimization, yet, in practice, SGD is known to stall before reaching the actual minimizer of the empirical risk. SGD stalling has often been attributed to its sensitivity to the conditioning of the problem; however, as we demonstrate, SGD will stall even when applied to ...
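A minimal sketch of the stalling phenomenon (the least-squares problem and the constant step size are assumptions, chosen only to make the effect visible):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5))
b = A @ np.ones(5) + 0.1 * rng.standard_normal(200)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]       # empirical risk minimizer

x = np.zeros(5)
eta = 0.01                                          # constant step size
for t in range(20000):
    i = rng.integers(len(b))                        # sample one data point
    grad_i = (A[i] @ x - b[i]) * A[i]               # stochastic gradient
    x -= eta * grad_i

# With a constant step size the iterates hover in a noise ball around
# x_star instead of converging to it ("stalling").
print(np.linalg.norm(x - x_star))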
We introduce a generic scheme for accelerating first-order optimization methods in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm. Our approach consists of minimizing a convex objective by approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. This strategy applies to a large class of algorithms, in...
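The generic outer loop of such an accelerated proximal point scheme can be summarized as follows (a schematic, with smoothing parameter $\kappa$ and extrapolation weights $\beta_k$ left unspecified): compute $x_k \approx \arg\min_x \{ f(x) + \tfrac{\kappa}{2}\|x - y_{k-1}\|^2 \}$ inexactly with the inner first-order method, then extrapolate $y_k = x_k + \beta_k (x_k - x_{k-1})$.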
The conjugate gradient method is a useful tool for solving large- and small-scale unconstrained optimization problems. In addition, the method can be applied in many fields, such as engineering, medical research, and computer science. In this paper, a convex combination of two different search directions is proposed. The new direction satisfies the sufficient descent condition, and a convergence analysis is given. Moreover, the formula has properties with pr...
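Schematically (the parameter names are assumptions), a convex-combination direction takes the form $d_k = (1 - \theta_k)\, d_k^{(1)} + \theta_k\, d_k^{(2)}$ with $\theta_k \in [0, 1]$, where $\theta_k$ is chosen so that the sufficient descent condition $d_k^\top g_k \le -c\,\|g_k\|^2$, $c > 0$, holds at every iteration.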
Stability is a general notion that quantifies the sensitivity of a learning algorithm's output to a small change in the training dataset (e.g. deletion or replacement of a single training sample). Such conditions have recently been shown to be more powerful for characterizing learnability in the general learning setting under i.i.d. samples, where uniform convergence is not necessary for learnability...
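One common formalization is uniform stability: an algorithm $A$ mapping a sample $S$ of $n$ points to a hypothesis is $\beta$-uniformly stable if $\sup_{z} | \ell(A(S), z) - \ell(A(S^{(i)}), z) | \le \beta$ for every sample $S$ and every sample $S^{(i)}$ obtained by replacing the $i$-th point, where $\ell$ is the loss function.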