Search results for: descent method

Number of results: 1,645,212

2012
Shuang Wu, Jun Sakuma

The traditional paradigm in machine learning has been that, given a data set, the goal is to learn a target function or decision model (such as a classifier) from it. Many techniques in data mining and machine learning follow a gradient descent paradigm in the iterative process of discovering this target function or decision model. For instance, linear regression can be solved through a gradie...
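To illustrate the gradient descent paradigm mentioned above, here is a minimal Python/NumPy sketch of linear regression fitted by gradient descent (illustrative code only, not taken from the paper; the learning rate and epoch count are arbitrary choices):

    import numpy as np

    def linear_regression_gd(X, y, lr=0.01, epochs=1000):
        """Fit y ~ X @ w + b by gradient descent on the mean squared error."""
        n, d = X.shape
        w, b = np.zeros(d), 0.0
        for _ in range(epochs):
            residual = X @ w + b - y                 # prediction error
            w -= lr * (2.0 / n) * (X.T @ residual)   # gradient of MSE w.r.t. w
            b -= lr * (2.0 / n) * residual.sum()     # gradient of MSE w.r.t. b
        return w, b

    # Toy usage: recover a known line from noisy samples.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 1))
    y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=200)
    w, b = linear_regression_gd(X, y)
    print(w, b)   # close to [3.0] and 1.0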

2008
R. Fletcher, M. J. D. Powell

We are concerned in this paper with the general problem of finding an unrestricted local minimum of a function f(x₁, x₂, …, xₙ) of several variables x₁, x₂, …, xₙ. We suppose that the function of interest can be calculated at all points. It is convenient to group functions into two main classes according to whether the gradient vector gᵢ = ∂f/∂xᵢ is defined analytically at each point or must...
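A minimal sketch of the first class, where the gradient gᵢ = ∂f/∂xᵢ is available analytically, using plain steepest descent in Python (illustrative only; Fletcher and Powell's own method goes further by building up curvature information, and the fixed step size here is an assumption):

    import numpy as np

    def steepest_descent(grad, x0, step=0.1, tol=1e-8, max_iter=10_000):
        """Repeatedly move along the negative analytic gradient -g(x)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:   # stationary point reached
                break
            x = x - step * g
        return x

    # Example: f(x1, x2) = (x1 - 1)^2 + 4*(x2 + 2)^2, minimum at (1, -2).
    grad = lambda x: np.array([2 * (x[0] - 1), 8 * (x[1] + 2)])
    print(steepest_descent(grad, [0.0, 0.0]))   # close to [ 1. -2.]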

1998
Ofer Melnik

It has been demonstrated that higher order recurrent neural networks exhibit an underlying fractal attractor as an artifact of their dynamics. These fractal attractors offer a very efficient mechanism to encode visual memories in a neural substrate, since even a simple twelve-weight network can encode a very large set of different images. The main problem in this memory model, which so far has r...
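Such fractal encodings behave like iterated function systems; as a generic illustration (a standard Sierpinski-triangle "chaos game" in Python, not Melnik's network), iterating a few contractive affine maps drives any point onto a fractal attractor:

    import numpy as np

    # Three contractions T_c(p) = (p + c) / 2 toward the triangle corners c;
    # their joint attractor is the Sierpinski triangle.
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    maps = [lambda p, c=c: 0.5 * (p + c) for c in corners]

    rng = np.random.default_rng(0)
    p = np.zeros(2)
    points = []
    for _ in range(10_000):
        p = maps[rng.integers(3)](p)   # apply one randomly chosen contraction
        points.append(p)
    points = np.array(points)          # samples lying on the fractal attractor
    print(points.shape, points.min(axis=0), points.max(axis=0))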

2017
Shixiang Chen, Shiqian Ma, Wei Liu

In this paper, we extend the geometric descent method recently proposed by Bubeck, Lee and Singh [5] to solving nonsmooth and strongly convex composite problems. We prove that the resulting algorithm, GeoPG, converges with a linear rate (1 − 1/√κ) and thus achieves the optimal rate among first-order methods, where κ is the condition number of the problem. Numerical results on linear regression and ...
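To put the (1 − 1/√κ) rate in perspective: plain gradient descent contracts the error like (1 − 1/κ) per step on a κ-conditioned problem, so reaching accuracy ε costs on the order of κ log(1/ε) iterations versus √κ log(1/ε) at the optimal rate. A quick arithmetic check in Python (illustrative only, not the GeoPG algorithm itself; κ and ε are made-up values):

    import math

    kappa, eps = 1.0e4, 1.0e-6
    # iterations k needed so that rate**k <= eps, i.e. k >= log(eps)/log(rate)
    iters = lambda rate: math.ceil(math.log(eps) / math.log(rate))
    print(iters(1 - 1 / kappa))              # ~138,000 steps at the (1 - 1/kappa) rate
    print(iters(1 - 1 / math.sqrt(kappa)))   # ~1,400 steps at the (1 - 1/sqrt(kappa)) rate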

Journal: Journal of Studies in Science and Engineering 2021

The steepest descent method and the conjugate gradient method for minimizing nonlinear functions have been studied in this work. Algorithms for both methods are presented and implemented in Matlab. A comparison has been made between the two methods with respect to the obtained results and time-efficiency aspects. It is shown that one method needs fewer iterations than the other, while the other converges in less...
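A self-contained sketch of the kind of comparison described: counting iterations of steepest descent (with exact line search) versus linear conjugate gradient on the same strictly convex quadratic f(x) = ½xᵀAx − bᵀx (in Python rather than the paper's Matlab; the test matrix and tolerance are assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.normal(size=(50, 50))
    A = M @ M.T + np.eye(50)        # symmetric positive definite test matrix
    b = rng.normal(size=50)

    def sd_iters(tol=1e-8):
        """Steepest descent with exact line search on f(x) = x'Ax/2 - b'x."""
        x = np.zeros(50)
        for k in range(1, 200_000):
            r = b - A @ x                    # residual = negative gradient
            if np.linalg.norm(r) < tol:
                return k
            x += (r @ r) / (r @ A @ r) * r   # optimal step along r
        return k

    def cg_iters(tol=1e-8):
        """Standard linear conjugate gradient on the same system."""
        x = np.zeros(50)
        r = b - A @ x
        p = r.copy()
        for k in range(1, 200_000):
            if np.linalg.norm(r) < tol:
                return k
            Ap = A @ p
            alpha = (r @ r) / (p @ Ap)
            x += alpha * p
            r_new = r - alpha * Ap
            p = r_new + (r_new @ r_new) / (r @ r) * p   # next conjugate direction
            r = r_new
        return k

    print("steepest descent:", sd_iters(), "iterations")
    print("conjugate gradient:", cg_iters(), "iterations")   # far fewer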

Journal: CoRR 2016
Stephen Tu, Rebecca Roelofs, Shivaram Venkataraman, Benjamin Recht

We demonstrate that distributed block coordinate descent can quickly solve kernel regression and classification problems with millions of data points. Armed with this capability, we conduct a thorough comparison between the full kernel, the Nyström method, and random features on three large classification tasks from various domains. Our results suggest that the Nyström method generally achieves...
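A toy, single-machine sketch of block coordinate descent applied to kernel ridge regression, the primitive the paper distributes (hypothetical Python code; the RBF kernel, block size, and regularization here are assumptions, not the paper's settings):

    import numpy as np

    def rbf_kernel(X, Y, gamma=0.5):
        """Gaussian (RBF) kernel matrix between the rows of X and Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def kernel_ridge_bcd(K, y, lam=1e-2, block=100, sweeps=30):
        """Solve (K + lam*I) alpha = y one block of coordinates at a time
        (block Gauss-Seidel, which converges for this SPD system)."""
        n = K.shape[0]
        G = K + lam * np.eye(n)
        alpha = np.zeros(n)
        for _ in range(sweeps):
            for start in range(0, n, block):
                B = slice(start, min(start + block, n))
                # right-hand side for block B with all other coordinates fixed
                r = y[B] - G[B] @ alpha + G[B, B] @ alpha[B]
                alpha[B] = np.linalg.solve(G[B, B], r)
        return alpha

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
    K = rbf_kernel(X, X)
    alpha = kernel_ridge_bcd(K, y)
    print(np.linalg.norm((K + 1e-2 * np.eye(500)) @ alpha - y))   # near zero: system solved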

Journal: Mathematical Problems in Engineering 2021

With respect to the importance of conjugate gradient methods for large-scale optimization, in this study a descent three-term conjugate gradient method is proposed based on an extended modified secant condition. In the proposed method, objective function values are used in addition to the gradient information. Also, it is established that the method is globally convergent without convexity assu...
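For a sense of what a descent three-term conjugate gradient direction looks like, here is a hedged Python sketch of the well-known Zhang–Zhou–Li three-term PRP update, whose extra term enforces dᵀg = −‖g‖² by construction (this is a different three-term variant, not the paper's secant-condition-based method):

    import numpy as np

    def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
        """Three-term PRP conjugate gradient (Zhang-Zhou-Li variant): the
        extra y-term enforces the descent identity d @ g = -||g||^2."""
        x = np.asarray(x0, dtype=float)
        g = grad(x)
        d = -g
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            t, fx = 1.0, f(x)
            while f(x + t * d) > fx + 1e-4 * t * (g @ d):   # Armijo backtracking
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            y = g_new - g
            beta = (g_new @ y) / (g @ g)        # PRP coefficient
            theta = (g_new @ d) / (g @ g)
            d = -g_new + beta * d - theta * y   # three-term direction
            x, g = x_new, g_new
        return x

    # Rosenbrock test problem with analytic gradient
    f = lambda x: (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2
    grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0] ** 2),
                               200 * (x[1] - x[0] ** 2)])
    print(three_term_cg(f, grad, [-1.2, 1.0]))   # approaches the minimizer [1., 1.]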

[Chart: number of search results per year]