Search results for: gradient descent algorithm

Number of results: 869,527

Journal: CoRR 2010
Ankan Saha Ambuj Tewari

Cyclic coordinate descent is a classic optimization method that has witnessed a resurgence of interest in machine learning. Reasons for this include its simplicity, speed, and stability, as well as its competitive performance on l1-regularized smooth optimization problems. Surprisingly, very little is known about its finite-time convergence behavior on these problems. Most existing results eithe...
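
As a concrete illustration (not taken from the paper), here is a minimal sketch of cyclic coordinate descent for an l1-regularized least-squares (lasso) objective, where each coordinate update has a closed form via soft-thresholding; all function and parameter names are my own:

    import numpy as np

    def soft_threshold(z, t):
        # Soft-thresholding operator, the prox of the l1 norm.
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def cyclic_coordinate_descent(X, y, lam, n_epochs=100):
        # Cyclic coordinate descent for the lasso:
        #   min_w  (1 / (2n)) * ||y - X w||^2 + lam * ||w||_1
        # Coordinates are swept in a fixed order; each update is exact.
        n, d = X.shape
        w = np.zeros(d)
        col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature (assumed nonzero)
        residual = y - X @ w
        for _ in range(n_epochs):
            for j in range(d):
                residual += X[:, j] * w[j]    # drop coordinate j's contribution
                rho = X[:, j] @ residual / n  # partial least-squares fit for w_j
                w[j] = soft_threshold(rho, lam) / col_sq[j]
                residual -= X[:, j] * w[j]    # restore with the new value
        return w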

2016

1 Notation. We introduce some notation used in this supplementary material. For the regression task, we define $y_{\max} = \max_y |y|$. We further denote the set $S$ as $S = B(0, y_{\max}\lambda^{-1/2})$ if L2 regularization is used and $\lambda \le 1$, and $S = \mathbb{R}^D$ otherwise, where $B(0, y_{\max}\lambda^{-1/2}) = \{ w \in \mathbb{R}^D : \|w\| \le y_{\max}\lambda^{-1/2} \}$ and $\mathbb{R}^D$ denotes the whole feature space. We introduce five types of loss functions that can be used in our proposed algorithm, nam...
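
For concreteness, a tiny sketch (my own names, under the reconstructed definitions above) of the radius of $S$ and the Euclidean projection onto it, which is how such a set is typically used inside a projected-gradient solver:

    import numpy as np

    def radius_of_S(y, lam, l2_used):
        # Radius of S: the ball B(0, y_max * lam**-0.5) when L2 regularization
        # is used and lam <= 1; otherwise S is all of R^D (infinite radius).
        y_max = np.max(np.abs(y))
        if l2_used and lam <= 1:
            return y_max / np.sqrt(lam)
        return np.inf

    def project_onto_S(w, radius):
        # Euclidean projection of w onto the ball B(0, radius).
        norm = np.linalg.norm(w)
        if norm <= radius:
            return w
        return w * (radius / norm)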

2006
GUANGMING ZHOU YUNQING HUANG CHUNSHENG FENG

In this paper, a hybrid conjugate gradient algorithm with a weighted preconditioner is proposed. The algorithm can efficiently solve the minimization problem for a general function arising from the finite element discretization of the p-Laplacian. The algorithm is efficient, and its convergence rate is mesh-independent. Numerical experiments show that the hybrid conjugate gradient direction of the algorit...
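
The paper's specific method is not spelled out in this snippet, but the general shape of a preconditioned nonlinear conjugate gradient iteration can be sketched as follows; the weighted preconditioner would enter through precond_solve, and the PR+ choice, the line search, and all names are my assumptions:

    import numpy as np

    def preconditioned_ncg(f, grad, precond_solve, x0, max_iter=200, tol=1e-8):
        # Preconditioned nonlinear conjugate gradient (PR+ variant) with a
        # backtracking Armijo line search. precond_solve(g) applies M^{-1} g.
        x = np.asarray(x0, dtype=float).copy()
        g = grad(x)
        z = precond_solve(g)
        d = -z
        for _ in range(max_iter):
            if np.linalg.norm(g) < tol:
                break
            t, fx, gd = 1.0, f(x), g @ d
            for _ in range(50):  # Armijo backtracking, bounded for safety
                if f(x + t * d) <= fx + 1e-4 * t * gd:
                    break
                t *= 0.5
            x_new = x + t * d
            g_new = grad(x_new)
            z_new = precond_solve(g_new)
            # preconditioned Polak-Ribiere coefficient, clipped at zero (PR+)
            beta = max(0.0, g_new @ (z_new - z) / (g @ z))
            d = -z_new + beta * d
            x, g, z = x_new, g_new, z_new
        return x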

2017
Xiao Zhang Lingxiao Wang Quanquan Gu

We study the problem of estimating low-rank matrices from linear measurements (a.k.a. matrix sensing) through nonconvex optimization. We propose an efficient stochastic variance reduced gradient descent algorithm to solve a nonconvex optimization problem of matrix sensing. Our algorithm is applicable to both noisy and noiseless settings. In the case with noisy observations, we prove that our a...
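
As a generic illustration of the variance-reduction template (not the paper's matrix-sensing specialization), here is a minimal SVRG loop; grad_i and all parameters are placeholders of mine:

    import numpy as np

    def svrg(grad_i, n, w0, step=0.01, n_outer=30, n_inner=None, seed=0):
        # Stochastic variance reduced gradient for min_w (1/n) sum_i f_i(w).
        # grad_i(w, i) returns the gradient of the i-th component f_i at w.
        rng = np.random.default_rng(seed)
        n_inner = n_inner or 2 * n
        w = np.asarray(w0, dtype=float).copy()
        for _ in range(n_outer):
            w_ref = w.copy()
            # full gradient at the reference point anchors the variance reduction
            mu = sum(grad_i(w_ref, i) for i in range(n)) / n
            for _ in range(n_inner):
                i = rng.integers(n)
                # unbiased, variance-reduced gradient estimate
                v = grad_i(w, i) - grad_i(w_ref, i) + mu
                w = w - step * v
        return w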

2005
Chengzhang Wang Baocai Yin Qin Shi Yanfeng Sun

A novel model matching method based on an improved genetic algorithm is presented in this paper to improve the efficiency of the matching process for 3D face synthesis. The new method is independent of initial values and more robust than the stochastic gradient descent method. The improved genetic algorithm has strong global search ability. Crossover and mutation probabilities are regulated during the optimization pro...
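
A minimal real-coded genetic algorithm sketch showing crossover and mutation probabilities being regulated (here, simply decayed linearly) across generations; the schedules and operators are my own illustrative choices, not the paper's:

    import numpy as np

    def genetic_minimize(fitness, dim, pop_size=50, n_gen=100, seed=0):
        # Real-coded GA: tournament selection, arithmetic crossover,
        # Gaussian mutation. Lower fitness is better.
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-1.0, 1.0, size=(pop_size, dim))
        for gen in range(n_gen):
            p_cross = 0.9 - 0.4 * gen / n_gen   # assumed schedules,
            p_mut = 0.10 - 0.08 * gen / n_gen   # for illustration only
            scores = np.array([fitness(ind) for ind in pop])
            # tournament selection
            winners = []
            for _ in range(pop_size):
                i, j = rng.integers(pop_size, size=2)
                winners.append(pop[i] if scores[i] < scores[j] else pop[j])
            pop = np.array(winners)
            # arithmetic crossover on consecutive pairs
            for k in range(0, pop_size - 1, 2):
                if rng.random() < p_cross:
                    a = rng.random()
                    child1 = a * pop[k] + (1 - a) * pop[k + 1]
                    child2 = (1 - a) * pop[k] + a * pop[k + 1]
                    pop[k], pop[k + 1] = child1, child2
            # Gaussian mutation on a random subset of genes
            mask = rng.random(pop.shape) < p_mut
            pop = pop + mask * rng.normal(0.0, 0.1, size=pop.shape)
        scores = np.array([fitness(ind) for ind in pop])
        return pop[np.argmin(scores)]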

Journal: Journal of Machine Learning Research 2017
H. Brendan McMahan

We present tools for the analysis of Follow-The-Regularized-Leader (FTRL), Dual Averaging, and Mirror Descent algorithms when the regularizer (equivalently, prox-function or learning-rate schedule) is chosen adaptively based on the data. Adaptivity can be used to prove regret bounds that hold on every round, and also allows for data-dependent regret bounds as in AdaGrad-style algorithms (e.g., O...
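
To make the adaptive-regularizer idea concrete, here is a sketch (my own naming, not the paper's code) of AdaGrad-style dual averaging with an l1 term: the per-coordinate proximal strength grows with the observed squared gradients, and the update has a closed form:

    import numpy as np

    def l1_dual_averaging(gradients, dim, lam=1e-3, eta=1.0):
        # Adaptive dual averaging: the quadratic regularization strength
        # sigma is chosen from the data (sqrt of summed squared gradients).
        g_sum = np.zeros(dim)   # running sum of gradients (dual average)
        g_sq = np.zeros(dim)    # running sum of squared gradients
        w = np.zeros(dim)
        t = 0
        for g in gradients:
            t += 1
            g_sum += g
            g_sq += g * g
            sigma = np.sqrt(g_sq) / eta  # adaptive per-coordinate strength
            # closed-form minimizer of  g_sum.w + lam*t*||w||_1 + 0.5*sigma.w^2
            shrunk = np.maximum(np.abs(g_sum) - lam * t, 0.0)
            w = -np.sign(g_sum) * shrunk / np.maximum(sigma, 1e-12)
        return w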

1996
Doina Precup Rich Sutton

This report describes a series of results using the exponentiated gradient descent (EG) method recently proposed by Kivinen and Warmuth. Prior work is extended by comparing speed of learning on a nonstationary problem and on an extension to backpropagation networks. Most significantly, we present an extension of the EG method to temporal-difference and reinforcement learning. This extension is co...
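
The EG update itself (from Kivinen and Warmuth) is worth seeing next to ordinary gradient descent: weights are updated multiplicatively and renormalized, so they remain a probability vector. A minimal sketch:

    import numpy as np

    def eg_step(w, grad, eta):
        # Exponentiated gradient update: multiplicative in the gradient,
        # then renormalized onto the probability simplex.
        w_new = w * np.exp(-eta * grad)
        return w_new / w_new.sum()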

2016
Chi Jin Sham M. Kakade Praneeth Netrapalli

Matrix completion, where we wish to recover a low-rank matrix by observing a few of its entries, is a widely studied problem in both theory and practice, with many applications. Most of the provable algorithms for this problem have so far been restricted to the offline setting, where they provide an estimate of the unknown matrix using all observations simultaneously. However, in many application...
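
To make the offline/online contrast concrete, here is a generic one-observation SGD update for matrix completion with a factored estimate M ≈ U V^T; this is a common online scheme sketched under my own naming, not necessarily the algorithm of this paper:

    import numpy as np

    def observe_entry(U, V, i, j, m_ij, step=0.05):
        # One online update: on seeing entry (i, j) with value m_ij,
        # only row i of U and row j of V are touched, so the low-rank
        # estimate is revised one observation at a time.
        err = U[i] @ V[j] - m_ij
        grad_u = err * V[j]   # compute both gradients before updating
        grad_v = err * U[i]
        U[i] -= step * grad_u
        V[j] -= step * grad_v
        return U, V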

2004
Jiang Minghu Zhu

This paper presents a hybrid algorithm for global optimization of the dynamic learning rate for multilayer feedforward neural networks (MLFNN). The effect of inexact line search on conjugacy was studied, and a generalized conjugate gradient method based on this effect was proposed and shown to have global convergence for error backpropagation in MLFNN. The descent property and global convergence were g...
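
One way inexact line search interacts with conjugacy can be sketched with a Powell-style restart test: when successive gradients lose orthogonality, the conjugate direction is reset to steepest descent. This is an illustrative fragment under my own assumptions, not the paper's hybrid algorithm:

    import numpy as np

    def cg_direction(g_new, g_old, d_old, restart_thresh=0.2):
        # Powell restart test: an inexact line search erodes conjugacy, so
        # if |g_new . g_old| is large relative to ||g_new||^2, fall back
        # to steepest descent.
        if abs(g_new @ g_old) >= restart_thresh * (g_new @ g_new):
            return -g_new
        beta = g_new @ (g_new - g_old) / (g_old @ g_old)  # Polak-Ribiere
        return -g_new + max(beta, 0.0) * d_old  # PR+ clips beta at zero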

2005
SAMIR SIKSEK

We present an algorithm for computing an upper bound for the difference of the logarithmic height and the canonical height on elliptic curves. Moreover, a new method for performing the infinite descent on elliptic curves is given, using ideas from the geometry of numbers. These algorithms are practical and are demonstrated by a few examples.
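
For context, the two heights being compared are the naive logarithmic height h and the canonical height ĥ; under one common normalization (a standard definition, not the paper's specific bound):

    \[
      \hat{h}(P) \;=\; \lim_{n \to \infty} \frac{h\bigl([2^n]P\bigr)}{4^{n}},
      \qquad
      h(P) - \hat{h}(P) \;\le\; B \quad \text{for all } P \in E(\mathbb{Q}).
    \]
    % An explicit constant B of this kind makes the descent search for
    % points of bounded canonical height a finite search in naive height.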

[Chart: number of search results per year]