Search results for: gradient algorithm

Number of results: 859,818

2010
Gilles Louppe, Pierre Geurts

Parallel and distributed algorithms have become a necessity in modern machine learning tasks. In this work, we focus on parallel asynchronous gradient descent [1, 2, 3] and propose a zealous variant that minimizes the idle time of processors to achieve a substantial speedup. We then experimentally study this algorithm in the context of training a restricted Boltzmann machine on a large collabor...
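
As a rough illustration of the asynchronous setting described above (not the paper's zealous variant or its restricted Boltzmann machine experiment), the following sketch lets several Python threads apply mini-batch gradient steps to a shared parameter vector without synchronization barriers; the least-squares objective and all constants are illustrative assumptions.

```python
import threading
import numpy as np

# Illustrative shared-memory setup: several workers apply least-squares
# gradient steps to a shared parameter vector without synchronization
# barriers (Hogwild-style asynchrony).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = X @ rng.normal(size=20) + 0.01 * rng.normal(size=1000)

w = np.zeros(20)                      # shared parameters, updated asynchronously
step, batch, n_updates = 0.01, 32, 2000

def worker(seed):
    """Repeatedly sample a mini-batch and apply a gradient step to the
    shared vector w without waiting for the other workers."""
    global w
    local_rng = np.random.default_rng(seed)
    for _ in range(n_updates):
        idx = local_rng.integers(0, X.shape[0], size=batch)
        grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch
        w -= step * grad              # unsynchronized update of shared state

threads = [threading.Thread(target=worker, args=(s,)) for s in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("RMS residual:", np.linalg.norm(X @ w - y) / np.sqrt(len(y)))
```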

1999
X. Zhao, P. B. Luh, J. Wang, D. D. Yao, Yu-Chi Ho

The subgradient method is used frequently to optimize dual functions in Lagrangian relaxation for separable integer programming problems. In the method, all subproblems must be solved optimally to obtain a subgradient direction. In this paper, the surrogate subgradient method is developed, where a proper direction can be obtained without optimally solving all the subproblems. In fact, only an a...
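
For contrast with the surrogate variant described above, here is a minimal sketch of the classical subgradient update for the Lagrangian dual of a toy separable integer program, in which every subproblem is solved optimally before taking a step; the problem data, step-size rule, and variable names are illustrative assumptions, and the paper's surrogate direction (obtained from partially solved subproblems) is not reproduced.

```python
import numpy as np

# Toy separable integer program (a covering-style problem):
#   minimize  sum_i c[i] * x[i]
#   subject to sum_i a[i] * x[i] >= d,   x[i] in {0, 1}
# The coupling constraint is relaxed with a multiplier lam >= 0.
c = np.array([4.0, 3.0, 6.0, 5.0, 7.0])
a = np.array([2.0, 1.0, 3.0, 2.0, 4.0])
d = 6.0

lam = 0.0
for k in range(1, 201):
    # Solve each (trivial) subproblem optimally: min (c_i - lam * a_i) * x_i
    x = (c - lam * a < 0).astype(float)
    # Subgradient of the (concave) dual function at lam
    g = d - a @ x
    # Diminishing step size; project back onto lam >= 0
    lam = max(0.0, lam + (1.0 / k) * g)

# Dual value at the final multiplier
x = (c - lam * a < 0).astype(float)
dual_value = (c - lam * a) @ x + lam * d
print("multiplier:", round(lam, 3), "dual value:", round(dual_value, 3))
```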

2012
Jean-Antoine Désidéri

The steepest-descent method is a well-known and effective single-objective descent algorithm when the gradient of the objective function is known. Here, we propose a particular generalization of this method to multi-objective optimization by considering the concurrent minimization of n smooth criteria {J_i} (i = 1, ..., n). The novel algorithm is based on the following observation: consider a...
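
A minimal sketch of the idea for the special case n = 2: the common descent direction is the minimum-norm element of the convex hull of the two gradients, which has a closed-form coefficient. The quadratic criteria below are illustrative assumptions, and the paper's general n-criteria construction is not reproduced.

```python
import numpy as np

def common_descent_direction(g1, g2):
    """Minimum-norm element of the convex hull of two gradients.
    Its negative is a descent direction for both objectives (when nonzero);
    this is the two-objective special case with a closed-form coefficient."""
    diff = g1 - g2
    denom = diff @ diff
    if denom == 0.0:
        return g1.copy()
    alpha = np.clip((g2 - g1) @ g2 / denom, 0.0, 1.0)
    return alpha * g1 + (1.0 - alpha) * g2

# Illustrative pair of smooth criteria J1, J2 and their gradients.
def grad_J1(x):
    return 2.0 * (x - np.array([1.0, 0.0]))

def grad_J2(x):
    return 2.0 * (x - np.array([0.0, 1.0]))

x = np.array([2.0, 2.0])
for _ in range(100):
    d = common_descent_direction(grad_J1(x), grad_J2(x))
    if np.linalg.norm(d) < 1e-8:      # Pareto-stationary point reached
        break
    x = x - 0.1 * d

print("point:", x)
```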

2006
Feng Jiao, Jinbo Xu, Libo Yu, Dale Schuurmans

Protein structure prediction is one of the most important and difficult problems in computational molecular biology. Protein threading represents one of the most promising techniques for this problem. One of the critical steps in protein threading, called fold recognition, is to choose the best-fit template for the query protein with the structure to be predicted. The standard method for templa...

Journal: Journal of Machine Learning Research, 2016
Aryan Mokhtari, Alejandro Ribeiro

This paper considers convex optimization problems where nodes of a network have access to summands of a global objective. Each of these local objectives is further assumed to be an average of a finite set of functions. The motivation for this setup is to solve large scale machine learning problems where elements of the training set are distributed to multiple computational elements. The decentr...
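
The paper's decentralized method is not reproduced here; as a simpler sketch of the setup (nodes holding summands of a global objective and communicating over a network), the following shows plain decentralized gradient descent with a doubly stochastic mixing matrix on a four-node ring. All problem data and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim = 4, 3

# Each node i holds a local objective f_i(x) = 0.5 * ||A_i x - b_i||^2;
# the global objective is the sum over nodes.
A = [rng.normal(size=(10, dim)) for _ in range(n_nodes)]
b = [rng.normal(size=10) for _ in range(n_nodes)]

# Doubly stochastic mixing matrix for a ring of four nodes.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

x = np.zeros((n_nodes, dim))   # one local iterate per node
step = 0.01
for _ in range(500):
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    # Mix neighbours' iterates, then take a local gradient step.
    x = W @ x - step * grads

print("disagreement between nodes:", np.linalg.norm(x - x.mean(axis=0)))
```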

Journal: CoRR, 2017
Saman Cyrus, Bin Hu, Bryan Van Scoy, Laurent Lessard

This work proposes an accelerated first-order algorithm we call the Robust Momentum Method for optimizing smooth strongly convex functions. The algorithm has a single scalar parameter that can be tuned to trade off robustness to gradient noise versus worst-case convergence rate. At one extreme, the algorithm is faster than Nesterov’s Fast Gradient Method by a constant factor but more fragile to...
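
As a hedged sketch of the kind of iteration involved, the following implements a generic three-parameter momentum method of the form y_k = x_k + gamma (x_k - x_{k-1}), x_{k+1} = x_k + beta (x_k - x_{k-1}) - alpha grad f(y_k). The specific single-parameter tuning of the Robust Momentum Method is not reproduced; the Nesterov-style constants used below are only a familiar member of this family, chosen for illustration, and the objective is an assumed toy quadratic.

```python
import numpy as np

def momentum_method(grad_f, x0, alpha, beta, gamma, n_iter=200):
    """Generic accelerated iteration
        y_k     = x_k + gamma * (x_k - x_{k-1})
        x_{k+1} = x_k + beta * (x_k - x_{k-1}) - alpha * grad_f(y_k)
    (The paper's method picks its parameters from a single scalar that trades
    off noise robustness against the worst-case rate; not reproduced here.)"""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iter):
        y = x + gamma * (x - x_prev)
        x_next = x + beta * (x - x_prev) - alpha * grad_f(y)
        x_prev, x = x, x_next
    return x

# Illustrative smooth, strongly convex quadratic: f(x) = 0.5 * x^T H x
H = np.diag([1.0, 10.0, 100.0])          # condition number kappa = 100
grad_f = lambda x: H @ x
L, m = 100.0, 1.0
momentum = (np.sqrt(L) - np.sqrt(m)) / (np.sqrt(L) + np.sqrt(m))

x_final = momentum_method(grad_f, np.ones(3),
                          alpha=1.0 / L, beta=momentum, gamma=momentum)
print("distance to optimum:", np.linalg.norm(x_final))
```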

Recently, we have demonstrated a new and efficient method to simultaneously reconstruct two unknown interfering wavefronts. A three-dimensional interference pattern was analyzed and then Zernike polynomials and the stochastic parallel gradient descent algorithm were used to expand and calculate wavefronts. In this paper, as one of the applications of this method, the reflected wavefronts from t...
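
A minimal sketch of the stochastic parallel gradient descent (SPGD) update itself: all coefficients are perturbed in parallel by random +/- amounts, the resulting change in a quality metric is measured, and the coefficients are nudged in proportion to that change. The synthetic metric below stands in for the actual interferometric measurement, and all names and constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in quality metric over Zernike-like coefficients c: in the actual
# application the metric would be measured from the interference pattern;
# here it is a synthetic function peaked at an unknown coefficient vector.
c_true = rng.normal(size=8)
def metric(c):
    return -np.sum((c - c_true) ** 2)

c = np.zeros(8)                 # current estimate of the expansion coefficients
delta = 0.05                    # perturbation amplitude
gain = 0.5

for _ in range(2000):
    # Random +/- perturbation applied to all coefficients in parallel
    p = delta * rng.choice([-1.0, 1.0], size=c.shape)
    dJ = metric(c + p) - metric(c - p)     # two-sided metric difference
    c = c + gain * dJ * p                  # SPGD ascent step on the metric

print("coefficient error:", np.linalg.norm(c - c_true))
```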

Journal: J. Optimization Theory and Applications, 2011
Hong-Kun Xu

It is well known that the gradient-projection algorithm (GPA) plays an important role in solving constrained convex minimization problems. In this article, we first provide an alternative averaged mapping approach to the GPA. This approach is operator-oriented in nature. Since, in general, in infinite-dimensional Hilbert spaces, GPA has only weak convergence, we provide two modifications of GPA...
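
A minimal sketch of the gradient-projection iteration x_{k+1} = P_C(x_k - gamma * grad f(x_k)) in finite dimensions, applied to an illustrative box-constrained least-squares problem; the article's averaged-mapping analysis and its modifications for infinite-dimensional Hilbert spaces are not reproduced.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n."""
    return np.clip(x, lo, hi)

def gradient_projection(grad_f, project, x0, gamma, n_iter=500):
    """Gradient-projection iteration x_{k+1} = P_C(x_k - gamma * grad f(x_k))."""
    x = x0.copy()
    for _ in range(n_iter):
        x = project(x - gamma * grad_f(x))
    return x

# Illustrative constrained least squares: minimize 0.5*||A x - b||^2 over [0,1]^n.
rng = np.random.default_rng(3)
A = rng.normal(size=(30, 5))
b = rng.normal(size=30)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # step 1/L, L = ||A||_2^2 (gradient Lipschitz constant)

x = gradient_projection(lambda v: A.T @ (A @ v - b),
                        lambda v: project_box(v, 0.0, 1.0),
                        np.zeros(5), gamma)
print("solution in box:", x)
```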

2008
Ilya O. Ryzhov, Warren Powell, Peter I. Frazier

We derive a one-period look-ahead policy for finite- and infinite-horizon online optimal learning problems with Gaussian rewards. The resulting decision rule easily extends to a variety of settings, including the case where our prior beliefs about the rewards are correlated. Experiments show that the knowledge-gradient (KG) policy performs competitively against other learning policies in diverse situations. In the ca...
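
A hedged sketch of a knowledge-gradient computation for independent Gaussian beliefs with known observation noise, given as a stand-in for the paper's policy; the correlated-belief and infinite-horizon cases mentioned in the abstract are not covered, and the finite-horizon weighting of the learning bonus below is an assumption made for illustration.

```python
import numpy as np
from scipy.stats import norm

def kg_factors(mu, sigma2, noise_var):
    """Knowledge-gradient factor for each alternative under independent
    Gaussian beliefs N(mu[i], sigma2[i]) and Gaussian observation noise."""
    sigma_tilde = sigma2 / np.sqrt(sigma2 + noise_var)   # reduction in posterior std
    best_other = np.array([np.max(np.delete(mu, i)) for i in range(len(mu))])
    zeta = -np.abs(mu - best_other) / sigma_tilde
    return sigma_tilde * (zeta * norm.cdf(zeta) + norm.pdf(zeta))

# Illustrative beliefs over five alternatives.
mu = np.array([1.0, 1.2, 0.8, 1.1, 0.5])
sigma2 = np.array([0.5, 0.1, 1.0, 0.3, 2.0])
noise_var = 1.0

nu = kg_factors(mu, sigma2, noise_var)
remaining = 20                      # measurements left (assumed finite horizon)
online_score = mu + remaining * nu  # trade off immediate reward vs. learning
print("measure alternative:", int(np.argmax(online_score)))
```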

Journal: CoRR, 2018
Lam M. Nguyen, Nam H. Nguyen, Dzung T. Phan, Jayant Kalagnanam, Katya Scheinberg

In this paper, we consider a general stochastic optimization problem which is often at the core of supervised learning, such as deep learning and linear classification. We consider a standard stochastic gradient descent (SGD) method with a fixed, large step size and propose a novel assumption on the objective function, under which this method has the improved convergence rates (to a neighborhoo...
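
A minimal sketch of the baseline procedure the abstract refers to: stochastic gradient descent with a fixed step size on a finite-sum least-squares objective, whose iterates settle in a neighbourhood of the minimizer. The paper's new assumption and convergence analysis are not reproduced, and all data and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Finite-sum objective typical of supervised learning:
#   F(w) = (1/n) * sum_i 0.5 * (x_i . w - y_i)^2
n, dim = 5000, 10
X = rng.normal(size=(n, dim))
w_true = rng.normal(size=dim)
y = X @ w_true + 0.1 * rng.normal(size=n)

w = np.zeros(dim)
step = 0.05                       # fixed, fairly large step size
for t in range(20000):
    i = rng.integers(n)           # sample one training example
    g = (X[i] @ w - y[i]) * X[i]  # stochastic gradient of the i-th summand
    w -= step * g                 # iterates end up in a noise-dominated
                                  # neighbourhood of the minimizer

print("distance to w_true:", np.linalg.norm(w - w_true))
```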

Chart: number of search results per year