Search results for: frank and wolfe method
Number of results: 17046428
Motivated principally by the low-rank matrix completion problem, we present an extension of the Frank–Wolfe method that is designed to induce near-optimal solutions on low-dimensional faces of the feasible region. This is accomplished by a new approach to generating “in-face” directions at each iteration, as well as through new choice rules for selecting between in-face and “regular” Frank–Wolfe ...
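The reason Frank–Wolfe is attractive in this setting is that, over a nuclear-norm ball, every linear minimization step returns a rank-one matrix, so the iterates stay low-rank. Below is a minimal Python sketch of that standard step for a least-squares completion objective over a ball of radius delta; the names (frank_wolfe_matrix_completion, observed_mask, delta) are illustrative, and the "in-face" extension described in the abstract is not reproduced here.

# Minimal sketch of a standard Frank-Wolfe step for matrix completion over a
# nuclear-norm ball of radius `delta`; names and step rule are illustrative.
import numpy as np

def frank_wolfe_matrix_completion(X_obs, observed_mask, delta, n_iters=100):
    """Minimize 0.5 * ||P_Omega(Z - X_obs)||_F^2 subject to ||Z||_* <= delta."""
    Z = np.zeros_like(X_obs, dtype=float)
    for k in range(n_iters):
        grad = observed_mask * (Z - X_obs)          # gradient of the least-squares loss
        # Linear minimization oracle over the nuclear-norm ball: the minimizer is
        # -delta * u1 v1^T for the top singular pair of the gradient.
        u, s, vt = np.linalg.svd(grad, full_matrices=False)
        S = -delta * np.outer(u[:, 0], vt[0, :])    # rank-one extreme point
        gamma = 2.0 / (k + 2.0)                     # classic open-loop step size
        Z = (1.0 - gamma) * Z + gamma * S           # each step adds at most rank one
    return Z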
We develop a new Newton Frank–Wolfe algorithm to solve a class of constrained self-concordant minimization problems using linear minimization oracles (LMO). Unlike L-smooth convex functions, where Lipschitz continuity of the objective gradient holds globally, self-concordant functions only admit local bounds, making it difficult to estimate the number of linear minimization oracle (LMO) calls required by the underlying optimization algorithm. Fortunately, we can still pr...
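A hedged sketch of the Newton-plus-Frank–Wolfe structure described above: the quadratic (Newton) model at each outer iterate is minimized approximately by plain Frank–Wolfe steps, so the feasible set is accessed only through a user-supplied linear minimization oracle lmo. The damping factor and iteration counts are simplistic placeholders rather than the paper's self-concordance-based rules.

# Hedged sketch: outer Newton-type loop, inner Frank-Wolfe on the quadratic model.
import numpy as np

def newton_frank_wolfe(grad, hess, lmo, x0, outer_iters=20, inner_iters=50):
    x = x0.astype(float).copy()
    for _ in range(outer_iters):
        g, H = grad(x), hess(x)
        # Inner loop: Frank-Wolfe on m(y) = g^T (y - x) + 0.5 (y - x)^T H (y - x)
        # over the feasible set, using only LMO calls.
        y = x.copy()
        for k in range(inner_iters):
            model_grad = g + H @ (y - x)
            s = lmo(model_grad)                     # argmin over the feasible set of <model_grad, s>
            gamma = 2.0 / (k + 2.0)
            y = (1.0 - gamma) * y + gamma * s
        x = x + 0.5 * (y - x)                       # damped Newton-type update (placeholder damping)
    return x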
We propose a randomized block-coordinate variant of the classic Frank-Wolfe algorithm for convex optimization with block-separable constraints. Despite its lower iteration cost, we show that it achieves the same convergence rate as the full Frank-Wolfe algorithm. We also show that, when applied to the dual structural support vector machine (SVM) objective, this algorithm has the same low iterat...
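The idea can be sketched in a few lines for a product domain C = C_1 × … × C_n, where each iteration touches a single randomly chosen block and therefore needs only one cheap block-wise linear minimization (for the structural SVM dual, a max-oracle over one training example). The callables block_grad and block_lmo are assumed to be problem-specific; the step size 2n/(k + 2n) is the one commonly used in the block-coordinate Frank–Wolfe analysis.

# Minimal sketch of a randomized block-coordinate Frank-Wolfe step over a product domain.
import numpy as np

def block_coordinate_fw(block_grad, block_lmo, x_blocks, n_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(x_blocks)
    for k in range(n_iters):
        i = rng.integers(n)                         # pick one block uniformly at random
        g_i = block_grad(x_blocks, i)               # partial gradient for block i
        s_i = block_lmo(g_i, i)                     # linear minimization over C_i only
        gamma = 2.0 * n / (k + 2.0 * n)             # block-coordinate FW step size
        x_blocks[i] = (1.0 - gamma) * x_blocks[i] + gamma * s_i
    return x_blocks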
Finding the maximum a posteriori (MAP) estimate is a common problem in computer vision, arising for example in inference in Markov random fields. However, it is in general intractable, and one has to resort to approximate solutions, e.g. quadratic programming relaxations. In this paper, we propose a robust Frank-Wolfe method [6] to perform MAP inference. Our algorithm optimizes the quadratic programming problem by alternat...
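For concreteness, here is a hedged Python sketch of plain Frank–Wolfe applied to a generic QP relaxation of MAP inference, where the relaxed marginals live in a product of probability simplices and the linear minimization oracle decomposes into a per-node argmin. The inputs unary (n_nodes × n_labels) and the square pairwise matrix Q (of size n_nodes·n_labels per side) are illustrative assumptions; the robust variant proposed in the paper is not reproduced.

# Hedged sketch: Frank-Wolfe on a QP relaxation of MAP over a product of simplices.
import numpy as np

def fw_map_inference(unary, Q, n_iters=200):
    n_nodes, n_labels = unary.shape
    mu = np.full((n_nodes, n_labels), 1.0 / n_labels)    # uniform initial marginals
    for k in range(n_iters):
        flat = mu.reshape(-1)
        grad = (unary.reshape(-1) + Q @ flat).reshape(n_nodes, n_labels)
        # LMO over the product of simplices: one-hot on the smallest gradient entry per node.
        s = np.zeros_like(mu)
        s[np.arange(n_nodes), grad.argmin(axis=1)] = 1.0
        gamma = 2.0 / (k + 2.0)
        mu = (1.0 - gamma) * mu + gamma * s
    return mu.argmax(axis=1)                              # decode a labeling from the marginals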
We analyze two novel randomized variants of the Frank-Wolfe (FW) or conditional gradient algorithm. While classical FW algorithms require solving a linear minimization problem over the domain at each iteration, the proposed method only requires solving a linear minimization problem over a small subset of the original domain. The first algorithm that we propose is a randomized variant of the ori...
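A minimal sketch of that cheaper oracle, assuming the domain is the convex hull of a finite list of atoms: each iteration samples a small random subset of atoms and performs the linear minimization only over that subset. The subset size and sampling scheme are illustrative placeholders, not the paper's.

# Sketch: Frank-Wolfe with a randomized, subset-restricted linear minimization.
import numpy as np

def randomized_frank_wolfe(grad, atoms, x0, subset_size=10, n_iters=500, seed=0):
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for k in range(n_iters):
        idx = rng.choice(len(atoms), size=subset_size, replace=False)
        g = grad(x)
        # "Cheap" LMO: best atom among the sampled subset only.
        s = min((atoms[i] for i in idx), key=lambda a: float(g @ a))
        gamma = 2.0 / (k + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x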
A new iterative algorithm for the solution of minimization problems in infinite-dimensional Hilbert spaces which involve sparsity constraints in the form of ℓ_p-penalties is proposed. In contrast to the well-known algorithm considered by Daubechies, Defrise and De Mol, it uses hard instead of soft shrinkage. It is shown that the hard shrinkage algorithm is a special case of the generalized conditiona...
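To make the contrast concrete, the following sketch implements the two shrinkage rules for the finite-dimensional linear inverse problem min 0.5·||Ax − b||² + penalty(x): the iterative soft-shrinkage (ISTA-type) scheme of Daubechies, Defrise and De Mol versus the same gradient/Landweber iteration with hard shrinkage. The step size and threshold are illustrative, and the generalized conditional gradient analysis of the paper is not reproduced.

# Sketch: iterative shrinkage with either soft or hard thresholding.
import numpy as np

def soft_shrink(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def hard_shrink(x, tau):
    return np.where(np.abs(x) > tau, x, 0.0)

def iterative_shrinkage(A, b, tau, shrink, n_iters=200):
    step = 1.0 / np.linalg.norm(A, 2) ** 2           # Landweber step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = shrink(x - step * A.T @ (A @ x - b), tau)
    return x

# x_soft = iterative_shrinkage(A, b, 0.1, soft_shrink)   # ISTA-type iteration
# x_hard = iterative_shrinkage(A, b, 0.1, hard_shrink)   # hard-shrinkage variant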
The celebrated minimax theorem for zero-sum games, first discovered by John von Neumann in the 1920s [14, 10], is certainly a foundational result in the theory of games. It states that two players, playing a game with zero-sum payoffs, each have an optimal randomized strategy that can be played obliviously – that is, even announcing their strategy in advance to an optimal opponent would not dam...
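For reference, the theorem being described has the standard matrix-game form below (payoff matrix A, mixed strategies over probability simplices; the notation is added here, not taken from the snippet):

\[
  \min_{x \in \Delta_m} \max_{y \in \Delta_n} x^{\top} A y
  \;=\;
  \max_{y \in \Delta_n} \min_{x \in \Delta_m} x^{\top} A y ,
\]

so each player has a randomized strategy whose guaranteed payoff is unaffected by announcing it in advance.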
We consider the Frank-Wolfe (FW) method for constrained convex optimization, and we show that this classical technique can be interpreted from a different perspective: FW emerges as the computation of an equilibrium (saddle point) of a special convex-concave zero-sum game. This saddle-point trick relies on the existence of no-regret online learning to both generate a sequence of iterates but al...
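One way to illustrate the no-regret reading is the hedged sketch below: the "set player" chooses points by Follow-the-Leader against the accumulated gradients, which amounts to one linear-minimization-oracle call per round, and the running average of its plays is returned as the candidate solution. This is only meant to convey the flavor of such dynamics; it is not the paper's full saddle-point construction.

# Hedged sketch: Follow-the-Leader via the LMO, with averaged plays as the output.
import numpy as np

def fw_via_ftl(grad, lmo, x0, n_iters=200):
    x_avg = x0.astype(float).copy()
    grad_sum = np.zeros_like(x_avg)
    for t in range(1, n_iters + 1):
        grad_sum += grad(x_avg)            # gradient player reports the gradient at the average
        s = lmo(grad_sum)                  # FTL best response = LMO on the accumulated linear loss
        x_avg += (s - x_avg) / (t + 1.0)   # running average of the set player's plays
    return x_avg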