Efficient algorithms for online convex optimization and their applications
Author
Abstract
In this thesis we study algorithms for online convex optimization and their relation to approximate optimization. In the first part, we propose a new algorithm for a general online optimization framework called online convex optimization. Whereas previous efficient algorithms are mostly gradient-descent based, the new algorithm is inspired by the Newton-Raphson method for convex optimization, and is hence called Online Newton Step (sketched below). We prove that in certain scenarios Online Newton Step guarantees logarithmic regret, as opposed to the polynomial bounds achieved by previous algorithms. The analysis is based on new insights concerning the natural “follow-the-leader” (FTL) method for online optimization, and answers some open problems regarding FTL. One application is the portfolio management problem, for which we describe experimental results over real market data. In the second part of the thesis, we describe a general scheme for utilizing online game-playing algorithms to obtain efficient algorithms for offline optimization. Using new and old online convex optimization algorithms, we show how to derive the following:

1. Approximation algorithms for convex programming with linear dependence on the approximation guarantee.
2. Efficient algorithms for haplotype frequency estimation.
3. Fast algorithms for approximate semidefinite programming.
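The Online Newton Step update can be sketched as follows. This is a minimal illustration rather than the thesis's exact formulation: the feasible set is taken to be the Euclidean unit ball, and `gamma` and `eps` are illustrative stand-ins for the exp-concavity-dependent constants used in the actual analysis.

```python
# Minimal sketch of Online Newton Step (ONS), assuming the feasible set is the
# Euclidean unit ball; gamma and eps are illustrative stand-ins for the
# exp-concavity-dependent constants of the actual algorithm.
import numpy as np

def project_ball_in_A_norm(A, x, tol=1e-10):
    """Generalized projection of x onto {y : ||y||_2 <= 1} in the norm induced
    by A, i.e. argmin_y (y - x)^T A (y - x), via bisection on the KKT
    multiplier lam in y = (A + lam*I)^{-1} A x."""
    if np.linalg.norm(x) <= 1.0:
        return x
    d = len(x)
    Ax = A @ x
    lo, hi = 0.0, 1.0
    while np.linalg.norm(np.linalg.solve(A + hi * np.eye(d), Ax)) > 1.0:
        hi *= 2.0  # grow hi until the candidate lands inside the ball
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(np.linalg.solve(A + mid * np.eye(d), Ax)) > 1.0:
            lo = mid
        else:
            hi = mid
    return np.linalg.solve(A + hi * np.eye(d), Ax)

def online_newton_step(grad_fn, T, d, gamma=0.5, eps=1.0):
    """Run T rounds of ONS in dimension d; grad_fn(t, x) returns the gradient
    of the round-t loss at the point x."""
    x = np.zeros(d)
    A = eps * np.eye(d)  # A_t = eps*I + sum_s grad_s grad_s^T
    iterates = []
    for t in range(T):
        iterates.append(x.copy())
        g = grad_fn(t, x)
        A += np.outer(g, g)  # rank-one second-order surrogate update
        # Newton-like step followed by a generalized projection.
        x = project_ball_in_A_norm(A, x - (1.0 / gamma) * np.linalg.solve(A, g))
    return iterates
```

Accumulating gradient outer products in A gives the method its Newton-like flavor without computing exact Hessians; this second-order surrogate is the ingredient behind the logarithmic regret in the scenarios the abstract alludes to (e.g. exp-concave losses), in contrast to the polynomial-in-T bounds of gradient-descent-based methods.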
Similar resources
Projection-free Online Learning
The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique. We obtain a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic ...
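The projection-free idea above replaces the projection with a single linear optimization (Frank-Wolfe) step over the feasible set. A minimal sketch, assuming the feasible set is the ℓ1 ball (where the linear oracle simply returns a signed vertex) and using the standard 2/(t+2) step size for illustration rather than the paper's own schedule:

```python
# Minimal sketch of one projection-free (Frank-Wolfe) update, assuming the
# feasible set is the l1 ball of the given radius; the 2/(t+2) schedule is a
# standard illustrative choice, not the paper's online schedule.
import numpy as np

def l1_linear_oracle(g, radius=1.0):
    """argmin over {v : ||v||_1 <= radius} of <g, v>: a signed vertex."""
    i = np.argmax(np.abs(g))
    v = np.zeros_like(g, dtype=float)
    v[i] = -radius * np.sign(g[i])
    return v

def frank_wolfe_step(x, g, t, radius=1.0):
    """Move from x toward the oracle's vertex for gradient g; the update
    stays feasible as a convex combination, so no projection is needed."""
    v = l1_linear_oracle(g, radius)
    sigma = 2.0 / (t + 2.0)
    return (1.0 - sigma) * x + sigma * v
```

The point of the construction is that the oracle call costs O(d) here, whereas projecting onto a complicated polytope can be as expensive as solving a quadratic program at every round.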
A Linearly Convergent Conditional Gradient Algorithm with Applications to Online and Stochastic Optimization
Linear optimization is often algorithmically simpler than non-linear convex optimization. Linear optimization over matroid polytopes, matching polytopes, and path polytopes are examples of problems for which we have simple and efficient combinatorial algorithms, but whose non-linear convex counterparts are harder and admit significantly less efficient algorithms. This motivates the computation...
Primal-dual path-following algorithms for circular programming
Circular programming problems are a new class of convex optimization problems that include second-order cone programming problems as a special case. Alizadeh and Goldfarb [Math. Program. Ser. A 95 (2003) 3-51] introduced primal-dual path-following algorithms for solving second-order cone programming problems. In this paper, we generalize their work by using the machinery of Euclidean Jordan alg...
Low latency IIR digital filter design by using metaheuristic optimization algorithms
Filters are a particularly important class of LTI systems. Digital filters have great impact on modern signal processing due to their programmability, reusability, and capacity to reduce noise to a satisfactory level. Over the past few decades, IIR digital filter design has been an important research field. Design of an IIR digital filter with desired specifications leads to a non-convex optimization pr...
An efficient one-layer recurrent neural network for solving a class of nonsmooth optimization problems
Constrained optimization problems have a wide range of applications in science, economics, and engineering. In this paper, a neural network model is proposed to solve a class of nonsmooth constrained optimization problems with a nonsmooth convex objective function subject to nonlinear inequality and affine equality constraints. It is a one-layer non-penalty recurrent neural network based on the...