SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms
Authors
Abstract
The Support Vector Machine (SVM) is one of the most classical approaches for classification and regression. Despite decades of study, obtaining practical algorithms for SVM is still an active research problem in machine learning. In this paper, we propose a new perspective on SVM via saddle point optimization. We provide an algorithm which achieves (1 − ε)-approximations with running time Õ(nd + n√(d/ε)) for both the separable case (hard margin SVM) and the non-separable case (ν-SVM), where n is the number of points and d is the dimensionality. To the best of our knowledge, the current best algorithm for hard margin SVM, the Gilbert algorithm [16], requires O(nd/ε) time; our algorithm improves the running time by a factor of √d/√ε. For ν-SVM, besides the well-known quadratic programming approach, which requires Ω(n²d) time [21, 31], no better algorithm is known. In the paper, we provide the first nearly linear time algorithm for ν-SVM. We also consider the distributed setting and provide distributed algorithms with low communication cost via saddle point optimization. Our algorithms require Õ(k(d + √(d/ε))) communication cost, where k is the number of clients, almost matching the theoretical lower bound.
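To make the saddle point perspective concrete, note that hard margin SVM can be written as the bilinear saddle point problem max_{‖w‖≤1} min_{p∈Δ_n} Σ_i p_i y_i ⟨w, x_i⟩, where Δ_n is the probability simplex over the n points. The Python sketch below solves this formulation with plain projected gradient steps on w and multiplicative weights steps on p; it is a minimal illustration of the formulation, not the paper's accelerated algorithm, and the function name, step sizes, and iteration count are illustrative assumptions. This simple scheme only attains the generic O(1/√T) rate of gradient descent-ascent, not the Õ(nd + n√(d/ε)) bound claimed above.

    import numpy as np

    def svm_saddle_sketch(X, y, iters=2000, eta_w=0.5, eta_p=0.5):
        """Toy primal-dual solver for the bilinear saddle point form of
        hard margin SVM: max_{||w||<=1} min_{p in simplex} sum_i p_i y_i <w, x_i>.
        A sketch of the formulation only, not the paper's algorithm."""
        n, d = X.shape
        w = np.zeros(d)
        p = np.full(n, 1.0 / n)      # uniform weights over the n points
        w_sum = np.zeros(d)
        for _ in range(iters):
            margins = y * (X @ w)    # signed margins y_i <w, x_i>
            # Dual step: multiplicative weights push mass onto the
            # points with the smallest (worst) margin.
            p = p * np.exp(-eta_p * margins)
            p = p / p.sum()
            # Primal step: ascend the weighted margin, then project w
            # back onto the unit Euclidean ball.
            w = w + eta_w * (X.T @ (p * y))
            norm = np.linalg.norm(w)
            if norm > 1.0:
                w = w / norm
            w_sum += w
        return w_sum / iters         # return the averaged iterate

    # Toy usage on a separable 2-D dataset.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
    y = np.concatenate([np.ones(20), -np.ones(20)])
    w = svm_saddle_sketch(X, y)
    print("worst margin:", (y * (X @ w)).min())

The averaged iterate is returned because, for saddle point problems, the convergence guarantee of plain gradient descent-ascent holds for the ergodic average rather than the last iterate.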
Similar Papers
Communication-Efficient Distributed Primal-Dual Algorithm for Saddle Point Problem
Primal-dual algorithms, which are proposed to solve reformulated convex-concave saddle point problems, have been proven to be effective for solving a generic class of convex optimization problems, especially when the problems are ill-conditioned. However, the saddle point problem still lacks a distributed optimization framework where primal-dual algorithms can be employed. In this paper, we pro...
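For context, the convex-concave saddle point problems that such primal-dual algorithms target are usually stated in the standard composite bilinear form below (a textbook template, assumed here since the abstract is truncated):

    \min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} \; f(x) + \langle A x, y \rangle - g^{*}(y)

Here f and g* are convex, A is a linear map, and the bilinear coupling ⟨Ax, y⟩ is what lets primal-dual methods take separate proximal steps in x and y, which keeps them effective even when A is ill-conditioned.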
Saddle Point Seeking for Convex Optimization Problems
In this paper, we consider convex optimization problems with constraints. By combining the idea of a Lie bracket approximation for extremum seeking systems and saddle point algorithms, we propose a feedback which steers a single-integrator system to the set of saddle points of the Lagrangian associated to the convex optimization problem. We prove practical uniform asymptotic stability of the se...
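Concretely, for a convex problem min_x f(x) subject to g(x) = 0 with Lagrangian L(x, λ) = f(x) + λᵀ g(x), the saddle point flow that such a feedback approximates is the classical Arrow-Hurwicz dynamics (a standard form given for illustration, not quoted from the truncated abstract):

    \dot{x} = -\nabla_x L(x, \lambda), \qquad \dot{\lambda} = \nabla_{\lambda} L(x, \lambda) = g(x)

Its equilibria are exactly the saddle points of L; the Lie bracket approximation recovers these gradient directions from the oscillatory probing signals of the extremum seeking system.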
Fast Rates for Empirical Risk Minimization of Strict Saddle Problems
We derive bounds on the sample complexity of empirical risk minimization (ERM) in the context of minimizing non-convex risks that admit the strict saddle property. Recent progress in non-convex optimization has yielded efficient algorithms for minimizing such functions. Our results imply that these efficient algorithms are statistically stable and also generalize well. In particular, we derive ...
Optimization Algorithms for Faster Computational Geometry
We study two fundamental problems in computational geometry: finding the maximum inscribed ball (MaxIB) inside a bounded polyhedron defined by m hyperplanes in a d-dimensional space, and finding the minimum enclosing ball (MinEB) of a set of n points in a d-dimensional space. We translate both these geometric problems into optimization problems and apply first-order methods for smooth and saddle-...
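As one example, MinEB admits a classical saddle point reformulation (a standard form, not necessarily the exact one used in this paper):

    \min_{x \in \mathbb{R}^d} \max_{p \in \Delta_n} \; \sum_{i=1}^{n} p_i \, \lVert x - a_i \rVert^2

where a_1, ..., a_n are the input points and Δ_n is the probability simplex. The optimal x is the center of the minimum enclosing ball, and smoothing the inner maximum (for example with a softmax) makes accelerated first-order methods applicable.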
Optimal Placement and Sizing of Multiple Renewable Distributed Generation Units Considering Load Variations Via Dragonfly Optimization Algorithm
The progression towards smart grids, integrating renewable energy resources, has increased the integration of distributed generators (DGs) into power distribution networks. However, several economic and technical challenges can result from the unsuitable incorporation of DGs in existing distribution networks. Therefore, optimal placement and sizing of DGs are of paramount importance to improve ...
Journal: CoRR
Volume: abs/1705.07252
Publication year: 2017