Search results for: relevance vector machines

Number of results: 370942

2000
Yuh-Jye Lee

Smoothing methods, extensively used for solving important mathematical programming problems and applications, are proposed here to generate and solve an unconstrained smooth reformulation of support vector machines for pattern classification using completely arbitrary kernels. We term such reformulations smooth support vector machines (SSVMs). A fast Newton-Armijo algorithm for solving the SSVM...
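The smoothing idea described above replaces the non-differentiable plus function in the SVM objective with a smooth surrogate, so unconstrained Newton-type methods apply. A minimal sketch (my own notation, using the well-known log-exp smoothing of max(x, 0); the exact SSVM formulation may differ in details):

```python
import numpy as np

def smooth_plus(x, alpha):
    """Smooth approximation of max(x, 0):
    p(x, a) = x + (1/a) * log(1 + exp(-a * x)).
    Converges to the plus function as the smoothing parameter a grows.
    np.logaddexp keeps the computation stable for large a * x.
    """
    return x + np.logaddexp(0.0, -alpha * x) / alpha

def smooth_hinge_objective(w, b, X, y, C=1.0, alpha=5.0):
    """Unconstrained smooth surrogate of a (squared-hinge) SVM objective:
    0.5 * ||w||^2 + C * sum_i p(1 - y_i * (x_i . w + b), alpha)^2.
    Being twice differentiable, it can be minimized by Newton-type steps
    with an Armijo line search, as the abstract describes.
    """
    margins = 1.0 - y * (X @ w + b)
    return 0.5 * (w @ w) + C * np.sum(smooth_plus(margins, alpha) ** 2)
```

As alpha increases, `smooth_plus` tracks max(x, 0) ever more closely while remaining differentiable everywhere, which is what makes the Newton-Armijo approach applicable.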

2005
Jürgen Schmidhuber Matteo Gagliolo Daan Wierstra Faustino Gomez

Existing Support Vector Machines (SVMs) need pre-wired finite time windows to predict and classify time series. They do not have an internal state necessary to deal with sequences involving arbitrary long-term dependencies. Here we introduce the first recurrent, truly sequential SVM-like devices with internal adaptive states, trained by a novel method called EVOlution of systems with KErnel-bas...

Journal: :INFORMS Journal on Computing 2010
Emilio Carrizosa Belen Martin-Barragan Dolores Romero Morales

The widely used Support Vector Machine (SVM) method has been shown to yield very good results in Supervised Classification problems. Other methods such as Classification Trees have become more popular among practitioners than SVM thanks to their interpretability, which is an important issue in Data Mining. In this work, we propose an SVM-based method that automatically detects the most important pre...

2010
Michinari Momma Kohei Hatano Hiroki Nakayama

This paper proposes the ellipsoidal SVM (e-SVM) that uses an ellipsoid center, in the version space, to approximate the Bayes point. Since SVM approximates it by a sphere center, e-SVM provides an extension to SVM for better approximation of the Bayes point. Although the idea has been mentioned before (Ruján (1997)), no work has been done for formulating and kernelizing the method. Starting fro...

2003
Ji Zhu Saharon Rosset Trevor Hastie Rob Tibshirani

The standard 2-norm SVM is known for its good performance in two-class classification. In this paper, we consider the 1-norm SVM. We argue that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features. We also propose an efficient algorithm that computes the whole solution path of the 1-norm SVM, hence facilitates adaptive selection of th...
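Because the 1-norm penalty and the hinge loss are both piecewise linear, the 1-norm SVM can be fit as a linear program. A sketch (my own illustrative formulation, not the path algorithm the abstract proposes), splitting each signed variable into nonnegative parts:

```python
import numpy as np
from scipy.optimize import linprog

def one_norm_svm(X, y, C=1.0):
    """Fit a 1-norm SVM by linear programming:
        min  ||w||_1 + C * sum(xi)
        s.t. y_i * (w . x_i + b) >= 1 - xi_i,  xi >= 0.
    Variables: w = u - v with u, v >= 0; b = p - q with p, q >= 0; slacks xi.
    """
    n, d = X.shape
    # Objective: sum(u) + sum(v) + 0*p + 0*q + C * sum(xi)
    c = np.concatenate([np.ones(2 * d), np.zeros(2), C * np.ones(n)])
    # Margin constraints rewritten as A_ub @ z <= b_ub:
    #   -y_i * (x_i.(u - v) + (p - q)) - xi_i <= -1
    Yx = y[:, None] * X
    A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * d + 2 + n))
    z = res.x
    w = z[:d] - z[d:2 * d]
    b = z[2 * d] - z[2 * d + 1]
    return w, b
```

The L1 objective tends to drive many coordinates of w exactly to zero, which is the feature-selection advantage over the 2-norm SVM that the abstract alludes to.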

2007
Nando de Freitas Marta Milo Philip Clarkson Mahesan Niranjan Andrew Gee

In this paper, we derive an algorithm to train support vector machines sequentially. The algorithm makes use of the Kalman filter and is optimal in a minimum variance framework. It extends the support vector machine paradigm to applications involving real-time and non-stationary signal processing. It also provides a computationally efficient alternative to the problem of quadratic optimisation.

1999
Yoram Singer

We describe an iterative algorithm for building vector machines used in classification tasks. The algorithm builds on ideas from support vector machines, boosting, and generalized additive models. The algorithm can be used with various continuously differentiable functions that bound the discrete (0-1) classification loss and is very simple to implement. We test the proposed algorithm with two di...

Journal: :CoRR 2015
Zhixiang Eddie Xu Jacob R. Gardner Stephen Tyree Kilian Q. Weinberger

Support vector machines (SVM) can classify data sets along highly non-linear decision boundaries because of the kernel-trick. This expressiveness comes at a price: During test-time, the SVM classifier needs to compute the kernel inner product between a test sample and all support vectors. With large training data sets, the time required for this computation can be substantial. In this paper, we ...

2001
Haixin Ke Xuegong Zhang

A support vector machine constructs an optimal hyperplane from a small set of samples near the boundary. This makes it sensitive to these specific samples and tends to result in machines either too complex with poor generalization ability or too imprecise with high training error, depending on the kernel parameters. In this paper, we present an improved version of the method, called editing sup...

Journal: :Math. Program. 2004
Michael C. Ferris Todd S. Munson

The linear support vector machine can be posed as a quadratic program in a variety of ways. In this paper, we look at a formulation using the two-norm for the misclassification error that leads to a positive definite quadratic program with a single equality constraint when the Wolfe dual is taken. The quadratic term is a small rank update to a positive definite matrix. We reformulate the optima...
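The two-norm misclassification penalty mentioned above yields a strictly convex dual. As a sketch in standard notation (my reconstruction of the usual derivation, which may differ from the paper's exact formulation):

```latex
\text{Primal:}\quad
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\|w\|^2 + \tfrac{C}{2}\|\xi\|^2
\quad \text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1 - \xi_i .

\text{Wolfe dual:}\quad
\min_{\alpha}\ \tfrac{1}{2}\,\alpha^\top\!\left(D K D + \tfrac{1}{C} I\right)\alpha - e^\top \alpha
\quad \text{s.t.}\quad y^\top \alpha = 0,\ \ \alpha \ge 0,
```

where $D = \operatorname{diag}(y)$, $K_{ij} = x_i^\top x_j$, and $e$ is the all-ones vector. The added $\tfrac{1}{C} I$ term makes the Hessian positive definite, and $y^\top \alpha = 0$ is the single equality constraint the abstract refers to.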

[Chart: number of search results per year; click the chart to filter results by publication year]