Search results for: overextended margin

Number of results: 34,233

Journal: CoRR 2012
Guangxu Guo, Songcan Chen

Schapire’s margin theory provides a theoretical explanation for the success of boosting-type methods and shows that a good margin distribution (MD) over the training samples is essential for generalization. However, the statement that an MD is good is vague; consequently, many recently developed algorithms try to generate an MD that is good in their own sense so as to improve generalization. Unlike their indire...
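For reference, the (normalized) margin of a labeled example under a boosted ensemble, as used in Schapire's analysis, is commonly written as

\[ \operatorname{margin}(x, y) \;=\; \frac{y \sum_{t} \alpha_t h_t(x)}{\sum_{t} \alpha_t}, \qquad y \in \{-1,+1\},\; h_t(x) \in \{-1,+1\},\; \alpha_t \ge 0, \]

and the margin distribution is the empirical distribution of this quantity over the training set; the "goodness" the abstract refers to concerns properties of that whole distribution rather than only its minimum value.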

2009
Xu Miao, Rajesh P. N. Rao

Boltzmann Machines are a powerful class of undirected graphical models. Originally proposed as artificial neural networks, they can be regarded as a type of Markov Random Field in which the connection weights between nodes are symmetric and learned from data. They are also closely related to recent models such as Markov logic networks and Conditional Random Fields. A major challenge for Boltzmann...
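As a reminder of the model class in question, a Boltzmann Machine over binary units x in {0,1}^n with symmetric weights W (zero diagonal) and biases b assigns

\[ P(x) \;=\; \frac{1}{Z}\exp\!\big(-E(x)\big), \qquad E(x) \;=\; -\sum_{i<j} W_{ij}\, x_i x_j \;-\; \sum_i b_i x_i, \]

where the partition function \(Z = \sum_{x'} \exp(-E(x'))\) sums over all configurations and is generally intractable, which is the usual source of difficulty when training such models.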

2003
Ben Taskar, Carlos Guestrin, Daphne Koller

In typical classification tasks, we seek a function that assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs), which maximize the margin of confidence of the classifier, are the method of choice for many such tasks. Their popularity stems both from the ability to use high-dimensional feature spaces and from their strong theoretical guarantees. Ho...
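For context, the margin-maximization problem the snippet refers to is, in its hard-margin form,

\[ \min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^2 \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1 \;\; \text{for all } i, \]

whose solution separates the two classes with geometric margin \(2/\lVert w \rVert\); high-dimensional feature spaces enter by replacing the inner products with a kernel \(k(x_i, x_j)\).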

2004
Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, Christopher D. Manning

We present a novel discriminative approach to parsing inspired by the large-margin criterion underlying support vector machines. Our formulation uses a factorization analogous to the standard dynamic programs for parsing. In particular, it allows one to efficiently learn a model which discriminates among the entire space of parse trees, as opposed to reranking the top few candidates. Our models...
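A schematic form of the large-margin criterion for structured outputs such as parse trees (generic notation, not taken verbatim from the paper) requires the correct tree to outscore every alternative by a margin that grows with its error:

\[ w^\top f(x_i, y_i) \;\ge\; w^\top f(x_i, y) \;+\; \Delta(y_i, y) \quad \text{for all } y \ne y_i, \]

where \(f(x, y)\) is a feature vector over sentence–tree pairs and \(\Delta\) measures the loss of tree \(y\) against the gold tree \(y_i\); a dynamic-programming factorization of \(f\) is what keeps the exponentially many constraints tractable.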

2008
Pannagadatta K. Shivaswamy, Tony Jebara

In classification problems, Support Vector Machines maximize the margin of separation between two classes. While the paradigm has been successful, the solution obtained by SVMs is dominated by directions of large data spread and is biased toward separating the classes by cutting along those large-spread directions. This article proposes a novel formulation to overcome such sensitivity and maximizes the ...
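One way to make the described sensitivity concrete (a sketch in the spirit of relative-margin formulations; the article's exact constraints may differ) is to maximize the margin while bounding the projections of all training points onto the decision direction:

\[ \min_{w,\,b} \; \tfrac{1}{2}\lVert w \rVert^2 \quad \text{s.t.} \quad y_i\,(w^\top x_i + b) \ge 1, \quad |\,w^\top x_i + b\,| \le B \;\; \text{for all } i, \]

so the separating hyperplane cannot simply align with a direction of large data spread unless doing so actually helps classification.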

2008
Gal Chechik

A fundamental problem in machine learning is to extract compact but relevant representations of empirical data. Relevance can be measured by the ability to make good decisions based on the representations, for example in terms of classification accuracy. Compact representations can lead to more human-interpretable models, as well as improve scalability. Furthermore, in multi-class and multi-tas...

2009
Andrew G. Howard

Large Margin Transformation Learning

2000
Thore Graepel, Ralf Herbrich, Robert C. Williamson

We present an improvement of Novikoff's perceptron convergence theorem. Reinterpreting this mistake bound as a margin dependent sparsity guarantee allows us to give a PAC-style generalisation error bound for the classifier learned by the perceptron learning algorithm. The bound value crucially depends on the margin a support vector machine would achieve on the same data set using the same kerne...
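The classical result being improved is Novikoff's theorem: if all training points lie in a ball of radius \(R\) and some unit-norm weight vector separates them with margin \(\gamma > 0\), the perceptron makes at most

\[ M \;\le\; \left(\frac{R}{\gamma}\right)^{2} \]

mistakes; since each mistake contributes one training example to the final hypothesis, \(M\) also bounds the number of "support" examples, which is the margin-dependent sparsity reading mentioned in the abstract.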

2012
Suicheng Gu, Yuhong Guo

In this paper, we investigate the problem of exploiting global information to improve the performance of SVMs on large scale classification problems. We first present a unified general framework for the existing min-max machine methods in terms of within-class dispersions and between-class dispersions. By defining a new within-class dispersion measure, we then propose a novel max-margin ratio m...
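For orientation, min-max machine formulations of the kind referenced here are often summarized (a hedged sketch, not the paper's notation) as choosing the direction that maximizes between-class separation relative to within-class dispersion,

\[ \max_{w \ne 0} \;\; \frac{w^\top(\mu_{+} - \mu_{-})}{\sqrt{w^\top \Sigma_{+}\, w} \;+\; \sqrt{w^\top \Sigma_{-}\, w}}, \]

where \(\mu_{\pm}\) and \(\Sigma_{\pm}\) are the class means and covariances; the abstract's proposal introduces a new within-class dispersion measure within this general framework.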

[Chart: number of search results per year]