Search results for: soft margin

Number of results: 158468


Journal: CoRR 2012
Yaman Aksu

Margin maximization in the hard-margin sense, proposed as a feature-elimination criterion by the MFE-LO method, is combined here with the data radius in a further attempt to lower generalization error, since several published bounds and bound-related formulations for lowering misclassification risk (or error) involve the radius, e.g. the product of the squared radius and the squared norm of the weight vector...
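
The radius-based quantity the abstract refers to, the product R² · ||w||², can be sketched in a few lines. This is only an illustration of the bound-related criterion, not the MFE-LO method itself: the large C approximates a hard margin, the radius R is approximated by the maximum distance to the data mean rather than the exact smallest enclosing ball, and the function name is hypothetical.

import numpy as np
from sklearn.svm import SVC

def radius_margin_score(X, y, C=1e6):
    # Illustrative sketch: R^2 * ||w||^2 for a near-hard-margin
    # linear SVM (large C approximates the hard margin).
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w_sq = float(np.sum(clf.coef_ ** 2))  # ||w||^2, the inverse squared geometric margin
    # Radius approximated by the max distance to the data mean,
    # not the exact smallest enclosing ball.
    R = np.linalg.norm(X - X.mean(axis=0), axis=1).max()
    return R ** 2 * w_sq

A feature-elimination loop in this spirit would tentatively drop the feature whose removal most reduces this score.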

1998
Takashi Onoda

Recently, ensemble methods like AdaBoost were successfully applied to character recognition tasks, seemingly defying the problem of overfitting. This paper shows that although AdaBoost rarely overfits in the low-noise regime, it clearly does so for higher noise levels. Central to understanding this fact is the margin distribution, and we find that AdaBoost achieves, doing gradient descent in an err...
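
The margin distribution in question can be inspected directly. The sketch below uses an assumed setup (scikit-learn's AdaBoostClassifier on synthetic data with 20% label noise, not the paper's character-recognition experiments) to compute the normalized training margins y · f(x):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Synthetic binary task with 20% label noise (flip_y); labels in {-1, +1}.
X, y01 = make_classification(n_samples=500, flip_y=0.2, random_state=0)
y = 2 * y01 - 1

clf = AdaBoostClassifier(n_estimators=200, random_state=0).fit(X, y)
# decision_function returns the weighted vote normalized by the total
# estimator weight, so y * f(x) is the normalized margin in [-1, 1].
margins = y * clf.decision_function(X)
print("minimum margin:", margins.min())
print("fraction of examples with margin < 0.1:", (margins < 0.1).mean())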

2007
Manfred K. Warmuth Karen A. Glocer Gunnar Rätsch

Algorithm 1: SoftBoost
1. Input: S = ⟨(x_1, y_1), …, (x_N, y_N)⟩, desired accuracy δ, and capping parameter ν ∈ [1, N].
2. Initialize: d^0_n to the uniform distribution.
3. Do for t = 1, …
   (a) Train the classifier on d^{t−1} and {u^1, …, u^{t−1}} and obtain hypothesis h_t. Set u^t_n = h_t(x_n) y_n.
   (b) Calculate the edge γ_t of h_t: γ_t = d^{t−1} · u^t.
   (c) Set γ̂_t = (min_{m=1…t} γ_m) − δ.
   (d) Set γ* = solution to the pr...
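
For intuition, here is a deliberately simplified Python sketch of the loop above. It is not the paper's algorithm: the relative-entropy projection SoftBoost solves in step (d) is replaced by an AdaBoost-style multiplicative update followed by a clipping-based projection onto the capped simplex {d : Σ d_n = 1, 0 ≤ d_n ≤ 1/ν}, and the final vote is uniform rather than derived from the dual solution.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def capped_projection(d, cap):
    # Clip weights above the cap and redistribute the excess mass
    # proportionally among the uncapped entries (a stand-in for
    # SoftBoost's exact entropy projection).
    d = d / d.sum()
    for _ in range(len(d)):
        over = d > cap
        if not over.any():
            break
        excess = (d[over] - cap).sum()
        d[over] = cap
        free = ~over
        if d[free].sum() == 0:
            break
        d[free] *= 1 + excess / d[free].sum()
    return d

def softboost_like(X, y, T=100, nu=10.0, eta=0.5):
    # Labels y assumed in {-1, +1}; nu must satisfy 1 <= nu <= N.
    N = len(y)
    cap = 1.0 / nu                      # capping constraint d_n <= 1/nu
    d = np.full(N, 1.0 / N)             # step 2: uniform distribution
    hypotheses = []
    for _ in range(T):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=d)
        u = h.predict(X) * y            # u_n = h(x_n) y_n
        gamma = d @ u                   # edge of the new hypothesis
        if gamma <= 0:                  # weak learner no better than chance
            break
        hypotheses.append(h)
        d = capped_projection(d * np.exp(-eta * u), cap)
    return hypotheses

def vote(hypotheses, X):
    # Uniform vote; SoftBoost derives coefficients from the dual solution.
    return np.sign(sum(h.predict(X) for h in hypotheses))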

Journal: IEEE Trans. Information Theory 2002
John Shawe-Taylor Nello Cristianini

Generalization bounds depending on the margin of a classifier are a relatively recent development. They provide an explanation of the performance of state-of-the-art learning systems such as support vector machines (SVMs) [1] and Adaboost [2]. The difficulty with these bounds has been either their lack of robustness or their looseness. The question of whether the generalization of a classifier ...
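
A representative bound of the kind discussed, stated loosely and up to constants (this generic form follows the margin-bound literature and is not necessarily the exact statement proved in this paper): with probability at least 1 − δ over m training examples, for a classifier f with margin parameter γ on data of radius R,

\Pr\bigl[\, y f(x) \le 0 \,\bigr] \;\le\; \frac{\bigl|\{\, i : y_i f(x_i) < \gamma \,\}\bigr|}{m} \;+\; O\!\left(\sqrt{\frac{1}{m}\left(\frac{R^2}{\gamma^2}\log^2 m + \log\frac{1}{\delta}\right)}\right)

The first term counts examples that violate the margin (the "soft" part), while the second shrinks as γ grows; trading these off against each other is exactly the tension such bounds formalize.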

2002
Sebastian Risau-Gusman Mirta B. Gordon

Typical learning curves for Soft Margin Classifiers (SMCs) learning both realizable and unrealizable tasks are determined using the tools of Statistical Mechanics. We derive the analytical behaviour of the learning curves in the regimes of small and large training sets. The generalization errors present different decay laws towards the asymptotic values as a function of the training set size, d...

2017
Xuezhi Liang Xiaobo Wang Zhen Lei Shengcai Liao Stan Z. Li

In deep classification, the softmax loss (Softmax) is arguably one of the most commonly used components for training deep convolutional neural networks (CNNs). However, this widely used loss is limited in that it does not encourage the discriminability of the learned features. Recently, the large-margin softmax loss (L-Softmax [14]) was proposed to explicitly enhance feature discrimination, with hard mar...
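
A minimal sketch of the margin-enforcing idea follows. This is a generic additive-margin softmax, not necessarily the exact formulation this paper proposes; the function name and margin value are assumptions.

import torch
import torch.nn.functional as F

def margin_softmax_loss(logits, target, m=0.5):
    # Subtract a margin m from the target-class logit before the
    # softmax cross-entropy, so the target logit must exceed the
    # others by at least m to incur the same loss as plain softmax.
    adjusted = logits.clone()
    adjusted[torch.arange(logits.size(0)), target] -= m
    return F.cross_entropy(adjusted, target)

# Usage: logits from a CNN head, shape (batch, n_classes).
logits = torch.randn(8, 10, requires_grad=True)
target = torch.randint(0, 10, (8,))
loss = margin_softmax_loss(logits, target)
loss.backward()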

Journal: Neurocomputing 2010
Qinghua Hu Xunjian Che Lei Zhang Daren Yu

Feature selection is considered to be an important preprocessing step in machine learning and pattern recognition, and feature evaluation is the key issue for constructing a feature selection algorithm. In this work, we propose a new concept of neighborhood margin and neighborhood soft margin to measure the minimal distance between different classes. We use the criterion of neighborhood soft ma...
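
One plausible reading of a per-sample neighborhood margin is the Relief-style gap between an example's nearest neighbor from another class and its nearest neighbor from the same class. The sketch below uses that reading purely for illustration; the paper's exact neighborhood and soft-margin definitions may differ.

import numpy as np

def sample_margins(X, y):
    # Relief-style margin per example: distance to the nearest miss
    # (other class) minus distance to the nearest hit (same class).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude self-distance
    same = y[:, None] == y[None, :]
    nearest_hit = np.where(same, D, np.inf).min(axis=1)
    nearest_miss = np.where(~same, D, np.inf).min(axis=1)
    return nearest_miss - nearest_hit

def neighborhood_soft_margin_score(X, y):
    # "Soft" in the sense that negative margins (class overlap) are
    # averaged in rather than treated as hard violations.
    return sample_margins(X, y).mean()

A greedy feature-selection loop would then keep the candidate feature whose addition most increases this score.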

2000
John Shawe-Taylor Nello Cristianini

Generalisation bounds depending on the margin of a classifier are a relatively recent development. They provide an explanation of the performance of state-of-the-art learning systems such as Support Vector Machines (SVM) and AdaBoost. The difficulty with these bounds has been either their dependence on the minimal margin or their agnostic form. The paper presents a technique for correcting those p...

[Chart: number of search results per publication year]