Search results for: kernel trick

Number of results: 52726

2017
Shunfang Wang, Bing Nie, Kun Yue, Yu Fei, Wenjia Li, Dongshu Xu

Kernel discriminant analysis (KDA) is a dimension-reduction and classification algorithm based on the nonlinear kernel trick, offering a novel way to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popu...
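KDA itself is not shipped with common libraries such as scikit-learn, so the sketch below is a hedged stand-in for the same two-stage idea (nonlinear kernel feature map, then a discriminant projection): KernelPCA followed by LDA, with a grid search over the RBF width to show how strongly the kernel parameter drives accuracy. The dataset and all parameter values are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (not the paper's algorithm): approximate KDA's
# kernel-map + discriminant pipeline with KernelPCA -> LDA and
# grid-search the RBF kernel width gamma.
from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)

pipe = Pipeline([
    ("kpca", KernelPCA(n_components=20, kernel="rbf")),
    ("lda", LinearDiscriminantAnalysis()),
])

# gamma is the kernel parameter whose choice dominates performance.
grid = GridSearchCV(pipe, {"kpca__gamma": [1e-3, 1e-2, 1e-1, 1.0]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```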

Journal: Journal of Machine Learning Research 2016
Yu Nishiyama, Kenji Fukumizu

We connect shift-invariant characteristic kernels to infinitely divisible distributions on R. Characteristic kernels play an important role in machine learning applications, since their kernel means can distinguish any two probability measures. The contribution of this paper is twofold. First, we show, using the Lévy–Khintchine formula, that any shift-invariant kernel given by a bounded, continuou...
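For reference, two standard facts behind this line of work (textbook background, not the paper's contribution): Bochner's theorem characterizes bounded continuous shift-invariant kernels as Fourier transforms of finite nonnegative measures, and a kernel is called characteristic when its mean embedding separates probability measures.

```latex
% Bochner's theorem for a shift-invariant kernel k(x,y) = \psi(x-y):
\[
  k(x,y) = \psi(x-y) = \int_{\mathbb{R}^d} e^{\,i\langle\omega,\;x-y\rangle}\,d\Lambda(\omega)
\]
% Kernel mean embedding; k is characteristic iff P \mapsto \mu_P is injective:
\[
  \mu_P = \int k(\cdot,x)\,dP(x), \qquad \mu_P = \mu_Q \;\Longrightarrow\; P = Q
\]
```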

Journal: Applied Soft Computing 2022

Direct multi-task twin support vector machine (DMTSVM) exploits the shared information among multiple correlated tasks and thus achieves better generalization performance. However, it involves a matrix inversion operation when solving the dual problems, which costs much running time. Moreover, the kernel trick cannot be directly utilized in the nonlinear case. To effectively avoid the above drawbacks, a novel nonparallel (MTNP...
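For context, the inversion the abstract refers to already appears in the dual of the standard single-task twin SVM that DMTSVM builds on; with H = [A e] and G = [B e] stacking the two classes (standard TWSVM notation, assumed here rather than taken from this paper), the first dual problem reads:

```latex
\[
  \max_{\alpha}\; e_2^{\top}\alpha \;-\; \tfrac{1}{2}\,\alpha^{\top} G\,(H^{\top}H)^{-1}G^{\top}\alpha,
  \qquad 0 \le \alpha \le c_1 e_2
\]
```

The explicit (H^T H)^{-1} factor is the costly step, and because the inverse sits outside any inner product, a kernel function cannot simply be substituted in the nonlinear case.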

2005
Kiyoung Yang, Cyrus Shahabi

Multivariate time series (MTS) data sets are common in various multimedia, medical, and financial application domains. These applications perform several data-analysis operations on large numbers of MTS data sets, such as similarity search, feature subset selection, clustering, and classification. Inherently, an MTS item has a large number of dimensions. Hence, before applying data mining techniq...

2017
Tu Dinh Nguyen, Trung Le, Hung Bui, Dinh Q. Phung

A typical online kernel learning method faces two fundamental issues: the complexity of dealing with a huge number of observed data points (a.k.a. the curse of kernelization) and the difficulty of learning kernel parameters, which are often assumed to be fixed. Random Fourier features are a recent and effective approach that addresses the former by approximating the shift-invariant kernel function via Boc...
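The random Fourier feature construction (Rahimi and Recht's technique, which the abstract refers to) is compact enough to sketch: by Bochner's theorem the RBF kernel is the Fourier transform of a Gaussian, so sampling frequencies from that Gaussian yields an explicit map z with z(x)·z(y) ≈ k(x, y). The dimensions and gamma below are illustrative assumptions.

```python
# Hedged sketch of random Fourier features for the RBF kernel
# k(x, y) = exp(-gamma * ||x - y||^2).
import numpy as np

def rff_map(X, n_features=2000, gamma=1.0, seed=0):
    """Explicit feature map whose inner products approximate the RBF kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies w ~ N(0, 2*gamma*I) give E[cos(w.(x-y))] = exp(-gamma||x-y||^2).
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_map(X)
approx = Z @ Z.T                                    # approximate Gram matrix
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
exact = np.exp(-sq)                                 # exact Gram matrix, gamma = 1
print(np.abs(approx - exact).max())                 # shrinks as n_features grows
```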

Journal: CoRR 2017
Shuai Zhang, Jianxin Li, Pengtao Xie, Yingchun Zhang, Minglai Shao, Haoyi Zhou, Mengyi Yan

Kernel methods are powerful tools for capturing nonlinear patterns behind data. They implicitly learn high- (even infinite-) dimensional nonlinear features in the Reproducing Kernel Hilbert Space (RKHS) while keeping the computation tractable by leveraging the kernel trick. Classic kernel methods learn a single layer of nonlinear features, whose representational power may be limited. Motivated by rec...
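The kernel trick the abstract leans on is easiest to see in kernel ridge regression, where the (possibly infinite-dimensional) RKHS features are never materialized and only the n x n Gram matrix is needed. A minimal sketch, with illustrative data and a hand-rolled RBF kernel:

```python
# Kernel ridge regression via the kernel trick: training and prediction
# touch only kernel evaluations, never explicit RKHS features.
import numpy as np

def rbf_gram(A, B, gamma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

lam = 1e-2
K = rbf_gram(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # dual coefficients

X_test = np.linspace(-3, 3, 5)[:, None]
print(rbf_gram(X_test, X) @ alpha)                    # predictions from kernels alone
```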

Journal: Neurocomputing 2014
Siamak Mehrkanoon, Xiaolin Huang, Johan A. K. Suykens

This paper introduces a general framework of non-parallel support vector machines, which involves a regularization term, a scatter loss, and a misclassification loss. When dealing with binary problems, the framework with proper losses covers some existing non-parallel classifiers, such as the multisurface proximal support vector machine via generalized eigenvalues, twin support vector machines, and ...
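As a concrete instance of such a loss combination, the first of the two standard twin SVM primal problems (textbook formulation, quoted for reference rather than from this paper) pairs a scatter term over one class with a misclassification loss over the other; A and B stack the samples of the two classes:

```latex
\[
  \min_{w_1,\, b_1,\, \xi}\; \tfrac{1}{2}\,\lVert A w_1 + e_1 b_1 \rVert^2 \;+\; c_1\, e_2^{\top}\xi
  \quad \text{s.t.}\quad -(B w_1 + e_2 b_1) + \xi \ge e_2,\;\; \xi \ge 0
\]
```

Adding a Tikhonov term such as (1/2)(||w_1||^2 + b_1^2) gives the regularized variants that a framework like the one described can subsume.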

2011
Wendelin Böhmer, Steffen Grünewälder, Hannes Nickisch, Klaus Obermayer

This paper develops a kernelized slow feature analysis (SFA) algorithm. SFA is an unsupervised learning method for extracting features that encode latent variables from time series. Generative relationships are usually complex, and current algorithms are either not powerful enough or tend to over-fit. We make use of the kernel trick in combination with sparsification to provide a powerful function...
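For orientation, plain linear SFA (the baseline such a paper kernelizes) fits in a few lines: whiten the signal, then keep the directions along which the discrete time derivative has the least variance. The toy signal below is an illustrative assumption.

```python
# Hedged sketch of linear SFA: slowest features = smallest-eigenvalue
# directions of the covariance of the time derivative, after whitening.
import numpy as np

def linear_sfa(X, n_features=1):
    """X: (T, d) time series; returns the n slowest linear features, (T-1, n)."""
    X = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (eigvec / np.sqrt(eigval))          # whitened signal
    dZ = np.diff(Z, axis=0)                     # discrete time derivative
    dval, dvec = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z[:-1] @ dvec[:, :n_features]        # smallest eigenvalues first

t = np.linspace(0, 4 * np.pi, 500)
slow, fast = np.sin(t), np.sin(15 * t)
X = np.c_[slow + 0.5 * fast, 0.5 * slow - fast]  # mixed observations
print(linear_sfa(X)[:5, 0])                      # recovers slow component up to sign/scale
```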

2005
Roberto Basili, Marco Cammisa, Alessandro Moschitti

Research on document similarity has shown that complex representations are not more accurate than the simple bag-of-words. Term clustering, e.g. using latent semantic indexing, word co-occurrences, or synonym relations from a word ontology, has been shown to be not very effective. In particular, when external prior knowledge, e.g. WordNet, is used to extend the similarity function, the retrieval system...
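The bag-of-words baseline the abstract compares against is simply TF-IDF vectors under cosine similarity, with no term clustering or WordNet expansion; a minimal sketch with toy documents:

```python
# Plain bag-of-words document similarity: TF-IDF + cosine.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "the kernel trick maps data into a feature space",
    "support vector machines use the kernel trick",
    "stock markets fell sharply on friday",
]
tfidf = TfidfVectorizer().fit_transform(docs)
print(cosine_similarity(tfidf))   # the first two documents score highest together
```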

Chart: number of search results per year
