Search results for: principal component analysis (PCA)

Number of results: 700,522

2000
Michael E. Tipping

'Kernel' principal component analysis (PCA) is an elegant nonlinear generalisation of the popular linear data analysis method, where a kernel function implicitly defines a nonlinear transformation into a feature space wherein standard PCA is performed. Unfortunately, the technique is not 'sparse', since the components thus obtained are expressed in terms of kernels associated with every trainin...
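The non-sparsity the abstract points out is easy to see in practice: with scikit-learn's KernelPCA, every extracted component is a combination of kernel evaluations at all training points. A minimal sketch (the RBF kernel and gamma value are illustrative choices, not taken from the paper):

```python
# Toy kernel PCA with scikit-learn; kernel choice and gamma are
# illustrative assumptions, not from Tipping's paper.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))  # toy data, 200 training points

# Each component is expressed through kernels associated with *every*
# training point -- the lack of sparsity the abstract refers to.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.1)
X_2d = kpca.fit_transform(X)
print(X_2d.shape)  # (200, 2)
```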

2009
Yue Guan Jennifer G. Dy

Principal component analysis (PCA) is a popular dimensionality reduction algorithm. However, it is not easy to interpret which of the original features are important based on the principal components. Recent methods improve interpretability by sparsifying PCA through adding an L1 regularizer. In this paper, we introduce a probabilistic formulation for sparse PCA. By presenting sparse PCA as a p...
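scikit-learn ships the L1-regularized variant the abstract mentions as SparsePCA; a minimal sketch of that deterministic formulation, not of the paper's probabilistic one, with an illustrative penalty weight:

```python
# L1-sparsified PCA via scikit-learn's SparsePCA; alpha is an
# illustrative penalty weight, not a value from the paper.
import numpy as np
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))

spca = SparsePCA(n_components=3, alpha=1.0, random_state=0)
spca.fit(X)
# The L1 penalty drives most loadings exactly to zero, so each
# component can be read off from a small set of original features.
print("fraction of zero loadings:", (spca.components_ == 0).mean())
```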

2009
Emmanuel J. Candès Xiaodong Li Yi Ma John Wright

This paper is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; ...
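A common way to solve Principal Component Pursuit is to alternate singular-value thresholding (the low-rank step) with entrywise soft thresholding (the sparse step). A rough sketch of that approach; the step-size heuristic and iteration count are illustrative, and the paper analyses the convex program itself rather than any particular solver:

```python
# Sketch of Principal Component Pursuit: decompose D into low-rank L
# plus sparse S by an augmented-Lagrangian-style alternation.
import numpy as np

def shrink(M, tau):
    """Entrywise soft-thresholding operator."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def pcp(D, n_iter=200):
    m, n = D.shape
    lam = 1.0 / np.sqrt(max(m, n))          # trade-off weight from the PCP objective
    mu = m * n / (4.0 * np.abs(D).sum())    # heuristic step size (assumption)
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                    # dual variable
    for _ in range(n_iter):
        # Low-rank update: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
        L = (U * shrink(sig, 1.0 / mu)) @ Vt
        # Sparse update: entrywise soft thresholding.
        S = shrink(D - L + Y / mu, lam / mu)
        # Dual ascent on the constraint D = L + S.
        Y += mu * (D - L - S)
    return L, S
```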

Journal: Technometrics, 2013
Christophe Croux Peter Filzmoser Heinrich Fritz

A method for principal component analysis is proposed that is sparse and robust at the same time. The sparsity delivers principal components that have loadings on a small number of variables, making them easier to interpret. The robustness makes the analysis resistant to outlying observations. The principal components correspond to directions that maximize a robust measure of the variance, with...

2014
Yash Deshpande Andrea Montanari Emile Richard

Estimating a vector from noisy quadratic observations is a task that arises naturally in many contexts, from dimensionality reduction, to synchronization and phase retrieval problems. It is often the case that additional information is available about the unknown vector (for instance, sparsity, sign or magnitude of its entries). Many authors propose non-convex quadratic optimization problems th...

2014
Yingyu Liang Maria-Florina Balcan Vandana Kanchanapally David P. Woodruff

We study the distributed computing setting in which there are multiple servers, each holding a set of points, who wish to compute functions on the union of their point sets. A key task in this setting is Principal Component Analysis (PCA), in which the servers would like to compute a low-dimensional subspace capturing as much of the variance of the union of their point sets as possi...
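The communication pattern can be sketched in a few lines: each server compresses its local point set to a small SVD summary, and a coordinator runs PCA on the stacked summaries. This only illustrates the setting; the paper's algorithm and its error/communication guarantees are more refined than this naive version:

```python
# Naive distributed PCA sketch: servers send t x d SVD summaries,
# the coordinator extracts a k-dimensional global subspace.
import numpy as np

def local_summary(X, t):
    # Server side: keep only the top-t singular values / right
    # singular vectors of the local point set.
    _, s, Vt = np.linalg.svd(X, full_matrices=False)
    return np.diag(s[:t]) @ Vt[:t]          # t x d matrix

def distributed_pca(blocks, t, k):
    # Coordinator side: stack the summaries and take the top-k
    # right singular vectors as the global subspace estimate.
    stacked = np.vstack([local_summary(X, t) for X in blocks])
    _, _, Vt = np.linalg.svd(stacked, full_matrices=False)
    return Vt[:k]                            # k x d basis

rng = np.random.default_rng(0)
blocks = [rng.normal(size=(500, 10)) for _ in range(4)]  # 4 servers
V = distributed_pca(blocks, t=5, k=2)
print(V.shape)  # (2, 10)
```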

Journal: Computational Statistics & Data Analysis, 2007
Václav Smídl Anthony Quinn

A complete Bayesian framework for Principal Component Analysis (PCA) is proposed in this paper. Previous model-based approaches to PCA were usually based on a factor analysis model with isotropic Gaussian noise. This model does not impose orthogonality constraints, contrary to PCA. In this paper, we propose a new model with orthogonality restrictions, and develop its approximate Bayesian soluti...
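The isotropic-Gaussian factor model the abstract contrasts with is probabilistic PCA, whose maximum-likelihood solution is available in closed form (Tipping & Bishop). A minimal sketch of that baseline, not of the paper's orthogonally constrained variational model:

```python
# Closed-form ML fit of probabilistic PCA (the isotropic-noise
# baseline), following Tipping & Bishop's solution.
import numpy as np

def ppca_ml(X, q):
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                 # sample covariance
    evals, evecs = np.linalg.eigh(S)
    evals, evecs = evals[::-1], evecs[:, ::-1]   # descending order
    sigma2 = evals[q:].mean()                    # noise = mean discarded variance
    W = evecs[:, :q] @ np.sqrt(np.diag(evals[:q] - sigma2))
    return W, sigma2                             # loadings, noise variance
```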

2008
Minh Hoai Nguyen Fernando De la Torre

Kernel Principal Component Analysis (KPCA) is a popular generalization of linear PCA that allows non-linear feature extraction. In KPCA, data in the input space are mapped to a (usually) higher-dimensional feature space where they can be linearly modeled. The feature space is typically induced implicitly by a kernel function, and linear PCA in the feature space is performed via the kernel tric...
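The kernel trick the abstract describes boils down to centring the Gram matrix and eigendecomposing it, with no explicit feature map ever formed. A bare-bones sketch (the RBF kernel and gamma are illustrative assumptions):

```python
# Kernel PCA by hand: centre the Gram matrix in feature space, then
# eigendecompose; the feature map is never computed explicitly.
import numpy as np

def rbf_kernel(X, gamma=0.1):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kpca(X, n_components=2, gamma=0.1):
    n = X.shape[0]
    K = rbf_kernel(X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                          # double-centring = centring in feature space
    evals, evecs = np.linalg.eigh(Kc)
    evals, evecs = evals[::-1], evecs[:, ::-1]
    # Normalise so the implicit feature-space eigenvectors have unit norm.
    alphas = evecs[:, :n_components] / np.sqrt(evals[:n_components])
    return Kc @ alphas                      # projections of the training data
```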

2009

This tutorial is designed to give the reader a short overview of Principal Component Analysis (PCA) using R. PCA is a useful statistical method that has found application in a variety of fields and is a common technique for finding patterns in high-dimensional data. Suppose we are confronted with the following situation: the data we want to work with take the form of a matrix (x_ij), i = 1, …, N, j = 1...
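The tutorial itself works in R; for consistency with the other sketches on this page, here is the same textbook recipe in Python: centre the N x p matrix (x_ij), eigendecompose its sample covariance, and project onto the top directions:

```python
# Textbook PCA on a data matrix X of shape (N, p).
import numpy as np

def pca_scores(X, k):
    Xc = X - X.mean(axis=0)                  # centre each column
    C = Xc.T @ Xc / (len(X) - 1)             # sample covariance matrix
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1][:k]      # indices of the top-k eigenvalues
    return Xc @ evecs[:, order]              # scores in the top-k subspace
```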

2015
Wenzhuo Yang Huan Xu

1. Preliminaries. Theorem A-1 (Theorem 3.1, Chang, 2012). Let A ∈ ℝ^{m×n} be of full column rank with QR factorization A = QR, let ∆A be a perturbation in A, and let A + ∆A = (Q + ∆Q)(R + ∆R) be the QR factorization of A + ∆A. Let P_A and P_{A⊥} be the orthogonal projectors onto the range of A and the orthogonal complement of the range of A, respectively. Let Q⊥ be an orthonormal matrix such that the matrix [Q, Q⊥...
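The setting of Theorem A-1 is easy to probe numerically: perturb a full-column-rank A, re-factorize, and watch ∆Q and ∆R shrink with the perturbation. A quick illustrative check (the bound itself is in Chang (2012); this only exercises the setup):

```python
# Numerical illustration of QR perturbation: ||dQ|| and ||dR|| track
# the size of the perturbation dA.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 4))                  # full column rank with high probability
Q, R = np.linalg.qr(A)
for eps in (1e-2, 1e-4, 1e-6):
    dA = eps * rng.normal(size=A.shape)
    Q2, R2 = np.linalg.qr(A + dA)
    # Align the sign conventions of the two factorizations before comparing.
    signs = np.sign(np.diag(R2)) * np.sign(np.diag(R))
    dQ = np.linalg.norm(Q2 * signs - Q)
    dR = np.linalg.norm(R2 * signs[:, None] - R)
    print(f"eps={eps:.0e}  ||dQ||={dQ:.2e}  ||dR||={dR:.2e}")
```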

[Chart: number of search results per year]