A study on three linear discriminant analysis based methods in small sample size problem
Authors
Abstract
In this paper, we study three Linear Discriminant Analysis (LDA) based methods in the Small Sample Size (SSS) problem: Regularized Discriminant Analysis (RDA), Discriminant Common Vectors (DCV), and Maximal Margin Criterion (MMC). Our contributions are: 1) we reveal that DCV obtains the same projection subspace as both RDA and wMMC (weighted MMC, a general form of MMC) when RDA's regularization parameter tends to zero and wMMC's weight parameter approaches +∞, which establishes close relationships among these three LDA-based methods; 2) we offer efficient algorithms to perform RDA and wMMC in the Principal Component Analysis (PCA) transformed space, which makes them feasible and efficient for applications such as face recognition; 3) we formulate the eigenvalue distribution of wMMC. On one hand, the formulated eigenvalue distribution can guide practitioners in choosing wMMC's projection vectors; on the other hand, the underlying methodology can be employed to analyze the eigenvalue distribution of matrices such as AA^T − BB^T, where A and B have far more rows than columns; and 4) we compare their classification performance on several benchmarks and find that, when the Mean Standard Variance (MSV) criterion is small, DCV obtains classification performance competitive with both RDA and wMMC under optimal parameters, but when MSV is large, DCV generally yields lower classification accuracy than RDA and wMMC under optimal parameters.

Preprint submitted to Elsevier Science, 30 May 2007
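To illustrate contribution 2), the following is a minimal NumPy sketch of the idea of performing a wMMC-style projection in the PCA-transformed space, so that the scatter matrices stay small even when the feature dimension far exceeds the number of samples (the SSS setting). The function name, parameter names, and the specific tolerance are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def wmmc_projection(X, y, w=1.0, n_components=2):
    """Illustrative wMMC-style projection in PCA space.

    Maximizes tr(W^T (S_b - w * S_w) W) over orthonormal W,
    computed inside the range of the centred data so the
    eigenproblem is at most (n_samples - 1) dimensional.
    Names and details are assumptions for this sketch.
    """
    X = np.asarray(X, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean

    # PCA step: an orthonormal basis of the centred data's range,
    # obtained economically via the thin SVD of (samples x features).
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    rank = int(np.sum(s > 1e-10))
    P = Vt[:rank].T                  # (features x rank) PCA basis
    Z = Xc @ P                       # samples in the reduced space

    # Between-class and within-class scatter in the reduced space.
    d = Z.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(y):
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sb += len(Zc) * np.outer(mc, mc)   # global mean of Z is 0
        D = Zc - mc
        Sw += D.T @ D

    # wMMC: top eigenvectors of the symmetric matrix S_b - w * S_w.
    evals, evecs = np.linalg.eigh(Sb - w * Sw)
    order = np.argsort(evals)[::-1]
    W = evecs[:, order[:n_components]]
    return P @ W, mean               # projection mapped back to input space

# Toy usage: two well-separated classes in 50 dimensions with only
# 10 training samples, i.e. an SSS configuration.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 50)), rng.normal(5, 1, (5, 50))])
y = np.array([0] * 5 + [1] * 5)
W, mu = wmmc_projection(X, y, w=1.0, n_components=1)
proj = (X - mu) @ W
```

Restricting the eigenproblem to the PCA range is also the same device hinted at for AA^T − BB^T: every eigenvector with a nonzero eigenvalue lies in the column space of the data, so only a small symmetric matrix ever needs to be diagonalized.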
Similar papers
Feature reduction of hyperspectral images: Discriminant analysis and the first principal component
When the number of training samples is limited, feature reduction plays an important role in classification of hyperspectral images. In this paper, we propose a supervised feature extraction method based on discriminant analysis (DA) which uses the first principal component (PC1) to weight the scatter matrices. The proposed method, called DA-PC1, copes with the small sample size problem and has...
Discriminant Common Vectors Versus Neighbourhood Components Analysis and Laplacianfaces: A comparative study in small sample size problem
Discriminant Common Vecotors (DCV), Neighbourhood Components Analysis (NCA) and Laplacianfaces (LAP) are three recently proposed methods which can effectively learn linear projection matrices for dimensionality reduction in face recognition, where the dimension of the sample space is typically larger than the number of samples in the training set and consequently the so-called small sample size...
Feature extraction of hyperspectral images using boundary semi-labeled samples and hybrid criterion
Feature extraction is a very important preprocessing step for classification of hyperspectral images. The linear discriminant analysis (LDA) method fails to work in small sample size situations. Moreover, LDA has poor efficiency for non-Gaussian data. LDA is optimized by a global criterion. Thus, it is not sufficiently flexible to cope with the multi-modal distributed data. We propose a new fea...
Generalized 2D Fisher Discriminant Analysis
To solve the Small Sample Size (SSS) problem, recent linear discriminant analysis using the 2D matrix-based data representation model has demonstrated its superiority over that using the conventional vector-based data representation model in face recognition [7]. But the explicit reason why the matrix-based model is better than the vectorized model has not been given until now. In this paper, a...
Journal: Pattern Recognition
Volume 41, Issue -
Pages: -
Published: 2008