Feature selection by higher criticism thresholding achieves the optimal phase diagram.

Authors

  • David Donoho
  • Jiashun Jin
Abstract

We consider two-class linear classification in a high-dimensional, small-sample-size setting. Only a small fraction of the features are useful, these being unknown to us, and each useful feature contributes weakly to the classification decision. This was called the rare/weak (RW) model in our previous study (Donoho, D. & Jin, J. 2008 Proc. Natl Acad. Sci. USA 105, 14 790-14 795). We select features by thresholding feature Z-scores. The threshold is set by higher criticism (HC). For 1 ≤ i ≤ N, let π(i) denote the p-value associated with the ith Z-score and π_(i) denote the ith order statistic of the collection of p-values. The HC threshold (HCT) is the order statistic of the Z-score corresponding to the index i maximizing (i/N − π_(i)) / √((i/N)(1 − i/N)). The ideal threshold optimizes the classification error. In that previous study, we showed that HCT was numerically close to the ideal threshold. We formalize an asymptotic framework for studying the RW model, considering a sequence of problems with increasingly many features and relatively fewer observations. We show that, along this sequence, the limiting performance of ideal HCT is essentially just as good as the limiting performance of ideal thresholding. Our results describe the two-dimensional phase space: a diagram with coordinates quantifying 'rare' and 'weak' in the RW model. The phase space can be partitioned into two regions: one where ideal threshold classification is successful, and one where the features are so weak and so rare that it must fail. Surprisingly, the regions where ideal HCT succeeds and fails make exactly the same partition of the phase diagram. Other threshold methods, such as false (feature) discovery rate (FDR) threshold selection, are successful in a substantially smaller region of the phase space than either HCT or ideal thresholding. The FDR and local FDR of the ideal and HC threshold selectors have surprising phase diagrams, which are also described.
Results showing the asymptotic equivalence of HCT with ideal HCT can be found in a forthcoming paper (Donoho, D. & Jin, J. In preparation).
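The HCT rule described in the abstract can be sketched in a few lines: convert each feature Z-score to a two-sided p-value, sort the p-values, evaluate the HC objective (i/N − π_(i)) / √((i/N)(1 − i/N)) at each order statistic, and take as threshold the |Z| of the feature attaining the maximum. The sketch below is an illustration only, not the authors' code; the function name and the restriction of the search to the smallest half of the p-values (a common practical convention that also avoids dividing by zero at i = N) are our assumptions.

```python
import numpy as np
from scipy.stats import norm

def hc_threshold(z, alpha0=0.5):
    """Illustrative higher-criticism threshold for feature Z-scores.

    alpha0: fraction of the smallest p-values over which the HC
    objective is maximized (an assumed practical convention).
    """
    z = np.asarray(z, dtype=float)
    N = len(z)
    # Two-sided p-values for each Z-score under the null N(0, 1).
    p = 2.0 * norm.sf(np.abs(z))
    order = np.argsort(p)          # indices sorting p-values ascending
    p_sorted = p[order]            # the order statistics pi_(i)
    i = np.arange(1, N + 1)
    # HC objective at each order statistic, as given in the abstract.
    hc = (i / N - p_sorted) / np.sqrt((i / N) * (1.0 - i / N))
    k = max(1, int(alpha0 * N))
    i_star = np.argmax(hc[:k])     # maximizing index within the search range
    # Threshold = |Z| of the feature whose p-value attains the maximum;
    # features with |Z| at or above this value are selected.
    return np.abs(z[order[i_star]])
```

A feature is then kept for the linear classifier exactly when its |Z-score| meets or exceeds the returned threshold.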


Similar articles

Feature Selection by Higher Criticism Thresholding: Optimal Phase Diagram

We consider two-class linear classification in a high-dimensional, low-sample size setting. Only a small fraction of the features are useful, the useful features are unknown to us, and each useful feature contributes weakly to the classification decision – this setting was called the rare/weak model (RW Model) in [11]. We select features by thresholding feature z-scores. The threshold is set by...


Privacy-Preserving Data Sharing in High Dimensional Regression and Classification Settings

We focus on the problem of multi-party data sharing in high dimensional data settings where the number of measured features (or the dimension) p is frequently much larger than the number of subjects (or the sample size) n, the so-called p ≫ n scenario that has been the focus of much recent statistical research. Here, we consider data sharing for two interconnected problems in high dimensional dat...


Higher criticism thresholding: Optimal feature selection when useful features are rare and weak.

In important application fields today-genomics and proteomics are examples-selecting a small subset of useful features is crucial for success of Linear Classification Analysis. We study feature selection by thresholding of feature Z-scores and introduce a principle of threshold selection, based on the notion of higher criticism (HC). For i = 1, 2, ..., p, let pi(i) denote the two-sided P-value ...


Thresholding Methods for Feature Selection in Genomics: Higher Criticism versus False Non-discovery Rates

In high-dimensional genomic analysis it is often necessary to conduct feature selection, in order to improve prediction accuracy and to obtain interpretable classifiers. Traditionally, feature selection relies on computer-intensive procedures such as cross-validation. However, recently two approaches have been advocated that both are computationally more efficient: False Non-Discovery Rates (FN...


Higher Criticism Thresholding: Optimal Feature Selection when Useful Features

Motivated by many ambitious modern applications – genomics and proteomics are examples – we consider two-class linear classification in a high-dimensional, low-sample-size setting (a.k.a. p ≫ n). We consider the case where, among a large number of features (dimensions), only a small fraction of them is useful. The useful features are unknown to us, and each of them contributes weakly to the classif...



Journal:
  • Philosophical transactions. Series A, Mathematical, physical, and engineering sciences

Volume 367, Issue 1906

Pages: –

Published: 2009