Sparse Sliced Inverse Regression via Cholesky Matrix Penalization
Authors
Abstract
We introduce a new sparse sliced inverse regression estimator, called Cholesky matrix penalization, and its adaptive version, for achieving sparsity when estimating the directions of the central subspace. The estimators use the Cholesky decomposition of the covariance matrix of the covariates and include a regularization term in the objective function to achieve sparsity in a computationally efficient manner. We establish theoretical values of the tuning parameters that guarantee estimation and variable selection consistency. Furthermore, we propose a projection information criterion to select the tuning parameter of our proposed estimator, and prove that it facilitates selection consistency. The Cholesky matrix penalization estimator inherits the strength of the Matrix Lasso estimator; it has superior performance in numerical studies and can be adapted to other sufficient dimension reduction methods in the literature.
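For context, the base estimator being sparsified is classical sliced inverse regression (Li, 1991). A minimal sketch of plain SIR, without the paper's Cholesky-based penalization (the function name and slicing scheme here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def sir_directions(x, y, n_slices=5, n_dirs=1):
    """Basic sliced inverse regression: estimate central-subspace
    directions from the between-slice covariance of standardized predictors.
    This is the unpenalized baseline, not the Cholesky matrix penalization
    estimator described in the paper."""
    n, p = x.shape
    # Standardize the predictors: z = Sigma^{-1/2} (x - mean)
    xc = x - x.mean(axis=0)
    cov = np.cov(xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
    z = xc @ inv_sqrt
    # Partition the range of y into slices of roughly equal size
    order = np.argsort(y)
    slices = np.array_split(order, n_slices)
    # Weighted between-slice covariance of the slice means of z
    m = np.zeros((p, p))
    for idx in slices:
        mean_h = z[idx].mean(axis=0)
        m += (len(idx) / n) * np.outer(mean_h, mean_h)
    # Leading eigenvectors, mapped back to the original x scale
    _, evecs = np.linalg.eigh(m)
    beta = inv_sqrt @ evecs[:, ::-1][:, :n_dirs]
    return beta / np.linalg.norm(beta, axis=0)
```

The penalized estimators in the paper replace this eigendecomposition step with a regularized objective built on the Cholesky factor of the covariance, which is what yields sparse direction estimates.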
Similar references
Sparse Gaussian Process Regression via L1 Penalization
To handle massive data, a variety of sparse Gaussian Process (GP) methods have been proposed to reduce the computational cost. Many of them essentially map the large dataset into a small set of basis points. A common approach to learn these basis points is evidence maximization. Nevertheless, evidence maximization may lead to overfitting and cause a high computational cost. In this paper, we pr...
Reference curves estimation via Sliced Inverse Regression
In order to obtain reference curves for data sets when the covariate is multidimensional, we propose a new methodology based on dimension-reduction and nonparametric estimation of conditional quantiles. This semiparametric approach combines sliced inverse regression (SIR) and a kernel estimation of conditional quantiles. The convergence of the derived estimator is shown. By a simulation study, ...
Localized Sliced Inverse Regression
We developed localized sliced inverse regression for supervised dimension reduction. It has the advantages of preventing degeneracy, increasing estimation accuracy, and automatic subclass discovery in classification problems. A semisupervised version is proposed for the use of unlabeled data. The utility is illustrated on simulated as well as real data sets.
Student Sliced Inverse Regression
Sliced Inverse Regression (SIR) has been extensively used to reduce the dimension of the predictor space before performing regression. SIR is originally a model free method but it has been shown to actually correspond to the maximum likelihood of an inverse regression model with Gaussian errors. This intrinsic Gaussianity of standard SIR may explain its high sensitivity to outliers as observed ...
Asymptotics of Sliced Inverse Regression
Sliced Inverse Regression is a method for reducing the dimension of the explanatory variables x in non-parametric regression problems. Li (1991) discussed a version of this method which begins with a partition of the range of y into slices so that the conditional covariance matrix of x given y can be estimated by the sample covariance matrix within each slice. After that the mean of the conditi...
Journal
Journal title: Statistica Sinica
Year: 2023
ISSN: 1017-0405, 1996-8507
DOI: https://doi.org/10.5705/ss.202020.0406