Search results for: minimum covariance determinant estimator
Number of results: 267,026
The sandwich estimator, often known as the robust covariance matrix estimator or the empirical covariance matrix estimator, has achieved increasing use with the growing popularity of generalized estimating equations. Its virtue is that it provides consistent estimates of the covariance matrix for parameter estimates even when a parametric model fails to hold, or is not even specified. Surprisin...
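The sandwich (robust) covariance estimator described above can be illustrated for ordinary least squares, where the "bread" is the inverse Gram matrix and the "meat" is built from squared residuals. This is a minimal numpy sketch of the heteroskedasticity-consistent case, not the generalized-estimating-equations form discussed in the abstract; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])
# Heteroskedastic noise: the error variance depends on the first regressor,
# so the usual model-based covariance is misspecified.
y = X @ beta + rng.normal(size=n) * (1.0 + np.abs(X[:, 0]))

# OLS fit
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat

# Sandwich: bread @ meat @ bread, consistent even under heteroskedasticity.
bread = XtX_inv
meat = X.T @ (resid[:, None] ** 2 * X)
sandwich_cov = bread @ meat @ bread

# Naive model-based covariance, valid only under homoskedasticity.
naive_cov = resid @ resid / (n - p) * XtX_inv
```

Comparing `sandwich_cov` with `naive_cov` on data like this shows how the two diverge when the parametric variance model fails to hold.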
We consider the problem of estimating a random vector x, with covariance uncertainties, that is observed through a known linear transformation H and corrupted by additive noise. We first develop the linear estimator that minimizes the worst-case mean-squared error (MSE) across all possible covariance matrices. Although the minimax approach has enjoyed widespread use in the design of robust metho...
We propose a novel algorithm for generalized linear contextual bandits (GLBs) with a regret bound that is sublinear in the time horizon, the minimum eigenvalue of the covariance of contexts, and a lower variance of rewards. In several identified cases, our result is the first to achieve this dependence on dimension without discarding observed rewards: previous approaches achieve it by discarding rewards, whereas ours incorporates rewards from all arms in a double doubly r...
The proposed estimator is a location and shape estimator which generalizes the L1 idea to a multivariate context. Consider a sample x_1, ..., x_n of p-variate observations. Then the estimator is defined as the solution (μ̂, V̂) that yields the minimum of the sum of the distances d_i(μ, V) = sqrt((x_i − μ)' V^{-1} (x_i − μ)), minimized under the constraint that V has determinant 1. The constraint det(V) ...
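The objective above (minimizing the sum of Mahalanobis-type distances subject to det(V) = 1) can be approximated with a simple iteratively reweighted fixed-point scheme: downweight distant points, update the location as a weighted mean and the shape as a weighted scatter rescaled to unit determinant. This is an illustrative sketch, not the paper's actual algorithm; the function name and iteration scheme are assumptions.

```python
import numpy as np

def l1_location_shape(X, n_iter=100, eps=1e-8):
    """Fixed-point sketch of the multivariate L1 location/shape estimator:
    approximately minimize sum_i sqrt((x_i - mu)' V^{-1} (x_i - mu))
    subject to det(V) = 1."""
    n, p = X.shape
    mu = X.mean(axis=0)
    V = np.eye(p)
    for _ in range(n_iter):
        Vinv = np.linalg.inv(V)
        diff = X - mu
        d = np.sqrt(np.einsum("ij,jk,ik->i", diff, Vinv, diff))
        w = 1.0 / np.maximum(d, eps)           # downweight distant points
        mu = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu
        V = (w[:, None] * diff).T @ diff / w.sum()
        V = V / np.linalg.det(V) ** (1.0 / p)  # enforce det(V) = 1
    return mu, V
```

Dividing V by det(V)^(1/p) rescales its eigenvalues uniformly, which is the standard way to project a positive-definite matrix onto the unit-determinant constraint.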
Regularization addresses the problem of unstable covariance matrix estimation from a small sample set in a Gaussian classifier, and estimating multiple regularization parameters is more difficult than estimating a single one. In this paper, KLIM_L covariance matrix estimation is derived theoretically from the MDL (minimum description length) principle for the small-sample problem...
Estimation of large covariance matrices has drawn considerable recent attention, and the theoretical focus so far has mainly been on developing a minimax theory over a fixed parameter space. In this paper, we consider adaptive covariance matrix estimation where the goal is to construct a single procedure which is minimax rate optimal simultaneously over each parameter space in a large collectio...
The paper proposes a new covariance estimator for large covariance matrices when the variables have a natural ordering. Using the Cholesky decomposition of the inverse, we impose a banded structure on the Cholesky factor, and select the bandwidth adaptively for each row of the Cholesky factor, using a novel penalty we call nested Lasso. This structure has more flexibility than regular banding, ...
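The banded-Cholesky idea above rests on the regression interpretation of the modified Cholesky decomposition of the inverse covariance: row t of the factor holds the coefficients from regressing variable t on its predecessors, and banding restricts those regressions to the k nearest predecessors. The sketch below uses a single fixed bandwidth k for all rows, standing in for the paper's adaptive, per-row nested-Lasso selection; names are illustrative.

```python
import numpy as np

def banded_cholesky_precision(X, k):
    """Banded inverse-covariance estimate via the modified Cholesky
    decomposition: Sigma^{-1} = T' D^{-1} T, where row t of the unit
    lower-triangular T comes from regressing variable t on its k
    immediate predecessors."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    T = np.eye(p)                  # unit lower-triangular factor
    d = np.empty(p)                # innovation (residual) variances
    d[0] = Xc[:, 0].var()
    for t in range(1, p):
        lo = max(0, t - k)
        Z = Xc[:, lo:t]            # banded set of predecessors
        phi, *_ = np.linalg.lstsq(Z, Xc[:, t], rcond=None)
        T[t, lo:t] = -phi
        d[t] = (Xc[:, t] - Z @ phi).var()
    omega = T.T @ np.diag(1.0 / d) @ T   # estimated inverse covariance
    return omega, T, d
```

Because each row only touches k predecessors, T is banded by construction, which is what gives the estimator its flexibility/sparsity trade-off relative to regular banding.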
In this paper we propose a new regression interpretation of the Cholesky factor of the covariance matrix, as opposed to the well-known regression interpretation of the Cholesky factor of the inverse covariance, which leads to a new class of regularized covariance estimators suitable for high-dimensional problems. Regularizing the Cholesky factor of the covariance via this regression interpretat...
Estimation of a covariance matrix or its inverse plays a central role in many statistical methods. For these methods to work reliably, estimated matrices must not only be invertible but also well-conditioned. The current paper introduces a novel prior to ensure a well-conditioned maximum a posteriori (MAP) covariance estimate. The prior shrinks the sample covariance estimator towards a stable t...
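The well-conditioning idea in the abstract above can be illustrated with simple linear shrinkage of the sample covariance toward a scaled identity target, which bounds eigenvalues away from zero even when p > n. This is only in the spirit of the MAP estimator described; the fixed `alpha` below replaces the prior-driven shrinkage weight, and the function name is an assumption.

```python
import numpy as np

def shrink_to_identity(X, alpha=0.2):
    """Shrink the sample covariance toward (tr(S)/p) * I.
    Eigenvalues become (1 - alpha) * lambda_i + alpha * mean(lambda),
    so the result is positive definite and better conditioned."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / p * np.eye(p)
    return (1 - alpha) * S + alpha * target
```

With n = 10 observations in p = 20 dimensions, the sample covariance is singular, yet the shrunk estimate is invertible and well-conditioned, which is exactly the property the MAP construction is after.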