Search results for: James-Stein estimator
Number of results: 56551
Stein’s result transformed the common belief in the statistical world that the maximum likelihood estimator, which had been in common use for more than a century, is optimal. Charles Stein showed in 1955 that it is possible to uniformly improve the maximum likelihood estimator (MLE) for the Gaussian model in terms of total squared-error risk when several parameters are estimated simultaneously from indep...
In 1961, James and Stein discovered a remarkable estimator that dominates the maximum-likelihood estimate of the mean of a p-variate normal distribution, provided the dimension p is greater than two. This paper extends the James–Stein estimator and highlights benefits of applying these extensions to adaptive signal processing problems. The main contribution of this paper is the derivation of th...
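The classic James–Stein estimator described above can be sketched in a few lines. This is a minimal illustration, not the paper's adaptive-signal-processing extension; the function name `james_stein` and the assumption of known unit-variance noise are ours:

```python
import numpy as np

def james_stein(x, sigma2=1.0):
    """James-Stein estimate of the mean theta of a single observation
    x ~ N(theta, sigma2 * I_p). Shrinks x toward the origin by a
    data-dependent factor; it dominates the MLE (which is just x)
    in total squared-error risk whenever p > 2."""
    p = x.shape[0]
    assert p > 2, "James-Stein improves on the MLE only for p > 2"
    shrinkage = 1.0 - (p - 2) * sigma2 / np.dot(x, x)
    return shrinkage * x
```

A quick Monte Carlo comparison of summed squared errors against the MLE makes the dominance visible, most dramatically when the true mean vector is near the shrinkage target.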
Charles Stein shocked the statistical world in 1955 with his proof that maximum likelihood estimation methods for Gaussian models, in common use for more than a century, were inadmissible beyond simple one- or two-dimensional situations. These methods are still in use, for good reasons, but Stein-type estimators have pointed the way toward a radically different empirical Bayes approach to high-dim...
We explore the extension of James-Stein type estimators in a direction that enables them to preserve their superiority when the sample size goes to infinity. Instead of shrinking a base estimator towards a fixed point, we shrink it towards a data-dependent point, which makes it possible that the “prior” becomes more accurate as the sample size grows. We provide an analytic expression for the as...
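The simplest data-dependent shrinkage point is the grand mean of the observations, as in Lindley's variant of the James–Stein estimator. The sketch below illustrates that idea under our own assumptions (known unit variance, a single p-vector observation); the function name is hypothetical and this is not the abstract's specific construction:

```python
import numpy as np

def js_toward_grand_mean(x, sigma2=1.0):
    """Shrink the observation x toward its own grand mean x_bar
    rather than a fixed point. One degree of freedom is spent
    estimating x_bar, so the factor uses (p - 3) and dominance over
    the MLE requires p > 3. The positive-part clip avoids
    overshooting past the target."""
    p = x.shape[0]
    assert p > 3, "shrinkage toward the grand mean needs p > 3"
    xbar = x.mean()
    resid = x - xbar
    shrinkage = max(0.0, 1.0 - (p - 3) * sigma2 / np.dot(resid, resid))
    return xbar + shrinkage * resid
```

When the true coordinates cluster around a common value, the data-dependent target tracks that value, and the risk reduction over the MLE persists even where shrinkage toward a fixed origin would help little.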
The Reverse Stein Effect is identified and illustrated: A statistician who shrinks his/her data toward a point chosen without reliable knowledge about the underlying value of the parameter to be estimated, but based instead upon the observed data, will not be protected by the minimax property of shrinkage estimators such as that of James and Stein, but instead will likely incur a greater error th...
Abstract It is now 62 years since the publication of James and Stein’s seminal article on estimation of a multivariate normal mean vector. The paper made a spectacular first impression on the statistical community through its demonstration of the inadmissibility of the maximum likelihood estimator. It continues to be influential, but not for the initial reasons. Empirical Bayes shrinkage estimation, a major topic, found early ju...
Entropy is a fundamental quantity in statistics and machine learning. In this note, we present a novel procedure for statistical learning of entropy from high-dimensional small-sample data. Specifically, we introduce a simple yet very powerful small-sample estimator of the Shannon entropy based on James-Stein-type shrinkage. This results in an estimator that is highly efficient statistically ...
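The James-Stein-type shrinkage idea for entropy can be sketched as follows: shrink the empirical cell frequencies toward the uniform distribution with a data-driven intensity, then plug the shrunken frequencies into the Shannon formula. The intensity formula below follows the Hausser–Strimmer style of estimated-optimal shrinkage; treating it as the abstract's exact method is an assumption on our part:

```python
import numpy as np

def shrinkage_entropy(counts):
    """James-Stein-type shrinkage estimator of Shannon entropy (nats).
    Empirical frequencies are shrunk toward the uniform target 1/p
    with a data-driven intensity lambda, clipped to [0, 1]."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p = counts.size
    theta_ml = counts / n            # empirical frequencies (the MLE)
    target = 1.0 / p                 # uniform shrinkage target
    # estimated optimal shrinkage intensity (Hausser-Strimmer style)
    num = 1.0 - np.sum(theta_ml ** 2)
    den = (n - 1.0) * np.sum((target - theta_ml) ** 2)
    lam = 1.0 if den == 0.0 else min(1.0, max(0.0, num / den))
    theta = lam * target + (1.0 - lam) * theta_ml
    nz = theta > 0
    return -np.sum(theta[nz] * np.log(theta[nz]))
```

Because the shrunken frequencies are bounded away from zero whenever lambda is positive, the plug-in entropy avoids the severe negative bias of the raw MLE in the small-sample, many-cells regime the abstract targets.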