Search results for: minimax estimation

Number of results: 268,563

1996
Jinting Zhang

The minimax kernels for nonparametric estimation of a function and its derivatives are investigated. Our motivation comes from a study of minimax properties of nonparametric kernel estimates of probability densities and their derivatives. The asymptotic expression of the linear maximum risk is established. The corresponding minimax risk depends on the solutions to a kernel variational problem,...
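As a rough illustration of kernel estimation with a variationally optimal kernel, the sketch below (assuming NumPy) uses the Epanechnikov kernel, the classical minimizer of asymptotic mean integrated squared error among nonnegative second-order kernels; the sample, bandwidth rule, and evaluation grid are placeholders, not taken from the paper.

```python
# Sketch: kernel density estimate with the Epanechnikov kernel, a classical
# solution of a kernel variational problem; data and bandwidth are illustrative.
import numpy as np

def epanechnikov(u):
    # K(u) = 0.75 * (1 - u^2) on [-1, 1], zero elsewhere.
    return 0.75 * np.clip(1.0 - u ** 2, 0.0, None)

def kde(x_grid, sample, h):
    # f_hat(x) = (1 / (n h)) * sum_i K((x - X_i) / h)
    u = (x_grid[:, None] - sample[None, :]) / h
    return epanechnikov(u).sum(axis=1) / (len(sample) * h)

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.0, scale=1.0, size=500)
h = 1.06 * sample.std() * len(sample) ** (-1 / 5)   # rule-of-thumb bandwidth
grid = np.linspace(-4, 4, 201)
density = kde(grid, sample, h)
print(density[100])   # estimated density at x = 0 (true value ~ 0.399)
```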

1992
David L. Donoho Iain M. Johnstone Catherine Laredo

We attempt to recover an unknown function from noisy, sampled data. Using orthonormal bases of compactly supported wavelets we develop a nonlinear method which works in the wavelet domain by simple nonlinear shrinkage of the empirical wavelet coefficients. The shrinkage can be tuned to be nearly minimax over any member of a wide range of Triebel- and Besov-type smoothness constraints, and asymptoti...
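A minimal sketch of this kind of wavelet-domain shrinkage, assuming PyWavelets: decompose the noisy samples, soft-threshold the empirical detail coefficients at the universal level σ√(2 log n), and reconstruct. The wavelet ('db4'), the noise-scale estimate, and the test signal are illustrative choices, not the paper's exact construction.

```python
# Sketch: nonlinear shrinkage of empirical wavelet coefficients (assumes PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 1024
t = np.linspace(0, 1, n)
signal = np.sin(4 * np.pi * t) + (t > 0.5)           # smooth part plus a jump
sigma = 0.3
data = signal + sigma * rng.standard_normal(n)       # noisy, sampled observations

# Decompose into empirical wavelet coefficients.
coeffs = pywt.wavedec(data, 'db4', level=5)

# Universal threshold sigma * sqrt(2 log n); sigma is estimated from the
# finest-scale detail coefficients via the median absolute deviation.
sigma_hat = np.median(np.abs(coeffs[-1])) / 0.6745
thresh = sigma_hat * np.sqrt(2 * np.log(n))

# Soft-threshold the detail coefficients, keep the coarse approximation intact.
shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
estimate = pywt.waverec(shrunk, 'db4')[:n]

print("RMSE noisy :", np.sqrt(np.mean((data - signal) ** 2)))
print("RMSE shrunk:", np.sqrt(np.mean((estimate - signal) ** 2)))
```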

2004
Sam Efromovich

The theory of adaptive estimation and oracle inequalities for the case of Gaussian-shift–finite-interval experiments has made significant progress in recent years. In particular, sharp-minimax adaptive estimators and exact exponential-type oracle inequalities have been suggested for a vast set of functions including analytic and Sobolev with any positive index as well as for Efromovich–Pinsker ...
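A rough sketch, assuming a cosine basis and NumPy, of blockwise shrinkage in the spirit of Efromovich–Pinsker-type estimators: empirical coefficients are grouped into blocks and each block is shrunk by a data-driven factor. The block boundaries, basis, and test function are illustrative assumptions, not the paper's construction.

```python
# Sketch: block shrinkage of empirical cosine coefficients (illustrative only).
import numpy as np

rng = np.random.default_rng(6)
n = 500
x = (np.arange(n) + 0.5) / n
f = np.sin(2 * np.pi * x) + 0.3 * np.cos(6 * np.pi * x)
sigma = 0.5
y = f + sigma * rng.standard_normal(n)

# Empirical coefficients in the cosine basis phi_0 = 1, phi_j = sqrt(2) cos(pi j x).
J = 100
basis = np.vstack([np.ones(n)] + [np.sqrt(2) * np.cos(np.pi * j * x) for j in range(1, J)])
theta_hat = basis @ y / n                       # noisy coefficients, variance ~ sigma^2 / n

# Shrink whole blocks of coefficients by the factor (1 - block_noise / block_energy)_+.
blocks = [range(1, 5), range(5, 15), range(15, 40), range(40, J)]
theta_shrunk = theta_hat.copy()
for blk in blocks:
    idx = np.array(list(blk))
    energy = np.sum(theta_hat[idx] ** 2)
    block_noise = len(idx) * sigma ** 2 / n
    theta_shrunk[idx] *= max(0.0, 1.0 - block_noise / energy)

f_hat = basis.T @ theta_shrunk
print("RMSE raw projection:", np.sqrt(np.mean((basis.T @ theta_hat - f) ** 2)))
print("RMSE block-shrunk  :", np.sqrt(np.mean((f_hat - f) ** 2)))
```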

2013
T. Tony Cai Harrison H. Zhou

This paper considers estimation of sparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses. A major focus is on the derivation of a rate sharp minimax lower bound. The problem exhibits new features that are significantly different from those that occur in the conventional nonparametric function estimation pro...

2012
Tony Cai Harrison H. Zhou

This paper considers estimation of sparse covariance matrices and establishes the optimal rate of convergence under a range of matrix operator norm and Bregman divergence losses. A major focus is on the derivation of a rate sharp minimax lower bound. The problem exhibits new features that are significantly different from those that occur in the conventional nonparametric function estimation pro...
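A minimal sketch, assuming NumPy, of the kind of thresholding estimator studied in this sparse-covariance literature: off-diagonal entries of the sample covariance below a level of order √(log p / n) are set to zero. The dimensions, the true covariance, and the threshold constant are illustrative, not the paper's procedure.

```python
# Sketch: hard-thresholding of the sample covariance matrix (illustrative data).
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 50

# A sparse true covariance: identity plus a few off-diagonal entries.
sigma = np.eye(p)
sigma[0, 1] = sigma[1, 0] = 0.4
sigma[2, 3] = sigma[3, 2] = 0.3

x = rng.multivariate_normal(np.zeros(p), sigma, size=n)
s = np.cov(x, rowvar=False)                     # sample covariance

# Threshold entries at a level of order sqrt(log p / n); keep the diagonal.
lam = 2.0 * np.sqrt(np.log(p) / n)
s_thresh = np.where(np.abs(s) >= lam, s, 0.0)
np.fill_diagonal(s_thresh, np.diag(s))

print("operator-norm error, sample:     ", np.linalg.norm(s - sigma, 2))
print("operator-norm error, thresholded:", np.linalg.norm(s_thresh - sigma, 2))
```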

1997
G. Golubev

The problem of estimating the finite-dimensional parameter in a partial linear model is considered. We derive upper and lower bounds for the second-order minimax risk and show that the second-order minimax estimator is a penalized maximum likelihood estimator. It is well known that the performance of the estimator depends on the choice of a smoothing parameter. We propose a practically fe...

2015
Hisayuki Tsukuma

This paper addresses the problems of estimating the normal covariance and precision matrices. A commutator subgroup of lower triangular matrices is considered for deriving a class of invariant estimators. The class shows inadmissibility of the best invariant and minimax estimator of the covariance matrix relative to quadratic loss. Also, in estimation of the precision matrix, a dominance result...

2002
Johan Löfberg

A new approach to minimax MPC for systems with bounded external system disturbances and measurement errors is introduced. It is shown that joint deterministic state estimation and minimax MPC can be written as an optimization problem with linear and quadratic matrix inequalities. By linearizing the quadratic matrix inequality, a semidefinite program is obtained. A simulation study indicates tha...
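Not the paper's MPC formulation, but a toy CVXPY sketch of the linearization device it refers to: a quadratic inequality t ≥ ‖Ax − b‖² is rewritten, via a Schur complement, as the linear matrix inequality [[t, (Ax−b)ᵀ], [Ax−b, I]] ⪰ 0, turning the problem into a semidefinite program. A and b are random placeholders.

```python
# Sketch: linearizing a quadratic inequality into an LMI via the Schur complement.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

x = cp.Variable(n)
t = cp.Variable((1, 1))

# Schur complement: t >= ||A x - b||^2  <=>  [[t, r^T], [r, I]] >> 0, r = A x - b.
residual = cp.reshape(A @ x - b, (m, 1))
schur_block = cp.bmat([[t, residual.T],
                       [residual, np.eye(m)]])

prob = cp.Problem(cp.Minimize(t[0, 0]), [schur_block >> 0])
prob.solve()

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print("SDP optimum t           :", float(t.value))
print("least-squares residual^2:", float(np.sum((A @ x_ls - b) ** 2)))
```

The last two lines check the construction: minimizing t subject to the LMI reproduces the least-squares residual, confirming the Schur-complement rewrite is exact.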

1996
Yazhen Wang

In this article we study function estimation via wavelet shrinkage for data with long-range dependence. We propose a fractional Gaussian noise model to approximate nonparametric regression with long-range dependence and establish asymptotics for minimax risks. Because of long-range dependence, the minimax risk and the minimax linear risk converge to zero at rates that differ from those for data...
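A sketch, assuming PyWavelets, of the usual practical adaptation when the noise is dependent: thresholds are set level by level, since coefficient variances then differ across resolution levels. The AR(1) noise below is only a crude stand-in for fractional Gaussian noise, and the wavelet and signal are illustrative.

```python
# Sketch: level-dependent soft thresholding for dependent noise (illustrative).
import numpy as np
import pywt

rng = np.random.default_rng(4)
n = 1024
t = np.linspace(0, 1, n)
signal = np.cos(2 * np.pi * t) + 0.5 * (t > 0.3)

# Strongly correlated AR(1) noise as a placeholder for long-range dependent errors.
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.9 * noise[i - 1] + rng.standard_normal()
data = signal + 0.1 * noise

coeffs = pywt.wavedec(data, 'sym8', level=5)
shrunk = [coeffs[0]]
for detail in coeffs[1:]:
    # Estimate the noise scale separately at each level (MAD), then apply the
    # universal threshold for that level.
    sigma_j = np.median(np.abs(detail)) / 0.6745
    lam_j = sigma_j * np.sqrt(2 * np.log(len(detail)))
    shrunk.append(pywt.threshold(detail, lam_j, mode='soft'))

estimate = pywt.waverec(shrunk, 'sym8')[:n]
print("RMSE noisy :", np.sqrt(np.mean((data - signal) ** 2)))
print("RMSE shrunk:", np.sqrt(np.mean((estimate - signal) ** 2)))
```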

1994
David L. Donoho Iain M. Johnstone

Consider estimating the mean vector θ from data N_n(θ, I) with l_q norm loss, q ≥ 1, when θ is known to lie in an n-dimensional l_p ball, p ∈ (0, ∞). For large n, the ratio of minimax linear risk to minimax risk can be arbitrarily large if p < q. Obvious exceptions aside, the limiting ratio equals 1 only if p = q = 2. Our arguments are mostly indirect, involving a reduction to a univariate Bayes minimax ...
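A small NumPy Monte Carlo, under assumed dimensions and sparsity, illustrating the gap the abstract describes: for a sparse mean (small p) under squared error (q = 2), even the oracle linear shrinker c·y loses to nonlinear soft thresholding.

```python
# Sketch: oracle linear shrinkage vs. soft thresholding for a sparse mean vector.
import numpy as np

rng = np.random.default_rng(5)
n, k, reps = 1024, 16, 200         # dimension, number of nonzero coordinates, replications
theta = np.zeros(n)
theta[:k] = 8.0                    # a few large coordinates, the rest zero

# Oracle linear shrinker c*y: risk (1-c)^2 ||theta||^2 + c^2 n, minimized over c.
c = np.dot(theta, theta) / (np.dot(theta, theta) + n)
lam = np.sqrt(2 * np.log(n))       # universal threshold

risk_lin, risk_thr = 0.0, 0.0
for _ in range(reps):
    y = theta + rng.standard_normal(n)                    # y ~ N_n(theta, I)
    lin = c * y
    thr = np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)   # soft thresholding
    risk_lin += np.sum((lin - theta) ** 2)
    risk_thr += np.sum((thr - theta) ** 2)

print("oracle-linear risk :", risk_lin / reps)
print("soft-threshold risk:", risk_thr / reps)
```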

Chart: number of search results per year