Entropies and Rates of Convergence for Maximum Likelihood and Bayes Estimation for Mixtures of Normal Densities
Authors
Abstract
We study the rates of convergence of the maximum likelihood estimator (MLE) and posterior distribution in density estimation problems, where the densities are location or location-scale mixtures of normal distributions with the scale parameter lying between two positive numbers. The true density is also assumed to lie in this class, with the true mixing distribution either compactly supported or having sub-Gaussian tails. We obtain bounds for Hellinger bracketing entropies for this class, and from these bounds we deduce the convergence rates of (sieve) MLEs in Hellinger distance. The rate turns out to be (log n)^κ/√n, where κ ≥ 1 is a constant that depends on the type of mixtures and the choice of the sieve. Next, we consider a Dirichlet mixture of normals as a prior on the unknown density. We estimate the prior probability of a certain Kullback-Leibler type neighborhood and then invoke a general theorem that computes the posterior convergence rate in terms of the growth rate of the Hellinger entropy and the concentration rate of the prior. The posterior distribution is also seen to converge at the rate (log n)^κ/√n, where κ now depends on the tail behavior of the base measure of the Dirichlet process.
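The Dirichlet mixture-of-normals prior discussed in the abstract can be illustrated by a truncated stick-breaking draw: a random discrete mixing distribution is sampled from the Dirichlet process and convolved with a normal kernel. This is a minimal sketch only; the base measure N(0, 1), the truncation level, and all parameter names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def dp_mixture_density(x, alpha=1.0, sigma=1.0, n_atoms=200, seed=None):
    """Sample one density from a Dirichlet-process location mixture of normals.

    Truncated stick-breaking construction: v_k ~ Beta(1, alpha),
    weights w_k = v_k * prod_{j<k} (1 - v_j), atom locations theta_k
    drawn from the base measure G0 = N(0, 1) (an illustrative assumption).
    Returns the mixture density sum_k w_k * N(x | theta_k, sigma^2)
    evaluated on the grid x.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, alpha, size=n_atoms)
    # Stick-breaking weights: w_1 = v_1, w_k = v_k * (1-v_1)...(1-v_{k-1})
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    theta = rng.standard_normal(n_atoms)  # base measure G0 = N(0, 1)
    x = np.asarray(x, dtype=float)
    # Normal kernels, one column per atom
    kern = np.exp(-0.5 * ((x[:, None] - theta[None, :]) / sigma) ** 2)
    kern /= sigma * np.sqrt(2.0 * np.pi)
    return kern @ w

if __name__ == "__main__":
    xs = np.linspace(-6.0, 6.0, 1201)
    f = dp_mixture_density(xs, alpha=1.0, sigma=1.0, seed=0)
    # Rectangle-rule check that the draw is (approximately) a density
    print(round(float(f.sum() * (xs[1] - xs[0])), 2))
```

With a moderate truncation level the leftover stick mass is negligible, so each draw integrates to approximately one; posterior computation for such priors in practice uses MCMC or variational schemes rather than this direct draw.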
Similar Papers
Convergence Rates of Parameter Estimation for Some Weakly Identifiable Finite Mixtures by Nhat Ho
We establish minimax lower bounds and maximum likelihood convergence rates of parameter estimation for mean-covariance multivariate Gaussian mixtures, shape-rate Gamma mixtures, and some variants of finite mixture models, including the setting where the number of mixing components is bounded but unknown. These models belong to what we call "weakly identifiable" classes, which exhibit specific i...
Estimating a Bounded Normal Mean Under the LINEX Loss Function
Let X be a random variable from a normal distribution with unknown mean θ and known variance σ². In many practical situations, θ is known in advance to lie in an interval, say [−m, m], for some m > 0. Since the usual estimator of θ, i.e., X, is inadmissible under the LINEX loss function, finding competitors for X becomes worthwhile. The only study in the literature considered the problem of min...
Improving the Performance of Bayesian Estimation Methods in Estimations of Shift Point and Comparison with MLE Approach
A Bayesian analysis is used to detect a change-point in a sequence of independent random variables from exponential distributions. In this paper, we try to estimate the change point which occurs in a sequence of independent exponential observations. The Bayes estimators are derived for the change point, the rate of the exponential distribution before the shift, and the rate of the exponential distribution after s...
On strong identifiability and convergence rates of parameter estimation in finite mixtures
Abstract: This paper studies identifiability and convergence behaviors for parameters of multiple types, including matrix-variate ones, that arise in finite mixtures, and the effects of model fitting with extra mixing components. We consider several notions of strong identifiability in a matrix-variate setting, and use them to establish sharp inequalities relating the distance of mixture densit...
On strong identifiability and optimal rates of parameter estimation in finite mixtures
Abstract: This paper studies identifiability and convergence behaviors for parameters of multiple types, including matrix-variate ones, that arise in finite mixtures, and the effects of model fitting with extra mixing components. We consider several notions of strong identifiability in a matrix-variate setting, and use them to establish sharp inequalities relating the distance of mixture densit...