Comments on “A foundational justification for a weighted likelihood approach to inference”, by
Authors
Abstract
Non-additive probability goes back to the very beginning of probability theory: the work of Jacob Bernoulli. Bernoulli's calculus for combining arguments allowed both sides of a question to attain only small or zero probability, and he also thought the probabilities for the two sides might sometimes add to more than one (Shafer 1978). Twentieth-century non-additive probability has roots in both mathematics and statistics. On the mathematical side, it is natural to generalize measure-theoretic probability by interpreting upper and lower bounds on the measure of a non-measurable set as the set's non-additive "upper and lower probabilities". On the statistical side, it is natural to try to use the greater flexibility of upper and lower probabilities in an effort to find better solutions to problems of inference. A. P. Dempster (1968) and Peter Walley (1991), perhaps the most influential innovators in this domain, both proposed generalizations of Bayesian inference. In my work on the "Dempster-Shafer theory" in the 1970s and 1980s (Shafer 1976), I called the lower probability (P_* or Bel) a degree of support or belief. It measures the strength of evidence for an event but does not necessarily have a betting interpretation. The upper probability (P^* or Pl) I called "plausibility". An event or proposition is plausible to the extent its denial is not supported.
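The relationship between belief and plausibility described above can be made concrete with a small sketch. The following example is not from the paper under discussion; it is a minimal illustration of Dempster-Shafer belief (Bel) and plausibility (Pl) computed from a basic mass assignment, using an invented toy frame of discernment.

```python
def bel(mass, event):
    """Belief: total mass of focal sets wholly contained in the event."""
    return sum(m for focal, m in mass.items() if focal <= event)

def pl(mass, event):
    """Plausibility: total mass of focal sets that intersect the event."""
    return sum(m for focal, m in mass.items() if focal & event)

# Toy frame {a, b, c}; masses on subsets sum to 1 (values are invented).
mass = {
    frozenset({"a"}): 0.4,
    frozenset({"a", "b"}): 0.3,
    frozenset({"a", "b", "c"}): 0.3,
}

event = frozenset({"a"})
denial = frozenset({"b", "c"})

print(bel(mass, event))       # 0.4 — only {a} is contained in the event
print(pl(mass, event))        # ~1.0 — every focal set intersects {a}
print(1 - bel(mass, denial))  # equals Pl(event): Pl(A) = 1 - Bel(not A)
```

The last line shows the sense in which an event is plausible to the extent its denial is unsupported: plausibility is one minus the belief committed to the complement.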
Similar Articles
Rejoinder on "Likelihood-based belief function: Justification and some extensions to low-quality data"
This note is a rejoinder to comments by Dubois and Moral about my paper “Likelihood-based belief function: justification and some extensions to low-quality data” published in this issue. The main comments concern (1) the axiomatic justification for defining a consonant belief function in the parameter space from the likelihood function and (2) the Bayesian treatment of statistical inference fro...
A Bayesian Nominal Regression Model with Random Effects for Analysing Tehran Labor Force Survey Data
Large survey data are often accompanied by sampling weights that reflect the unequal probabilities of selecting samples in complex sampling. Sampling weights act as an expansion factor that, by scaling the subjects, makes the sample representative of the population. The quasi-maximum likelihood method is one of the approaches for considering sampling weights in the frequentist framewo...
Pseudo-Likelihood Inference Underestimates Model Uncertainty: Evidence from Bayesian Nearest Neighbours
When using the K-nearest neighbours (KNN) method, one often ignores the uncertainty in the choice of K. To account for such uncertainty, Bayesian KNN (BKNN) has been proposed and studied (Holmes and Adams 2002; Cucala et al. 2009). We present some evidence to show that the pseudo-likelihood approach for BKNN, even after being corrected by Cucala et al. (2009), still significantly underest...
Accurate Inference for the Mean of the Poisson-Exponential Distribution
Although the random sum distribution has been well studied in probability theory, inference for the mean of such a distribution is very limited in the literature. In this paper, two approaches are proposed to obtain inference for the mean of the Poisson-Exponential distribution. Both proposed approaches require the log-likelihood function of the Poisson-Exponential distribution, but the exact for...
Inference for the Type-II Generalized Logistic Distribution with Progressive Hybrid Censoring
This article presents the analysis of Type-II hybrid progressively censored data when the lifetimes of the items follow the Type-II generalized logistic distribution. Maximum likelihood estimators (MLEs) are investigated for estimating the location and scale parameters. It is observed that the MLEs cannot be obtained in explicit form. We provide the approximate maximum likelihood...