Search results for: divergence time estimation
Number of results: 2136009. Filter results by year:
This paper examines MMLD-based approximations for the inference of two univariate probability densities: the geometric distribution, parameterised in terms of a mean parameter, and the Poisson distribution. The focus is on both parameter estimation and hypothesis testing properties of the approximation. The new parameter estimators are compared to the MML87 estimators in terms of bias, squared ...
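The bias comparison described above can be illustrated by simulation. The sketch below is not the MML87 or MMLD estimator (those require the full message-length construction); it only shows the methodology of measuring an estimator's bias by Monte Carlo, here for the maximum-likelihood estimator of the geometric distribution's success probability:

```python
import math
import random

def simulate_bias(p=0.3, n=20, trials=20000, seed=0):
    """Monte Carlo estimate of the bias of the maximum-likelihood
    estimator p_hat = n / (n + sum(x)) for the geometric distribution
    on {0, 1, ...} with success probability p (mean mu = (1 - p) / p)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # inverse-CDF sampling: number of failures before the first success
        s = sum(int(math.log(rng.random()) / math.log(1.0 - p))
                for _ in range(n))
        total += n / (n + s)
    return total / trials - p
```

By Jensen's inequality the ML estimator overestimates p, so the returned bias is positive; the same simulation scheme can score any competing estimator.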
Contamination of a sampled distribution, for example by a heavy-tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to “tilt” away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimises a measure of the new distri...
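The tilting idea can be sketched with a simple reweighting: points far from the bulk of the data receive exponentially small weight. This is a generic illustration only; the paper's construction chooses the tilt by minimising a distance measure via the weighted bootstrap, which the sketch does not attempt:

```python
import math
import statistics

def tilted_mean(xs, lam=0.5):
    """Weighted mean that tilts away from contamination: points far
    from the median get exponentially small weight. (Illustrative
    only; lam controls how hard the tilt pushes against outliers.)"""
    med = statistics.median(xs)
    w = [math.exp(-lam * abs(x - med)) for x in xs]
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
```

On a sample contaminated by a single heavy-tailed draw, the tilted mean stays near the bulk of the data while the ordinary mean is dragged toward the outlier.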
Generalisation error estimation is an important issue in machine learning. Cross-validation, traditionally used for this purpose, requires building multiple models and repeating the whole procedure many times to produce reliable error estimates. It is, however, possible to estimate the error accurately using only a single model, if the training and test data are chosen appropriately. This ...
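A minimal single-model error estimate is a hold-out split: train once, score once. This sketch shows only the plain hold-out baseline; the paper's contribution concerns how the training and test data are chosen, which the sketch leaves to the caller:

```python
import random

def holdout_error(data, labels, fit, predict, test_frac=0.3, seed=0):
    """Estimate generalisation error with a single model: hold out a
    random fraction of the data, train once on the rest, and measure
    the misclassification rate on the held-out points."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    n_test = max(1, int(len(idx) * test_frac))
    test, train = idx[:n_test], idx[n_test:]
    model = fit([data[i] for i in train], [labels[i] for i in train])
    mistakes = sum(predict(model, data[i]) != labels[i] for i in test)
    return mistakes / n_test
```

`fit` and `predict` are caller-supplied placeholders, so the same routine works for any learner.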
This paper addresses the estimation of an additive functional of φ, defined as θ(P; φ) = Σ_{i=1}^{k} φ(p_i), given n i.i.d. random samples drawn from a discrete distribution P = (p_1, ..., p_k) with alphabet size k. We revealed in the previous paper [1] that the minimax optimal rate for this problem is characterized by the divergence speed of the fourth derivative of φ in a range o...
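The simplest estimator of such an additive functional is the plug-in: replace each p_i by its empirical frequency. This is only the naive baseline (it is known to be suboptimal in the large-alphabet regime the paper studies), but it makes the object θ(P; φ) = Σ φ(p_i) concrete:

```python
import math
from collections import Counter

def plugin_additive(samples, phi):
    """Plug-in estimate of theta(P; phi) = sum_i phi(p_i):
    apply phi to each empirical frequency and sum."""
    n = len(samples)
    counts = Counter(samples)
    return sum(phi(c / n) for c in counts.values())

# phi(p) = -p * log(p) recovers Shannon entropy as a special case
H = plugin_additive("aabbbb", lambda p: -p * math.log(p))
```

For the sample "aabbbb" the empirical frequencies are 1/3 and 2/3, so H equals the entropy of that two-point distribution, about 0.6365 nats.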
Estimating distributions over large alphabets is a fundamental machine-learning tenet. Yet no method is known to estimate all distributions well. For example, add-constant estimators are nearly min-max optimal but often perform poorly in practice, and practical estimators such as absolute discounting, Jelinek-Mercer, and Good-Turing are not known to be near optimal for essentially any distribut...
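An add-constant estimator, the near-min-max-optimal family mentioned above, can be written in a few lines. The choice β = 0.5 below is the Krichevsky–Trofimov rule; nothing else in the sketch is specific to the paper:

```python
from collections import Counter

def add_constant(samples, alphabet, beta=0.5):
    """Add-constant (add-beta) estimate over a known alphabet of size k:
    p_hat(x) = (count(x) + beta) / (n + beta * k)."""
    n = len(samples)
    k = len(alphabet)
    counts = Counter(samples)
    return {x: (counts[x] + beta) / (n + beta * k) for x in alphabet}
```

Every symbol, seen or unseen, gets positive probability, and the estimates sum to one by construction; the practical weakness the abstract alludes to is that a single constant β handles frequent and rare symbols equally.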
Let f_n denote a kernel density estimator of a continuous density f in d dimensions, bounded and positive. Let Ψ(t) be a positive continuous function such that ‖Ψ f^β‖_∞ < ∞ for some 0 < β < 1/2. Under natural smoothness conditions, necessary and sufficient conditions for the sequence √(n h_n^d / (2|log h_n|)) ‖Ψ(t)(f_n(t) − Ef_n(t))‖_∞ to be stochastically bounded and to converge a.s. to a constant are obtained...
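The estimator f_n itself is standard. A one-dimensional Gaussian-kernel sketch (the abstract's result concerns d dimensions and the bandwidth sequence h_n; this only shows the pointwise formula f_n(x) = (1/(n h)) Σ K((x − X_i)/h)):

```python
import math

def kde(x, data, h):
    """Gaussian kernel density estimate at x with bandwidth h:
    f_n(x) = (1 / (n * h)) * sum_i K((x - X_i) / h)."""
    n = len(data)
    K = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)
    return sum(K((x - xi) / h) for xi in data) / (n * h)
```

With a single data point at the origin and h = 1, the estimate at 0 is the Gaussian peak 1/√(2π) ≈ 0.3989.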
The ratio of two probability densities is called the importance, and its estimation has attracted a great deal of attention recently, since the importance can be used for various data-processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an...
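For contrast, the naive two-step baseline that direct methods such as KLIEP are designed to improve on estimates each density separately with a KDE and divides. The sketch below is that baseline, not the paper's GMM method:

```python
import math

def _kde(x, data, h):
    # Gaussian kernel density estimate at x with bandwidth h
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2)
               for xi in data) / (len(data) * h * math.sqrt(2.0 * math.pi))

def ratio_estimate(x, num_samples, den_samples, h=0.5):
    """Naive importance estimate w(x) = f_num(x) / f_den(x) built from
    two separately estimated densities. Dividing two noisy estimates
    magnifies error -- the weakness direct ratio estimation avoids."""
    return _kde(x, num_samples, h) / _kde(x, den_samples, h)
```

When numerator and denominator samples coincide, the estimated ratio is identically 1, a quick sanity check.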
While the general theory of recursive Bayesian estimation of dynamic models is well developed, its practical implementation is restricted to a narrow class of models, typically models with linear dynamics and Gaussian stochastics. The theoretically optimal solution is infeasible for non-linear and/or non-Gaussian models due to its excessive demands on computational memory and time. Parameter es...
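The standard feasible approximation for non-linear/non-Gaussian models is sequential Monte Carlo. A bootstrap particle filter for an assumed toy model (1-D random-walk state, Gaussian observation noise; the model and all constants are illustrative, not from the paper):

```python
import math
import random

def particle_filter(observations, n_particles=500, q=0.1, r=0.5, seed=1):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0, q^2),
    y_t = x_t + N(0, r^2). Returns the posterior-mean state estimate
    after each observation. Propagate -> weight -> resample."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate each particle through the state dynamics
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # weight by the observation likelihood (Gaussian, up to a constant)
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in particles]
        s = sum(w)
        w = [wi / s for wi in w]
        estimates.append(sum(wi * pi for wi, pi in zip(w, particles)))
        # multinomial resampling to combat weight degeneracy
        particles = rng.choices(particles, weights=w, k=n_particles)
    return estimates
```

Only the dynamics and likelihood lines depend on the model, which is why the scheme extends to the non-linear, non-Gaussian cases where the closed-form optimal filter is infeasible; its cost grows with the particle count rather than with an exact posterior representation.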