Search results for: divergence time estimation

Number of results: 2136009

2007
Daniel F. Schmidt, Enes Makalic

This paper examines MMLD-based approximations for the inference of two univariate probability densities: the geometric distribution, parameterised in terms of a mean parameter, and the Poisson distribution. The focus is on both parameter estimation and hypothesis testing properties of the approximation. The new parameter estimators are compared to the MML87 estimators in terms of bias, squared ...

1998
Peter Hall, Brett Presnell

Contamination of a sampled distribution, for example by a heavy-tailed distribution, can degrade the performance of a statistical estimator. We suggest a general approach to alleviating this problem, using a version of the weighted bootstrap. The idea is to “tilt” away from the contaminated distribution by a given (but arbitrary) amount, in a direction that minimises a measure of the new distri...
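Hall and Presnell's exact tilting criterion is not reproduced in this snippet; as a rough Python sketch of the general idea, one can downweight observations via exponential tilting and then draw a weighted bootstrap. The tilting parameter `t` and the Cauchy-contaminated sample below are illustrative assumptions, not the authors' construction:

```python
import numpy as np

def tilted_weights(x, t):
    """Exponentially tilted resampling weights w_i proportional to exp(t * x_i)."""
    w = np.exp(t * (x - x.max()))  # subtract max for numerical stability
    return w / w.sum()

def weighted_bootstrap(x, weights, n_boot, rng):
    """Draw bootstrap resamples of x with non-uniform resampling probabilities."""
    n = len(x)
    idx = rng.choice(n, size=(n_boot, n), p=weights)
    return x[idx]

rng = np.random.default_rng(0)
# 95 clean Gaussian points contaminated by 5 heavy-tailed Cauchy points
x = np.concatenate([rng.normal(0.0, 1.0, 95), rng.standard_cauchy(5)])
w = tilted_weights(x, t=-0.1)  # mild downweighting of large observations
samples = weighted_bootstrap(x, w, n_boot=200, rng=rng)
means = samples.mean(axis=1)
```

Choosing `t` by minimising a distance from the empirical distribution, as the abstract describes, would replace the fixed `t=-0.1` with an optimisation step.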

Journal: :Entropy 2011
Marcin Budka, Bogdan Gabrys, Katarzyna Musial

Generalisation error estimation is an important issue in machine learning. Cross-validation traditionally used for this purpose requires building multiple models and repeating the whole procedure many times in order to produce reliable error estimates. It is however possible to accurately estimate the error using only a single model, if the training and test data are chosen appropriately. This ...
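For reference, the multiple-model cross-validation baseline that the paper seeks to avoid can be sketched in a few lines of Python; the line-fitting model and synthetic data here are illustrative assumptions:

```python
import numpy as np

def kfold_mse(x, y, k, fit, predict, rng):
    """Standard k-fold cross-validation estimate of generalisation error:
    k models are built, each tested on the fold it did not see."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(x[train], y[train])
        errs.append(np.mean((predict(model, x[test]) - y[test]) ** 2))
    return float(np.mean(errs))

# least-squares line fit as the (assumed) model
fit = lambda x, y: np.polyfit(x, y, 1)
predict = lambda coef, x: np.polyval(coef, x)

rng = np.random.default_rng(4)
x = rng.uniform(-1.0, 1.0, 200)
y = 3.0 * x + rng.normal(0.0, 0.2, 200)
err = kfold_mse(x, y, k=5, fit=fit, predict=predict, rng=rng)
```

The abstract's point is that the `k` model fits above are the cost a single-model error estimate would avoid.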

Journal: :CoRR 2018
Kazuto Fukuchi, Jun Sakuma

This paper addresses the problem of estimating an additive functional of φ, defined as θ(P; φ) = Σ_{i=1}^{k} φ(p_i), given n i.i.d. random samples drawn from a discrete distribution P = (p_1, ..., p_k) with alphabet size k. As we showed in the previous paper [1], the minimax optimal rate of this problem is characterized by the divergence speed of the fourth derivative of φ in a range o...
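To make the objects concrete: with φ(p) = −p log p the additive functional θ(P; φ) is the Shannon entropy, and the naive plug-in estimator simply applies φ to empirical frequencies. This Python sketch is only the plug-in baseline, not the estimator analysed in the paper:

```python
import math
from collections import Counter

def additive_functional(probs, phi):
    """theta(P; phi) = sum_i phi(p_i) for a known discrete distribution."""
    return sum(phi(p) for p in probs)

def plugin_estimate(samples, phi):
    """Naive plug-in estimator: apply phi to the empirical frequencies."""
    n = len(samples)
    counts = Counter(samples)
    return sum(phi(c / n) for c in counts.values())

# Shannon entropy (in nats) is the special case phi(p) = -p log p.
phi_entropy = lambda p: -p * math.log(p) if p > 0 else 0.0
true_H = additive_functional([0.5, 0.25, 0.25], phi_entropy)
```

The plug-in estimator is known to be badly biased when the alphabet size k is comparable to n, which is the regime the minimax analysis targets.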

2015
Alon Orlitsky, Ananda Theertha Suresh

Estimating distributions over large alphabets is a fundamental machine-learning tenet. Yet no method is known to estimate all distributions well. For example, add-constant estimators are nearly min-max optimal but often perform poorly in practice, and practical estimators such as absolute discounting, Jelinek-Mercer, and Good-Turing are not known to be near optimal for essentially any distribut...
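Two of the estimators named above can be written in a few lines. The following Python sketch, with an assumed integer alphabet, shows the add-β rule and the Good-Turing estimate of the missing mass:

```python
from collections import Counter

def add_constant(samples, alphabet_size, beta=1.0):
    """Add-beta estimator: p_hat(x) = (count(x) + beta) / (n + beta * k)."""
    n = len(samples)
    counts = Counter(samples)
    denom = n + beta * alphabet_size
    return {x: (counts.get(x, 0) + beta) / denom for x in range(alphabet_size)}

def good_turing_missing_mass(samples):
    """Good-Turing estimate of the total probability of unseen symbols: N_1 / n,
    where N_1 is the number of symbols observed exactly once."""
    counts = Counter(samples)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(samples)

p_hat = add_constant([0, 0, 1, 2], alphabet_size=4, beta=1.0)
# the unseen symbol 3 still receives mass (0 + 1) / (4 + 4) = 0.125
```

The contrast in the abstract is visible even here: the add-β rule spreads mass uniformly over unseen symbols, while Good-Turing adapts the unseen mass to how many singletons the sample contains.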

2001
Evarist Giné, Vladimir Koltchinskii, Joel Zinn

Let f_n denote a kernel density estimator of a continuous density f in d dimensions, bounded and positive. Let ψ(t) be a positive continuous function such that ‖ψ f^β‖_∞ < ∞ for some 0 < β < 1/2. Under natural smoothness conditions, necessary and sufficient conditions for the sequence √(n h_n^d / (2|log h_n|)) ‖ψ(t)(f_n(t) − E f_n(t))‖_∞ to be stochastically bounded and to converge a.s. to a constant are obtained...
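A one-dimensional Python sketch of f_n and the normalising sequence from the abstract, with d = 1, a Gaussian kernel, and an arbitrary bandwidth h (all illustrative choices):

```python
import numpy as np

def kde(t, data, h):
    """Gaussian kernel density estimator f_n(t) = (1/(n h)) * sum_i K((t - X_i)/h)."""
    z = (t[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(1)
data = rng.normal(size=2000)   # sample from a standard normal density
h = 0.2                        # bandwidth h_n, fixed here for illustration
grid = np.linspace(-3.0, 3.0, 601)
fn = kde(grid, data, h)
# the normalising sequence of the theorem, specialised to d = 1:
rate = np.sqrt(len(data) * h / (2.0 * abs(np.log(h))))
```

The theorem concerns the weighted sup-norm of the stochastic term f_n − E f_n scaled by `rate`; the sketch only constructs the two ingredients, since E f_n has no closed form for a general kernel and density.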

Journal: :IEICE Transactions 2009
Makoto Yamada, Masashi Sugiyama

The ratio of two probability densities is called the importance and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an...
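The quantity being estimated is the importance w(x) = p_nu(x) / p_de(x). The paper's GMM-based procedure is not reproduced here; as a naive Python baseline, one can take the ratio of two kernel density estimates (the Gaussian samples and the bandwidth are illustrative assumptions):

```python
import numpy as np

def gaussian_kde(points, data, h):
    """Gaussian kernel density estimate evaluated at the given points."""
    z = (points[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(2)
x_de = rng.normal(0.0, 1.0, 1000)   # denominator sample (e.g. training data)
x_nu = rng.normal(0.5, 1.0, 1000)   # numerator sample (e.g. test data)
h = 0.3

# importance weights w(x) = p_nu(x) / p_de(x) at the denominator points
w = gaussian_kde(x_de, x_nu, h) / np.maximum(gaussian_kde(x_de, x_de, h), 1e-12)
# reweighting the denominator sample should move statistics toward the
# numerator law, e.g. the weighted mean toward 0.5
shifted_mean = np.average(x_de, weights=w)
```

KLIEP and its extensions instead fit w(x) directly with a parametric model, which avoids the error amplification that comes from dividing two separately estimated densities.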

2009
Rudolf Kulhavý

While the general theory of recursive Bayesian estimation of dynamic models is well developed, its practical implementation is restricted to a narrow class of models, typically models with linear dynamics and Gaussian stochastics. The theoretically optimal solution is infeasible for non-linear and/or non-Gaussian models due to its excessive demands on computational memory and time. Parameter es...
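The tractable linear-Gaussian special case mentioned above is the Kalman filter, where the exact Bayesian recursion carries only a mean and a variance at each step. A minimal scalar Python sketch, with illustrative model parameters:

```python
import numpy as np

def kalman_1d(observations, a, q, c, r, m0=0.0, p0=1.0):
    """Recursive Bayesian estimation for the scalar linear-Gaussian model
    x_k = a*x_{k-1} + w_k,  y_k = c*x_k + v_k,  w ~ N(0, q), v ~ N(0, r).
    The posterior stays Gaussian, so (mean, variance) summarise it exactly."""
    m, p = m0, p0
    means = []
    for y in observations:
        # predict step
        m_pred = a * m
        p_pred = a * a * p + q
        # update step
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        m = m_pred + k * (y - c * m_pred)
        p = (1.0 - k * c) * p_pred
        means.append(m)
    return np.array(means)

rng = np.random.default_rng(3)
true_x = 2.0
ys = true_x + rng.normal(0.0, 0.5, 200)   # static state observed in noise
est = kalman_1d(ys, a=1.0, q=0.0, c=1.0, r=0.25)
```

For non-linear or non-Gaussian models the posterior no longer admits such a finite summary, which is exactly the infeasibility the abstract refers to.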
