Search results for: gibbs sampling
Number of results: 219,418. Filter results by year:
Inspired by the hierarchical Dirichlet process (HDP), we present a generalized coAT (co-author topic) model, also called the infinite coAT model, in this paper. The infinite coAT model is a nonparametric extension of the coAT model that can automatically determine the number of topics governing the probabilistic distribution of words. One does not need to provide prior inform...
This paper describes a method for sampling from a non-standard distribution which is important in both population genetics and directional statistics. Current approaches rely on complicated procedures which do not work well, if at all, in high dimensions and usual parameter set-ups. We use a Gibbs sampler which seems necessary in practical situations of high dimensions.
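The abstract above does not spell out its sampler, but the basic mechanics it relies on can be illustrated with a minimal, self-contained sketch: a Gibbs sampler for a bivariate standard normal with correlation rho, where each full conditional is itself normal and can be drawn exactly. All names here are illustrative, not taken from the paper.

```python
import math
import random


def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Toy Gibbs sampler for a bivariate standard normal with correlation rho.

    The full conditionals are x | y ~ N(rho * y, 1 - rho**2) and symmetrically
    for y | x, so the chain simply alternates exact conditional draws.
    """
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1 - rho ** 2)  # conditional standard deviation
    samples = []
    for i in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)  # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)  # draw y from p(y | x)
        if i >= burn_in:
            samples.append((x, y))
    return samples


samples = gibbs_bivariate_normal(rho=0.5, n_samples=5000)
mean_x = sum(s[0] for s in samples) / len(samples)  # should be near 0
```

In higher dimensions the same scheme cycles through one coordinate (or block) at a time, which is why Gibbs sampling is attractive exactly where joint sampling is infeasible.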
Let (Ω, F, P) be a probability space. For each G ⊂ F, define Ḡ as the σ-field generated by G and those sets F ∈ F satisfying P(F) ∈ {0, 1}. Conditions for P to be atomic on ∩ᵢ₌₁ᵏ Āᵢ, with A₁, …, A_k ⊂ F sub-σ-fields, are given. Conditions for P to be 0–1-valued on ∩ᵢ₌₁ᵏ Āᵢ are given as well. These conditions are useful in various fields, including Gibbs sampling, iterated conditional expecta...
The well-known product partition model (PPM) is considered for the identification of multiple change points in the means and variances of normal data sequences. In a natural fashion, the PPM may provide product estimates of these parameters at each instant of time, as well as the posterior distributions of the partitions and the number of change points. Prior distributions are assumed for the me...
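The PPM machinery above is considerably richer than this, but the Gibbs flavor of change-point inference can be sketched with a much simpler cousin: a single change point in the mean of a normal sequence with known unit variance and flat priors. The sampler alternates between the two segment means and the discrete change point; every name and prior choice here is an assumption for illustration.

```python
import math
import random


def gibbs_change_point(data, n_iter=2000, seed=0):
    """Toy Gibbs sampler for one change point in the mean of a normal sequence
    (known unit variance, flat priors on the means and the change point)."""
    rng = random.Random(seed)
    n = len(data)
    tau = n // 2
    taus = []
    for _ in range(n_iter):
        # (1) Segment means: with unit variance and a flat prior, each mean's
        # full conditional is N(sample mean, 1 / segment length).
        seg1, seg2 = data[:tau], data[tau:]
        mu1 = rng.gauss(sum(seg1) / len(seg1), 1 / math.sqrt(len(seg1)))
        mu2 = rng.gauss(sum(seg2) / len(seg2), 1 / math.sqrt(len(seg2)))
        # (2) Change point: discrete full conditional proportional to the
        # likelihood of splitting the data at t, computed in log space.
        logw = []
        for t in range(1, n):
            ll = -0.5 * (sum((x - mu1) ** 2 for x in data[:t])
                         + sum((x - mu2) ** 2 for x in data[t:]))
            logw.append(ll)
        mx = max(logw)
        weights = [math.exp(v - mx) for v in logw]
        tau = rng.choices(range(1, n), weights=weights)[0]
        taus.append(tau)
    return taus


# Simulated sequence: the mean jumps from 0 to 3 at index 30.
sim = random.Random(42)
data = [sim.gauss(0, 1) for _ in range(30)] + [sim.gauss(3, 1) for _ in range(30)]
taus = gibbs_change_point(data)
post = taus[500:]  # discard burn-in
mode = max(set(post), key=post.count)  # posterior mode near the true index 30
```

The full PPM handles an unknown number of change points and unknown variances by placing a prior over partitions; the alternation between continuous parameters and a discrete partition variable is the same idea in miniature here.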
Applications and curricula of decision analysis currently do not include methods to compute Bayes’ rule and obtain posteriors for non-conjugate prior distributions. The current convention is to force the decision maker’s belief to take the form of a conjugate distribution, leading to a suboptimal decision. BUGS software, which uses MCMC methods, is numerically capable of obtaining posteriors for...
We give a Markov chain that converges to its stationary distribution very slowly. It has the form of a Gibbs sampler running on a posterior distribution of a parameter β given data X. Consequences for Gibbs sampling are discussed.
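The slow-convergence phenomenon this abstract studies is easy to reproduce in a toy setting: when the target has strongly dependent components, a Gibbs sampler that updates one component at a time takes tiny steps, and the chain's autocorrelation approaches 1. The sketch below (illustrative, not the paper's construction) measures lag-1 autocorrelation of a Gibbs chain on a bivariate normal for weak versus near-perfect correlation.

```python
import math
import random


def gibbs_chain(rho, n, seed=1):
    """Gibbs sampler on a bivariate standard normal with correlation rho;
    returns the trace of the first coordinate."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = math.sqrt(1 - rho ** 2)
    xs = []
    for _ in range(n):
        x = rng.gauss(rho * y, sd)
        y = rng.gauss(rho * x, sd)
        xs.append(x)
    return xs


def lag1_autocorr(xs):
    """Empirical lag-1 autocorrelation of a sequence."""
    n = len(xs)
    m = sum(xs) / n
    var = sum((v - m) ** 2 for v in xs) / n
    cov = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(n - 1)) / n
    return cov / var


fast = lag1_autocorr(gibbs_chain(0.1, 20000))   # near rho**2 = 0.01: mixes well
slow = lag1_autocorr(gibbs_chain(0.99, 20000))  # near rho**2 = 0.98: mixes slowly
```

For this target the x-chain's lag-1 autocorrelation is rho², so as rho → 1 the effective sample size collapses even though each individual update is an exact conditional draw.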
We present a preliminary study on unsupervised preposition sense disambiguation (PSD), comparing different models and training techniques (EM, MAP-EM with L0 norm, Bayesian inference using Gibbs sampling). To our knowledge, this is the first attempt at unsupervised preposition sense disambiguation. Our best accuracy reaches 56%, a significant improvement (at p < .001) of 16% over the most-freque...