Search results for: posterior distribution
Number of results: 711,755
We consider the problem of sequential learning from categorical observations bounded in [0, 1]. We establish an ordering between the Dirichlet posterior over categorical outcomes and a Gaussian posterior under observations with N(0, 1) noise. We show that, conditioned on identical data with at least two observations, the posterior mean of the categorical distribution will always second-o...
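A minimal sketch of the two conjugate updates this abstract compares, assuming binary outcomes (so the Dirichlet reduces to a Beta) and illustrative prior choices; the abstract truncates before stating the ordering itself, so nothing below is the paper's result:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: n observations bounded in [0, 1].
obs = rng.uniform(0.0, 1.0, size=10)

# Beta/Dirichlet-style update (two categories, so the Dirichlet is a Beta):
# treat each y in [0, 1] as fractional evidence for the two outcomes.
alpha, beta = 1.0, 1.0                      # uniform Dirichlet(1, 1) prior
alpha_post = alpha + obs.sum()
beta_post = beta + (1.0 - obs).sum()
dirichlet_mean = alpha_post / (alpha_post + beta_post)

# Conjugate Gaussian update assuming N(0, 1) observation noise; the prior
# mean and variance here are assumptions, not the paper's choices.
prior_mean, prior_var, noise_var = 0.5, 1.0, 1.0
n = len(obs)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
gaussian_mean = post_var * (prior_mean / prior_var + obs.sum() / noise_var)

print(f"Dirichlet (Beta) posterior mean: {dirichlet_mean:.4f}")
print(f"Gaussian posterior mean:         {gaussian_mean:.4f}")
```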
We place a standard Dirichlet process prior on the inverse-link function of a binary regression model and study the posterior distribution as the prior precision parameter converges to zero. Simple closed-form expressions are available for the limiting posterior density of the regression parameters and the posterior predictive distribution. This limiting posterior exhibits instability as the sa...
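The closed-form limits are not visible in the truncated abstract, but the qualitative behavior of a Dirichlet process as its precision parameter shrinks can be illustrated with a truncated stick-breaking draw (a sketch; `dp_stick_breaking`, the base measure, and the truncation level are assumptions, not the paper's construction):

```python
import numpy as np

def dp_stick_breaking(alpha, base_sampler, n_atoms=500, rng=None):
    """Truncated stick-breaking draw from DP(alpha, base measure)."""
    rng = rng or np.random.default_rng()
    betas = rng.beta(1.0, alpha, size=n_atoms)
    # w_k = beta_k * prod_{j<k} (1 - beta_j)
    weights = betas * np.cumprod(np.concatenate(([1.0], 1.0 - betas[:-1])))
    atoms = base_sampler(n_atoms, rng)
    return atoms, weights

rng = np.random.default_rng(1)
base = lambda n, r: r.normal(0.0, 1.0, size=n)   # standard-normal base measure

# As alpha -> 0, one stick weight approaches 1: the random distribution
# collapses to a single atom, which is what drives a degenerate limit.
for alpha in (10.0, 1.0, 0.01):
    atoms, w = dp_stick_breaking(alpha, base, rng=rng)
    print(f"alpha={alpha:>5}: largest stick weight = {w.max():.3f}")
```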
Thirty minutes after intravenous administration of the glomerular renal agent Tc-99m DTPA, both right and left lateral views were obtained. We analyzed the ratio of optical densities (behind the ureter / in front of the ureter). In patients without gross renal failure or retroperitoneal disease, the ratio was always less than 1 (range 0.38 to 0.95, mean 0.68). This represents greater perfusi...
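A small arithmetic sketch of the reported ratio analysis; the readings below are illustrative values, not the study's data:

```python
# Hypothetical optical-density readings from the lateral views.
behind = [0.42, 0.51, 0.38]      # optical density behind the ureter
in_front = [0.70, 0.66, 0.73]    # optical density in front of the ureter

ratios = [b / f for b, f in zip(behind, in_front)]
mean_ratio = sum(ratios) / len(ratios)
print(f"ratios: {[round(r, 2) for r in ratios]}, mean = {mean_ratio:.2f}")
assert all(r < 1 for r in ratios)  # consistent with the reported range
```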
McAllester's PAC-Bayes theorem (strengthened by [4]) characterizes the convergence of a stochastic classifier's empirical error to its generalization error. For a fixed "prior" distribution P(h) over the hypothesis space H, the theorem holds for all "posterior" distributions Q(h) over H simultaneously, so in practice we can find a data-dependent posterior distribution over H as the distribution of ...
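A sketch of how such a bound can be evaluated for a Gaussian posterior Q against a Gaussian prior P over classifier weights; the specific bound variant, priors, and dimensions below are assumptions, since the abstract does not state them:

```python
import numpy as np

def kl_diag_gaussians(mu_q, var_q, mu_p, var_p):
    """KL(Q || P) between diagonal Gaussians over hypothesis weights."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def mcallester_bound(emp_err, kl_qp, n, delta=0.05):
    """One common McAllester-style form, via Pinsker's inequality:
    gen <= emp + sqrt((KL(Q||P) + ln(2*sqrt(n)/delta)) / (2n))."""
    return emp_err + np.sqrt((kl_qp + np.log(2 * np.sqrt(n) / delta)) / (2 * n))

# Hypothetical prior P = N(0, I) and data-dependent posterior Q over weights.
d, n = 50, 10_000
mu_q = 0.1 * np.ones(d)
kl_qp = kl_diag_gaussians(mu_q, np.full(d, 0.5), np.zeros(d), np.ones(d))
print(f"KL(Q||P) = {kl_qp:.2f}")
print(f"bound on generalization error: {mcallester_bound(0.08, kl_qp, n):.4f}")
```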
We derive the asymptotic approximation for the posterior distribution when the data are multinomial and the prior is Dirichlet conditioned on satisfying a finite set of linear equality and inequality constraints so the posterior is also Dirichlet conditioned on satisfying these same constraints. When only equality constraints are imposed, the asymptotic approximation is normal. Otherwise it is ...
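One simple way to draw from such a constrained Dirichlet posterior is rejection sampling from the unconstrained posterior (a sketch with an illustrative inequality constraint, not one taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Multinomial counts (hypothetical data) and a Dirichlet(1, ..., 1) prior.
counts = np.array([30, 50, 20])
alpha_post = 1.0 + counts            # unconstrained posterior is Dirichlet

def constrained_dirichlet_sample(alpha, constraint, n_keep=5000):
    """Keep only unconstrained Dirichlet draws that satisfy the constraint."""
    kept = []
    while len(kept) < n_keep:
        p = rng.dirichlet(alpha, size=2 * n_keep)
        kept.extend(p[constraint(p)])
    return np.array(kept[:n_keep])

# Illustrative inequality constraint: p1 >= p3.
samples = constrained_dirichlet_sample(alpha_post, lambda p: p[:, 0] >= p[:, 2])
print("constrained posterior mean:", samples.mean(axis=0).round(3))
```

Rejection is exact but wasteful when the constraint region has small posterior probability, which is one reason asymptotic approximations like the one derived here are useful.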
Bayesian belief networks can represent the complicated probabilistic processes that form natural sensory inputs. Once the parameters of the network have been learned, nonlinear inferences about the input can be made by computing the posterior distribution over the hidden units (e.g., depth in stereo vision) given the input. Computing the posterior distribution exactly is not practical in richly...
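A minimal sketch of approximate posterior inference over hidden units in a toy sigmoid belief network, here via Gibbs sampling; the network sizes, parameters, and choice of sampler are assumptions, since the abstract truncates before naming its method:

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# Toy network: hidden h in {0,1}^3 generates visible v in {0,1}^4.
W = rng.normal(size=(4, 3))
b = np.zeros(3)              # hidden biases
c = np.zeros(4)              # visible biases
v = np.array([1, 0, 1, 1])   # observed input

def log_joint(h):
    pv = sigmoid(W @ h + c)
    ph = sigmoid(b)
    return (np.sum(v * np.log(pv) + (1 - v) * np.log1p(-pv))
            + np.sum(h * np.log(ph) + (1 - h) * np.log1p(-ph)))

# Gibbs sampling: resample each hidden unit from its full conditional,
# p(h_j = 1 | rest) = sigmoid(log_joint(h_j=1) - log_joint(h_j=0)).
h = rng.integers(0, 2, size=3).astype(float)
samples = []
for sweep in range(2000):
    for j in range(3):
        h[j] = 1.0
        lp1 = log_joint(h)
        h[j] = 0.0
        lp0 = log_joint(h)
        h[j] = float(rng.random() < sigmoid(lp1 - lp0))
    if sweep >= 500:                 # discard burn-in
        samples.append(h.copy())

print("estimated posterior mean of hidden units:",
      np.mean(samples, axis=0).round(3))
```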
The posterior distribution of the number of components k in a finite mixture satisfies a set of inequality constraints. The result holds irrespective of the parametric form of the mixture components and under assumptions on the prior distribution weaker than those routinely made in the literature on Bayesian analysis of finite mixtures. The inequality constraints can be used to perform an “inte...
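A rough sketch of computing a posterior over the number of components k for a toy Gaussian mixture, using a naive Monte Carlo estimate of the marginal likelihood p(y | k) under illustrative priors; the component form and priors are assumptions, since the result above is stated irrespective of them:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Hypothetical data from a 2-component Gaussian mixture.
y = np.concatenate([rng.normal(-2, 1, 60), rng.normal(2, 1, 40)])

def log_marginal_likelihood(y, k, n_prior_draws=4000):
    """Naive Monte Carlo estimate of p(y | k): average the likelihood over
    prior draws of the mixture parameters."""
    logliks = np.empty(n_prior_draws)
    for s in range(n_prior_draws):
        w = rng.dirichlet(np.ones(k))          # mixture weights ~ Dirichlet(1)
        mu = rng.normal(0, 3, size=k)          # component means ~ N(0, 9)
        dens = w @ np.exp(norm.logpdf(y[None, :], mu[:, None], 1.0))
        logliks[s] = np.sum(np.log(dens))
    m = logliks.max()
    return m + np.log(np.mean(np.exp(logliks - m)))   # log-mean-exp

log_ml = np.array([log_marginal_likelihood(y, k) for k in (1, 2, 3)])
post_k = np.exp(log_ml - log_ml.max())
post_k /= post_k.sum()                         # uniform prior on k
print("posterior over k =", dict(zip((1, 2, 3), post_k.round(3))))
```

The naive estimator is high-variance and shown only for illustration; practical analyses of p(k | data) typically rely on methods such as reversible-jump MCMC.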