Search results for: gaussian mixture model

Number of results: 2,218,239

2000
Nir Friedman, Iftach Nachman

In this paper we address the problem of learning the structure of a Bayesian network in domains with continuous variables. This task requires a procedure for comparing different candidate structures. In the Bayesian framework, this is done by evaluating the marginal likelihood of the data given a candidate structure. This term can be computed in closed form for standard parametric families (e.g...
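For context, the score in question integrates the likelihood against the parameter prior; the closed form arises when the prior is conjugate (e.g. Normal-Gamma for Gaussian variables). This is the standard textbook expression, not a formula quoted from the paper:

```latex
p(D \mid G) \;=\; \int p(D \mid \theta, G)\, p(\theta \mid G)\, d\theta
```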

2004
Yuanxin Zhu, Yunxin Zhao, Kannappan Palaniappan, Xiaobo Zhou, Xinhua Zhuang

An optimal Bayesian classifier using mixture distribution class models with joint learning of loss and prior probability functions is proposed for automatic land cover classification. The probability distribution for each land cover class is more realistically modeled as a population of Gaussian mixture densities. A novel two-stage learning algorithm is proposed to learn the Gaussian mixture mo...
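To make the modeling idea concrete, here is a minimal sketch of a Bayes classifier with class-conditional Gaussian mixture densities, using scikit-learn. It illustrates the general construction only, not the paper's two-stage learning algorithm or its joint learning of loss and prior functions:

```python
# Minimal sketch: one Gaussian mixture per class, combined via Bayes' rule.
import numpy as np
from sklearn.mixture import GaussianMixture

class GMMBayesClassifier:
    def __init__(self, n_components=3):
        self.n_components = n_components

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One Gaussian mixture density per land-cover class, plus empirical priors.
        self.models_ = {c: GaussianMixture(self.n_components).fit(X[y == c])
                        for c in self.classes_}
        self.log_priors_ = {c: np.log(np.mean(y == c)) for c in self.classes_}
        return self

    def predict(self, X):
        # Bayes rule: argmax_c  log p(x | c) + log p(c).
        scores = np.column_stack([self.models_[c].score_samples(X)
                                  + self.log_priors_[c]
                                  for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]
```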

2012
XuanLong Nguyen

We consider Wasserstein distance functionals for assessing the convergence of latent discrete measures, which serve as mixing distributions in hierarchical and nonparametric mixture models. We clarify the relationships between Wasserstein distances of mixing distributions and f-divergence functionals such as Hellinger and Kullback-Leibler distances on the space of mixture distributions using v...
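As a concrete instance, the Wasserstein distance between two discrete mixing measures (atoms at the component parameters, weighted by the mixing proportions) can be computed directly. The sketch below uses SciPy on made-up 1-D atoms and weights:

```python
# Sketch: first-order Wasserstein distance between two discrete mixing
# measures G = sum_i w_i * delta_{theta_i} with scalar atoms.
import numpy as np
from scipy.stats import wasserstein_distance

# Component locations and mixing weights of two 1-D mixtures (illustrative).
atoms_g  = np.array([-2.0, 0.0, 3.0]); weights_g  = np.array([0.2, 0.5, 0.3])
atoms_g2 = np.array([-1.5, 2.5]);      weights_g2 = np.array([0.4, 0.6])

print(wasserstein_distance(atoms_g, atoms_g2, weights_g, weights_g2))
```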

Journal: Pattern Recognition, 2012
Miguel Lázaro-Gredilla, Steven Van Vaerenbergh, Neil D. Lawrence

In this work we introduce a mixture of GPs to address the data association problem, i.e. to label a group of observations according to the sources that generated them. Unlike several previously proposed GP mixtures, the novel mixture has the distinct characteristic of using no gating function to determine the association of samples and mixture components. Instead, all the GPs in the mixture are...
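The flavor of the data-association problem can be sketched as follows: alternate between fitting one GP per tentative group and relabeling each observation by the GP under which it is most likely. This EM-style loop on made-up data is a generic illustration, not the gating-free mixture proposed in the paper:

```python
# Sketch: label observations by the GP most likely to have generated them.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 10, 60))[:, None]
# Two interleaved sources (a sinusoid and a linear trend) plus noise.
y = np.where(rng.random(60) < 0.5, np.sin(X[:, 0]), 0.2 * X[:, 0]) \
    + 0.1 * rng.standard_normal(60)

labels = rng.integers(0, 2, size=60)       # random initial association
for _ in range(5):
    gps = []
    for k in (0, 1):
        mask = labels == k
        if not mask.any():                 # guard: keep both experts non-empty
            mask[rng.integers(len(X))] = True
        gps.append(GaussianProcessRegressor(alpha=0.01).fit(X[mask], y[mask]))
    # Per-point predictive log-likelihood under each GP, then reassign.
    loglik = np.column_stack([
        norm.logpdf(y, *gp.predict(X, return_std=True)) for gp in gps])
    labels = np.argmax(loglik, axis=1)
```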

2011
XuanLong Nguyen

We consider Wasserstein distance functionals for comparing and assessing the convergence of latent discrete measures, which serve as mixing distributions in hierarchical and nonparametric mixture models. We explore the space of discrete probability measures metrized by Wasserstein distances, clarify the relationships between Wasserstein distances of mixing distributions and f-divergenc...
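One relationship of the kind the abstract alludes to, stated here from general knowledge of this literature rather than quoted from the paper: any f-divergence between the mixture distributions is dominated by a transportation distance on the mixing measures whose ground cost is the same f-divergence between components,

```latex
\rho_f\!\left(p_G,\, p_{G'}\right)
  \;\le\;
  \inf_{\kappa \in \Pi(G, G')}
  \int \rho_f\!\left(f(\cdot \mid \theta),\, f(\cdot \mid \theta')\right)
  \, d\kappa(\theta, \theta'),
```

where \(\Pi(G, G')\) is the set of couplings of the two mixing measures; the bound follows from the joint convexity of f-divergences.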

2004
Jacob Goldberger, Sam T. Roweis

In this paper we propose an efficient algorithm for reducing a large mixture of Gaussians into a smaller mixture while still preserving the component structure of the original model; this is achieved by clustering (grouping) the components. The method minimizes a new, easily computed distance measure between two Gaussian mixtures that can be motivated from a suitable stochastic model and the it...
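A common baseline for this task is greedy pairwise merging with moment matching, scored by a symmetrized KL divergence between components. The sketch below shows that baseline; the paper's clustering algorithm and its new distance measure differ in detail:

```python
# Sketch: reduce a Gaussian mixture by greedily merging the closest pair.
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL(N0 || N1) for d-dimensional Gaussians."""
    d = len(m0)
    S1inv = np.linalg.inv(S1)
    dm = m1 - m0
    return 0.5 * (np.trace(S1inv @ S0) + dm @ S1inv @ dm - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def merge(w0, m0, S0, w1, m1, S1):
    """Moment-matched merge of two weighted Gaussian components."""
    w = w0 + w1
    m = (w0 * m0 + w1 * m1) / w
    S = (w0 * (S0 + np.outer(m0 - m, m0 - m))
         + w1 * (S1 + np.outer(m1 - m, m1 - m))) / w
    return w, m, S

def reduce_mixture(ws, ms, Ss, target):
    comps = list(zip(ws, ms, Ss))
    while len(comps) > target:
        # Pick the pair with the smallest symmetrized KL between components.
        i, j = min(((i, j) for i in range(len(comps))
                    for j in range(i + 1, len(comps))),
                   key=lambda p: kl_gauss(comps[p[0]][1], comps[p[0]][2],
                                          comps[p[1]][1], comps[p[1]][2])
                               + kl_gauss(comps[p[1]][1], comps[p[1]][2],
                                          comps[p[0]][1], comps[p[0]][2]))
        comps[i] = merge(*comps[i], *comps[j])
        del comps[j]
    return comps
```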

2008
Chao Meng, Ambar N. Sengupta

We derive explicit formulas for CDO tranche sensitivity to parameter variations, and prove results concerning the qualitative behavior of such tranche sensitivities, for a homogeneous portfolio governed by the one-factor Gaussian copula. Similar results are also derived for a Poisson-mixture model.
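To fix ideas, here is a numerical sketch of expected tranche loss under the one-factor Gaussian copula with a large homogeneous portfolio approximation, plus a finite-difference correlation sensitivity. The paper derives such sensitivities analytically; all parameter values below are made up:

```python
# Sketch: expected tranche loss under a one-factor Gaussian copula.
import numpy as np
from scipy.stats import norm

def tranche_loss(rho, p=0.02, recovery=0.4, attach=0.03, detach=0.07):
    c = norm.ppf(p)
    # Gauss-Hermite (probabilists') nodes/weights to take E[f(Z)], Z ~ N(0,1).
    z, w = np.polynomial.hermite_e.hermegauss(80)
    # Conditional default probability given the common factor z.
    pz = norm.cdf((c - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
    L = (1.0 - recovery) * pz                      # portfolio fractional loss
    T = np.clip(L - attach, 0.0, detach - attach)  # tranche loss profile
    return (w @ T) / np.sqrt(2.0 * np.pi)

rho, eps = 0.3, 1e-4
sensitivity = (tranche_loss(rho + eps) - tranche_loss(rho - eps)) / (2 * eps)
print(tranche_loss(rho), sensitivity)
```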

2017
Gabriel Parra, Felipe Tobar

Initially, multiple-output Gaussian process models (MOGPs) were constructed as linear combinations of independent, latent, single-output Gaussian processes (GPs). This resulted in cross-covariance functions with limited parametric interpretation, thus conflicting with the intuitive understanding that single-output GPs offer in terms of lengthscales, frequencies and magnitudes, to name but a few. On the cont...
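For context, the construction the abstract describes is the linear model of coregionalization: with independent latent processes \(u_q \sim \mathcal{GP}(0, k_q)\) and outputs \(f_i(x) = \sum_q a_{iq}\, u_q(x)\), the cross-covariance is

```latex
\operatorname{cov}\big(f_i(x), f_j(x')\big) \;=\; \sum_{q} a_{iq}\, a_{jq}\, k_q(x, x'),
```

where the mixing weights \(a_{iq}\) carry none of the lengthscale or frequency interpretation available for single-output kernels, which is the limitation the abstract points to.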

2012
Aditya Tayal, Pascal Poupart, Yuying Li

We consider an infinite mixture model of Gaussian processes that share mixture components between nonlocal clusters in data. Meeds and Osindero (2006) use a single Dirichlet process prior to specify a mixture of Gaussian processes using an infinite number of experts. In this paper, we extend this approach to allow for experts to be shared non-locally across the input domain. This is accomplishe...
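The Dirichlet process prior over an unbounded pool of experts can be sampled by stick-breaking; the toy sketch below (truncation level and concentration chosen arbitrarily) shows how mixture weights over a conceptually infinite set of GP experts arise:

```python
# Truncated stick-breaking sample from a Dirichlet process prior.
import numpy as np

rng = np.random.default_rng(1)
alpha, truncation = 1.0, 20          # concentration; truncation level
v = rng.beta(1.0, alpha, size=truncation)
remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
weights = v * remaining              # pi_k = v_k * prod_{j<k} (1 - v_j)
# Each weight pi_k would be paired with its own GP expert over the inputs.
```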

2005
Daniela G. Calò, Cinzia Viroli

In this paper we present a strategy for producing low-dimensional projections that maximally separate the classes in Gaussian Mixture Model classification. The most revealing subspaces are those along which the classes are maximally separable. Here we consider a particular probability product kernel as a measure of similarity or affinity between the class conditional distributions. It takes an ...
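As an illustration of the idea, a candidate projection direction can be scored by the Bhattacharyya distance between the projected class-conditional Gaussians (the probability product kernel with rho = 1/2 is the Bhattacharyya coefficient). The random search below merely stands in for the paper's actual optimization strategy:

```python
# Sketch: score 1-D projections by Bhattacharyya distance between classes.
import numpy as np

def bhattacharyya_1d(m1, v1, m2, v2):
    v = 0.5 * (v1 + v2)
    return 0.125 * (m1 - m2) ** 2 / v + 0.5 * np.log(v / np.sqrt(v1 * v2))

def best_direction(X1, X2, n_tries=500, seed=0):
    rng = np.random.default_rng(seed)
    best, best_w = -np.inf, None
    for _ in range(n_tries):
        w = rng.standard_normal(X1.shape[1])
        w /= np.linalg.norm(w)
        s1, s2 = X1 @ w, X2 @ w          # 1-D projections of each class
        d = bhattacharyya_1d(s1.mean(), s1.var(), s2.mean(), s2.var())
        if d > best:
            best, best_w = d, w
    return best_w, best
```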

[Chart: number of search results per year]