Black-box α-divergence for Deep Generative Models
Authors
Abstract
We propose using the black-box α-divergence [1] as a flexible alternative to variational inference in deep generative models. By simply switching the objective function from the variational free-energy to the black-box α-divergence objective, we are able to learn better generative models, as demonstrated by a considerable improvement in test log-likelihood in several preliminary experiments.

1 Generative models and inference networks

We consider a probabilistic model for N D-dimensional observations x = {x_n}_{n=1}^N and assume K-dimensional continuous latent variables z = {z_n}_{n=1}^N, z_n ∈ R^K, as follows:

p(z) = N(z; 0, I)                                  (1)
p(x | z, θ) = ∏_{n=1}^N p(x_n | z_n, θ)            (2)
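Concretely, the switch the abstract describes amounts to replacing the Monte Carlo ELBO estimator with an α-dependent one. Below is a minimal PyTorch sketch of one common Monte Carlo α-objective of this kind; the exact BB-α energy of [1] differs in detail, and the `encoder`/`decoder` helpers (a Gaussian recognition network and a log-likelihood evaluator) are hypothetical stand-ins, not code from the paper.

```python
import math
import torch

def alpha_objective(x, encoder, decoder, alpha=0.5, n_samples=8):
    """Monte Carlo alpha-objective per data point (to be maximized).

    Hypothetical interfaces: encoder(x) -> (mu, log_var) of q(z|x);
    decoder(z, x) -> log p(x|z, theta) with shape (n_samples, batch).
    """
    mu, log_var = encoder(x)
    std = torch.exp(0.5 * log_var)
    eps = torch.randn(n_samples, *mu.shape)
    z = mu + std * eps                                   # reparameterized q-samples

    log_q = torch.distributions.Normal(mu, std).log_prob(z).sum(-1)
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(-1)
    log_w = decoder(z, x) + log_prior - log_q            # log importance weights

    if abs(alpha - 1.0) < 1e-6:                          # alpha -> 1: the usual ELBO
        return log_w.mean(0)
    scaled = (1.0 - alpha) * log_w
    return (torch.logsumexp(scaled, 0) - math.log(n_samples)) / (1.0 - alpha)
```

As α → 1 the estimator reduces to the variational free-energy bound, which is why a single α hyperparameter interpolates between the two objectives the abstract compares.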
Similar Papers
Information-Theoretic Exploration and Evaluation of Models
No information-theoretic quantity, such as entropy or Kullback-Leibler divergence, is meaningful without first assuming a probabilistic model. In Bayesian statistics, the model itself is uncertain, so the resulting information-theoretic quantities should also be treated as uncertain. Information theory provides a language for asking meaningful decision-theoretic questions about blackbox probabi...
LOGAN: Evaluating Privacy Leakage of Generative Models Using Generative Adversarial Networks
Recent advances in machine learning are paving the way for the artificial generation of high-quality images and videos. In this paper, we investigate how generating synthetic samples through generative models can lead to information leakage and, consequently, to privacy breaches affecting the individuals who contribute their personal or sensitive data to train these models. In order to q...
Boosted Generative Models
We propose a novel approach for using unsupervised boosting to create an ensemble of generative models, where models are trained in sequence to correct earlier mistakes; a minimal sketch of one such scheme follows. Our meta-algorithmic framework can leverage any existing base learner that permits likelihood evaluation, including recent deep expressive models. Further, our approach allows the ensemble to include discriminative models traine...
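As a rough illustration of the sequential scheme this snippet describes, here is a minimal sketch under one simple reweighting choice (examples the current ensemble models poorly get larger weights); the paper's actual framework is more general, and `BaseModel`, with its fit(data, weights)/log_prob(data) interface, is hypothetical.

```python
import numpy as np

def boost_generative(data, BaseModel, n_rounds=3):
    n = len(data)
    models = []
    weights = np.full(n, 1.0 / n)
    for _ in range(n_rounds):
        model = BaseModel()
        model.fit(data, weights)                   # weighted maximum likelihood
        models.append(model)
        # Up-weight examples the ensemble assigns low density to, so the
        # next learner focuses on correcting earlier mistakes.
        log_p = np.mean([m.log_prob(data) for m in models], axis=0)
        weights = np.exp(log_p.max() - log_p)      # stabilized exp(-log_p)
        weights /= weights.sum()
    return models
```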
Black-Box α-Divergence Minimization
Black-box alpha (BB-α) is a new approximate inference method based on the minimization of α-divergences. BB-α scales to large datasets because it can be implemented using stochastic gradient descent. BB-α can be applied to complex probabilistic models with little effort since it only requires as input the likelihood function and its gradients. These gradients can be easily obtained using automa...
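The two properties this snippet highlights, gradients from automatic differentiation and scalability via stochastic gradients over minibatches, fit in a few lines. A minimal sketch, assuming PyTorch, a minibatch `loader`, and any differentiable `objective` (such as the α-objective sketched above); none of these names come from the paper.

```python
import torch

def train_bb(params, objective, loader, lr=1e-3, epochs=10):
    opt = torch.optim.Adam(params, lr=lr)     # stochastic gradient optimizer
    for _ in range(epochs):
        for x in loader:                      # minibatches give SGD scalability
            loss = -objective(x).mean()       # maximize the alpha-objective
            opt.zero_grad()
            loss.backward()                   # gradients via autodiff only
            opt.step()
    return params
```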
Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning
We propose a simple algorithm to train stochastic neural networks to draw samples from given target distributions for probabilistic inference. Our method is based on iteratively adjusting the neural network parameters so that the output changes along a Stein variational gradient (Liu & Wang, 2016) that maximally decreases the KL divergence to the target distribution. Our method works for any ...
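The update direction cited here is the Stein variational gradient of Liu & Wang (2016). A minimal NumPy sketch with an RBF kernel and the median bandwidth heuristic, both standard choices not specified in this abstract:

```python
import numpy as np

def svgd_direction(z, grad_log_p):
    """z: (n, d) particles; grad_log_p: (n, d) scores grad log p at the particles."""
    diffs = z[:, None, :] - z[None, :, :]                     # pairwise differences
    sq_dists = (diffs ** 2).sum(-1)
    h = max(np.median(sq_dists) / np.log(len(z) + 1), 1e-8)   # median heuristic
    k = np.exp(-sq_dists / h)                                 # RBF kernel (symmetric)
    # Kernel-smoothed score (pulls particles toward high density) plus a
    # repulsive term from the kernel gradient (keeps them spread out).
    attract = k @ grad_log_p
    repulse = (2.0 / h) * (z * k.sum(1, keepdims=True) - k @ z)
    return (attract + repulse) / len(z)
```

Particles (or a sampler network's outputs) are then nudged by z ← z + ε · svgd_direction(z, grad_log_p), which is the KL-decreasing direction the snippet refers to.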