Improving Variational Inference with Inverse Autoregressive Flow
Authors
Abstract
We propose a simple and practical method for improving the flexibility of the approximate posterior in variational auto-encoders (VAEs) through a transformation with autoregressive networks. Autoregressive networks, such as RNNs and RNADE networks, are very powerful models. However, their sequential nature makes them impractical for direct use with VAEs, as sequentially sampling the latent variables is slow when implemented on a GPU. Fortunately, we find that by inverting autoregressive networks we can obtain equally powerful data transformations that can be computed in parallel. We call these data transformations inverse autoregressive flows (IAF), and we show that they can be used to transform a simple distribution over the latent variables into a much more flexible distribution, while still allowing us to compute the resulting variables’ probability density function. The method is computationally cheap, can be made arbitrarily flexible, and (in contrast with previous work) is naturally applicable to latent variables that are organized in multidimensional tensors, such as 2D grids or time series. The method is applied to a novel deep architecture of variational auto-encoders. In experiments we demonstrate that inverse autoregressive flow leads to significant performance gains when applied to variational auto-encoders for natural images.
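To make the parallel update concrete, here is a minimal NumPy sketch of one IAF step. The gated form z ← σ ⊙ z + (1 − σ) ⊙ m follows the update the abstract describes, but the masked-linear network, names, and dimensions are illustrative assumptions, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # latent dimensionality (illustrative)

# Toy masked-linear autoregressive network: strictly lower-triangular
# weights ensure output t depends only on inputs 1..t-1, as in MADE.
W_m = np.tril(rng.normal(size=(D, D)), k=-1)
W_s = np.tril(rng.normal(size=(D, D)), k=-1)

def made_forward(z):
    """Stand-in for an autoregressive network producing (m, s) in one pass."""
    return W_m @ z, W_s @ z

def iaf_step(z, log_q):
    """One inverse autoregressive flow step; all D dims update in parallel."""
    m, s = made_forward(z)
    sigma = 1.0 / (1.0 + np.exp(-s))        # gates in (0, 1)
    z_new = sigma * z + (1.0 - sigma) * m   # gated autoregressive update
    # Triangular Jacobian => log-determinant is just a sum of log-gates.
    log_q_new = log_q - np.sum(np.log(sigma))
    return z_new, log_q_new

# Draw from the simple base posterior N(0, I) and transform it once.
z0 = rng.normal(size=D)
log_q0 = -0.5 * np.sum(z0**2 + np.log(2.0 * np.pi))
z1, log_q1 = iaf_step(z0, log_q0)
```

Stacking several such steps, with the ordering of the latent dimensions typically reversed in between, is what makes the posterior arbitrarily flexible while keeping log q(z) tractable.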
Related papers
Sylvester Normalizing Flows for Variational Inference
Variational inference relies on flexible approximate posterior distributions. Normalizing flows provide a general recipe to construct flexible variational posteriors. We introduce Sylvester normalizing flows, which can be seen as a generalization of planar flows. Sylvester normalizing flows remove the well-known single-unit bottleneck from planar flows, making a single transformation much more ...
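For context on the "single-unit bottleneck", the following is a sketch from the standard definitions of these flows (notation assumed, not taken from this snippet): a planar flow perturbs z along a single direction, while a Sylvester flow applies M hidden units at once and keeps the Jacobian determinant cheap via Sylvester's determinant identity.

```latex
% Planar flow: a rank-one ("single-unit") update of z \in \mathbb{R}^D
z' = z + u\,h(w^{\top} z + b), \qquad
\Big|\det \tfrac{\partial z'}{\partial z}\Big|
  = \big|1 + h'(w^{\top} z + b)\,u^{\top} w\big|

% Sylvester flow: M units at once, A \in \mathbb{R}^{D \times M},
% B \in \mathbb{R}^{M \times D}; the determinant stays M x M via
% Sylvester's identity, det(I_D + AB) = det(I_M + BA).
z' = z + A\,h(B z + b), \qquad
\det\!\big(I_D + A\,\mathrm{diag}(h')\,B\big)
  = \det\!\big(I_M + \mathrm{diag}(h')\,B A\big)
```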
Learnable Explicit Density for Continuous Latent Space and Variational Inference
In this paper, we study two aspects of the variational autoencoder (VAE): the prior distribution over the latent variables and its corresponding posterior. First, we decompose the learning of VAEs into layerwise density estimation, and argue that having a flexible prior is beneficial to both sample generation and inference. Second, we analyze the family of inverse autoregressive flows (inverse ...
for "Masked Autoregressive Flow for Density Estimation"
Suppose now that we wish to fit the implicit density p_u(u) to the base density π_u(u) by minimizing the above KL. This corresponds exactly to the objective minimized when employing IAF as a recognition network in stochastic variational inference [7], where π_u(u) would be the (typically intractable) posterior. The first step in stochastic variational inference would be to rewrite the expectation ...
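The snippet cuts off mid-derivation. As a hedged reconstruction of the standard rewrite it points at (an assumption, since the snippet does not define p_u: take p_u to be the implicit pushforward of a tractable density p_x through the flow u = f(x)), the expectation under p_u is moved onto p_x so it can be estimated by Monte Carlo:

```latex
D_{\mathrm{KL}}(p_u \,\|\, \pi_u)
  = \mathbb{E}_{u \sim p_u}\!\big[\log p_u(u) - \log \pi_u(u)\big]
  = \mathbb{E}_{x \sim p_x}\!\Big[\log p_x(x)
      - \log\Big|\det \tfrac{\partial f}{\partial x}\Big|
      - \log \pi_u\!\big(f(x)\big)\Big]
```

The second equality uses the change-of-variables formula log p_u(f(x)) = log p_x(x) − log|det ∂f/∂x|; samples x ~ p_x then give an unbiased estimate of the KL, exactly as in reparameterized stochastic variational inference.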
Semi-Amortized Variational Autoencoders
Amortized variational inference (AVI) replaces instance-specific local inference with a global inference network. While AVI has enabled efficient training of deep generative models such as variational autoencoders (VAE), recent empirical work suggests that inference networks can produce suboptimal variational parameters. We propose a hybrid approach that uses AVI to initialize the variational par...
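As a hedged sketch of what such a hybrid step might look like in PyTorch: `encoder` and `elbo` are hypothetical callables standing in for the inference network and the variational bound, and the update schedule is illustrative rather than the paper's algorithm.

```python
import torch

def semi_amortized_update(x, encoder, elbo, K=5, lr=1e-2):
    """Hypothetical hybrid step: amortized init, then K per-instance updates."""
    mu, logvar = encoder(x)                # AVI: inference-network proposal
    mu = mu.detach().requires_grad_(True)  # cut the graph; treat as local
    logvar = logvar.detach().requires_grad_(True)
    opt = torch.optim.SGD([mu, logvar], lr=lr)
    for _ in range(K):                     # SVI: refine this instance only
        opt.zero_grad()
        loss = -elbo(x, mu, logvar)        # maximize the variational bound
        loss.backward()
        opt.step()
    return mu, logvar
```

Detaching the amortized proposal makes mu and logvar local leaf variables, so the K refinement steps adjust only this instance's variational parameters.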
Statistical Inference in Autoregressive Models with Non-negative Residuals
Normally distributed residuals are one of the usual assumptions of autoregressive models, but in practice we are sometimes faced with the case of non-negative residuals. In this paper we consider several autoregressive models with non-negative residuals as competing models, and we derive the maximum likelihood estimators of their parameters based on a modified approach and the EM algorithm. Also,...
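To fix ideas, one concrete instance of this setting (an illustrative assumption; the snippet does not pin down the model class) is a first-order autoregressive model with, say, exponential residuals:

```latex
X_t = \phi X_{t-1} + \varepsilon_t, \qquad
\varepsilon_t \ge 0, \quad \text{e.g. } \varepsilon_t \sim \mathrm{Exp}(\lambda)
```

The constraint ε_t = X_t − φX_{t−1} ≥ 0 truncates the support of the likelihood, which is why the usual Gaussian-based estimators are replaced by modified maximum-likelihood and EM procedures.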
Journal title: CoRR
Volume: abs/1606.04934
Issue: -
Pages: -
Publication date: 2016