Multiscale Invertible Generative Networks for High-Dimensional Bayesian
Inference
- URL: http://arxiv.org/abs/2105.05489v1
- Date: Wed, 12 May 2021 07:51:47 GMT
- Title: Multiscale Invertible Generative Networks for High-Dimensional Bayesian
Inference
- Authors: Shumao Zhang, Pengchuan Zhang, Thomas Y. Hou
- Abstract summary: We propose a Multiscale Invertible Generative Network (MsIGN) to solve high-dimensional Bayesian inference.
MsIGN exploits the low-dimensional nature of the posterior, and generates samples from coarse to fine scale.
On the natural image synthesis task, MsIGN achieves superior performance in bits-per-dimension over baseline models.
- Score: 9.953855915186352
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Multiscale Invertible Generative Network (MsIGN) and associated
training algorithm that leverages multiscale structure to solve
high-dimensional Bayesian inference. To address the curse of dimensionality,
MsIGN exploits the low-dimensional nature of the posterior, and generates
samples from coarse to fine scale (low to high dimension) by iteratively
upsampling and refining samples. MsIGN is trained in a multi-stage manner to
minimize the Jeffreys divergence, which avoids mode dropping in
high-dimensional cases. On two high-dimensional Bayesian inverse problems, we
show superior performance of MsIGN over previous approaches in posterior
approximation and multiple mode capture. On the natural image synthesis task,
MsIGN achieves superior performance in bits-per-dimension over baseline models
and yields great interpretability of its neurons in intermediate layers.
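The Jeffreys divergence mentioned in the abstract is the symmetrized Kullback-Leibler divergence. In standard notation (ours, not taken from the paper), for a target posterior $p$ and model $q_\theta$:

```latex
D_{\mathrm{J}}(p \,\|\, q_\theta)
  = D_{\mathrm{KL}}(p \,\|\, q_\theta) + D_{\mathrm{KL}}(q_\theta \,\|\, p)
  = \int \big( p(x) - q_\theta(x) \big) \log \frac{p(x)}{q_\theta(x)} \, dx .
```

Because the $D_{\mathrm{KL}}(p \,\|\, q_\theta)$ term heavily penalizes regions where $p$ has mass but $q_\theta$ does not, minimizing the Jeffreys divergence discourages the mode dropping that reverse-KL-only training is prone to.

The coarse-to-fine sampling idea can be pictured with a minimal sketch: upsample the current sample to the next resolution, then refine it with a per-scale network. The function and argument names below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def coarse_to_fine_sample(coarse_sample, refiners, upsample_factor=2):
    """Sketch of coarse-to-fine generation: lift a coarse sample to the next
    resolution, then let a per-scale refinement network adjust it."""
    x = coarse_sample                     # (batch, channels, h, w) at the coarsest scale
    for refine in refiners:               # one refinement network per finer scale
        x = F.interpolate(x, scale_factor=upsample_factor,
                          mode="bilinear", align_corners=False)  # upsample
        x = refine(x)                     # refine the upsampled sample
    return x
```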
Related papers
- Diffusion Models for Solving Inverse Problems via Posterior Sampling with Piecewise Guidance [52.705112811734566]
A novel diffusion-based framework is introduced for solving inverse problems using a piecewise guidance scheme.
The proposed method is problem-agnostic and readily adaptable to a variety of inverse problems.
The framework achieves a reduction in inference time of 25% for inpainting with both random and center masks, and of 23% and 24% for 4x and 8x super-resolution tasks.
arXiv Detail & Related papers (2025-07-22T19:35:14Z) - Self-Parameterization Based Multi-Resolution Mesh Convolution Networks [0.0]
This paper addresses the challenges of designing mesh convolution neural networks for 3D mesh dense prediction.
The novelty of our approach lies in two key aspects. First, we construct a multi-resolution mesh pyramid directly from the high-resolution input data.
Second, we maintain the high-resolution representation in the multi-resolution convolution network, enabling multi-scale fusions.
arXiv Detail & Related papers (2024-08-25T08:11:22Z) - Multi-scale Unified Network for Image Classification [33.560003528712414]
CNNs face notable challenges in performance and computational efficiency when dealing with real-world, multi-scale image inputs.
We propose the Multi-scale Unified Network (MUSN), consisting of multi-scale subnets, a unified network, and a scale-invariant constraint.
MUSN yields an accuracy increase of up to 44.53% and reduces FLOPs by 7.01%-16.13% in multi-scale scenarios.
arXiv Detail & Related papers (2024-03-27T06:40:26Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff) for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - Multilevel Diffusion: Infinite Dimensional Score-Based Diffusion Models for Image Generation [2.5556910002263984]
Score-based diffusion models (SBDM) have emerged as state-of-the-art approaches for image generation.
This paper develops SBDMs in the infinite-dimensional setting, that is, we model the training data as functions supported on a rectangular domain.
We demonstrate how to overcome two shortcomings of current SBDM approaches in the infinite-dimensional setting.
arXiv Detail & Related papers (2023-03-08T18:10:10Z) - Denoising Diffusion Restoration Models [110.1244240726802]
Denoising Diffusion Restoration Models (DDRM) is an efficient, unsupervised posterior sampling method.
We demonstrate DDRM's versatility on several image datasets for super-resolution, deblurring, inpainting, and colorization.
arXiv Detail & Related papers (2022-01-27T20:19:07Z) - High-dimensional Assisted Generative Model for Color Image Restoration [12.459091135428885]
This work presents an unsupervised deep learning scheme that exploits a high-dimensional assisted score-based generative model for color image restoration tasks.
Considering the sample number and internal dimension of the score-based generative model, two high-dimensional transformations are proposed: the channel-copy transformation increases the sample number, while the pixel-scale transformation reduces the feasible dimension space.
To ease the difficulty of learning high-dimensional representations, a progressive strategy is proposed to improve performance.
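A rough sketch of one plausible reading of these two transformations for an image tensor is given below; the exact definitions are assumptions inferred from this summary, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def channel_copy(x, copies=2):
    """Assumed channel-copy transform: repeat the image along the channel
    axis, increasing the effective number of channel-wise samples."""
    # x: (batch, c, h, w) -> (batch, c * copies, h, w)
    return x.repeat(1, copies, 1, 1)

def pixel_scale(x, factor=2):
    """Assumed pixel-scale transform: fold spatial pixels into channels
    (space-to-depth), shrinking the spatial dimension space."""
    # x: (batch, c, h, w) -> (batch, c * factor**2, h // factor, w // factor)
    return F.pixel_unshuffle(x, downscale_factor=factor)
```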
arXiv Detail & Related papers (2021-08-14T04:05:29Z) - Manifold Topology Divergence: a Framework for Comparing Data Manifolds [109.0784952256104]
We develop a framework for comparing data manifolds, aimed at the evaluation of deep generative models.
Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence).
We demonstrate that the MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance.
arXiv Detail & Related papers (2021-06-08T00:30:43Z) - Bayesian multiscale deep generative model for the solution of
high-dimensional inverse problems [0.0]
A novel multiscale Bayesian inference approach is introduced based on deep probabilistic generative models.
The method allows high-dimensional parameter estimation while exhibiting stability, efficiency and accuracy.
arXiv Detail & Related papers (2021-02-04T11:47:21Z) - Shaping Deep Feature Space towards Gaussian Mixture for Visual
Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks in visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
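A minimal sketch of a Gaussian-mixture-style classification loss follows, assuming identity covariances, equal priors, and learnable class means that replace the usual final linear layer (so no parameters are added on top of a standard classifier); this is a generic reconstruction from the summary, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMixtureLoss(nn.Module):
    """Classify by squared distance to per-class means, with a margin on the
    true class and a likelihood term pulling features toward their class mean."""
    def __init__(self, feat_dim, num_classes, margin=0.3, reg_weight=0.1):
        super().__init__()
        self.means = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin
        self.reg_weight = reg_weight

    def forward(self, features, labels):
        dists = torch.cdist(features, self.means).pow(2)     # (batch, num_classes)
        onehot = F.one_hot(labels, dists.size(1)).float()
        # enlarge the true-class distance to impose a classification margin
        margin_dists = dists * (1.0 + self.margin * onehot)
        # posterior over classes under an equal-prior, identity-covariance mixture
        ce = F.cross_entropy(-0.5 * margin_dists, labels)
        # likelihood regularization: features should lie near their class mean
        reg = (dists * onehot).sum(dim=1).mean()
        return ce + self.reg_weight * reg
```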
arXiv Detail & Related papers (2020-11-18T03:32:27Z) - Deep Autoencoding Topic Model with Scalable Hybrid Bayesian Inference [55.35176938713946]
We develop a deep autoencoding topic model (DATM) that uses a hierarchy of gamma distributions to construct its multi-stochastic-layer generative network.
We propose a Weibull upward-downward variational encoder that deterministically propagates information upward via a deep neural network, followed by a downward generative model.
The efficacy and scalability of our models are demonstrated on both unsupervised and supervised learning tasks on big corpora.
arXiv Detail & Related papers (2020-06-15T22:22:56Z) - Multiscale Deep Equilibrium Models [162.15362280927476]
We propose a new class of implicit networks, the multiscale deep equilibrium model (MDEQ).
An MDEQ directly solves for and backpropagates through the equilibrium points of multiple feature resolutions simultaneously.
We illustrate the effectiveness of this approach on two large-scale vision tasks: ImageNet classification and semantic segmentation on high-resolution images from the Cityscapes dataset.
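The equilibrium idea can be illustrated with a toy forward pass: hold features at several resolutions and iterate z <- f(z, x) until an approximate fixed point is reached. Real MDEQs use quasi-Newton root finding and implicit differentiation for the backward pass; the naive iteration below is a hedged sketch, not the authors' solver.

```python
import torch

def equilibrium_forward(f, x, z_shapes, iters=30, tol=1e-4):
    """Find z* with z* = f(z*, x) by naive fixed-point iteration, where z is a
    list of feature tensors, one per resolution (illustrative sketch only)."""
    z = [torch.zeros(s) for s in z_shapes]          # init features at every scale
    for _ in range(iters):
        z_next = f(z, x)                            # joint multi-resolution update
        rel_change = max(float((a - b).norm() / (b.norm() + 1e-8))
                         for a, b in zip(z_next, z))
        z = z_next
        if rel_change < tol:                        # converged to the equilibrium
            break
    return z
```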
arXiv Detail & Related papers (2020-06-15T18:07:44Z)