Flow-based Generative Models for Learning Manifold to Manifold Mappings
- URL: http://arxiv.org/abs/2012.10013v2
- Date: Mon, 1 Mar 2021 17:28:12 GMT
- Title: Flow-based Generative Models for Learning Manifold to Manifold Mappings
- Authors: Xingjian Zhen, Rudrasis Chakraborty, Liu Yang, Vikas Singh
- Abstract summary: We introduce three kinds of invertible layers for manifold-valued data whose functionality is analogous to the corresponding layers in flow-based generative models.
We show promising results where we can reliably and accurately reconstruct brain images of a field of orientation distribution functions.
- Score: 39.60406116984869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many measurements or observations in computer vision and machine learning
manifest as non-Euclidean data. While recent proposals (like spherical CNN)
have extended a number of deep neural network architectures to manifold-valued
data, and this has often provided strong improvements in performance, the
literature on generative models for manifold data is quite sparse. Partly due
to this gap, there are also no modality transfer/translation models for
manifold-valued data whereas numerous such methods based on generative models
are available for natural images. This paper addresses this gap, motivated by a
need in brain imaging -- in doing so, we expand the operating range of certain
generative models (as well as generative models for modality transfer) from
natural images to images with manifold-valued measurements. Our main result is
the design of a two-stream version of GLOW (flow-based invertible generative
models) that can synthesize information of a field of one type of
manifold-valued measurements given another. On the theoretical side, we
introduce three kinds of invertible layers for manifold-valued data, which not only
mirror the functionality of the corresponding layers in flow-based generative models
(e.g., GLOW) but also preserve their key benefit: determinants of the Jacobian
remain easy to calculate. For experiments, on a large dataset from the Human
Connectome Project (HCP), we show promising results where we can reliably and
accurately reconstruct brain images of a field of orientation distribution
functions (ODF) from diffusion tensor images (DTI), where the latter has a
$5\times$ faster acquisition time but at the expense of worse angular
resolution.
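To make the appeal of invertible layers concrete, below is a minimal sketch of a standard Euclidean GLOW-style affine coupling layer, not the paper's manifold-valued layers, illustrating why flow-based models keep the Jacobian log-determinant cheap: the transform is triangular, so the log-determinant is simply the sum of the predicted log-scales. The class and variable names (AffineCoupling, log_s, t) are illustrative assumptions, not taken from the authors' code.

```python
# Minimal GLOW-style affine coupling layer (Euclidean case), for illustration only.
# The paper's contribution is a manifold-valued analogue of such layers; this sketch
# only shows why the Jacobian log-determinant of a coupling layer is cheap to compute.
import math
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        # Small network predicting a per-dimension log-scale and shift
        # from the first half of the input.
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)           # bound the scales for numerical stability
        y2 = x2 * torch.exp(log_s) + t      # element-wise affine map, hence invertible
        log_det = log_s.sum(dim=1)          # Jacobian is triangular: log|det J| = sum(log_s)
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)

# Exact log-likelihood under a standard-normal base density via change of variables.
layer = AffineCoupling(dim=6)
x = torch.randn(8, 6)
z, log_det = layer(x)
log_prob = -0.5 * (z ** 2).sum(dim=1) - 0.5 * z.shape[1] * math.log(2 * math.pi) + log_det
```

According to the abstract, the paper's layers replace this Euclidean affine map with operations that keep the output on the manifold (e.g., fields of ODFs or diffusion tensors) while preserving an analogously tractable log-determinant.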
Related papers
- Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) are a contemporary class of generative models with exceptional generative quality.
We build a novel generative model for link prediction, using a dedicated design that decomposes the likelihood estimation process via Bayes' rule.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z) - Heat Death of Generative Models in Closed-Loop Learning [63.83608300361159]
We study the learning dynamics of generative models that are fed back their own produced content in addition to their original training dataset.
We show that, unless a sufficient amount of external data is introduced at each iteration, any non-trivial temperature leads the model to degenerate.
arXiv Detail & Related papers (2024-04-02T21:51:39Z) - Trajectory-aware Principal Manifold Framework for Data Augmentation and
Image Generation [5.31812036803692]
Many existing methods generate new samples from a parametric distribution, such as a Gaussian, with little attention to generating samples along the data manifold in either the input or feature space.
We propose a novel trajectory-aware principal manifold framework to restore the manifold backbone and generate samples along a specific trajectory.
We show that the novel framework is able to extract a more compact manifold representation, improve classification accuracy, and generate smooth transformations among a few samples.
arXiv Detail & Related papers (2023-07-30T07:31:38Z) - VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means that the latent space provides an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and their accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z) - A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z) - Manifold Topology Divergence: a Framework for Comparing Data Manifolds [109.0784952256104]
We develop a framework for comparing data manifolds, aimed at the evaluation of deep generative models.
Based on the Cross-Barcode, we introduce the Manifold Topology Divergence score (MTop-Divergence).
We demonstrate that the MTop-Divergence accurately detects various degrees of mode-dropping, intra-mode collapse, mode invention, and image disturbance.
arXiv Detail & Related papers (2021-06-08T00:30:43Z) - BPLF: A Bi-Parallel Linear Flow Model for Facial Expression Generation
from Emotion Set Images [0.0]
A flow-based generative model is a deep generative model that obtains the ability to generate data by explicitly learning the data distribution.
In this paper, a bi-parallel linear flow model for facial emotion generation from emotion set images is constructed.
This paper curates the current public datasets of facial emotion images into a new emotion dataset and validates the model on it.
arXiv Detail & Related papers (2021-05-27T09:37:09Z) - Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z) - Flows for simultaneous manifold learning and density estimation [12.451050883955071]
Manifold-learning flows (M-flows) represent datasets with a manifold structure more faithfully.
M-flows learn the data manifold and allow for better inference than standard flows in the ambient data space.
arXiv Detail & Related papers (2020-03-31T02:07:48Z)