Unsupervised Domain Transfer with Conditional Invertible Neural Networks
- URL: http://arxiv.org/abs/2303.10191v1
- Date: Fri, 17 Mar 2023 18:00:27 GMT
- Title: Unsupervised Domain Transfer with Conditional Invertible Neural Networks
- Authors: Kris K. Dreher, Leonardo Ayala, Melanie Schellenberg, Marco Hübner,
Jan-Hinrich Nölke, Tim J. Adler, Silvia Seidlitz, Jan Sellner, Alexander
Studier-Fischer, Janek Gröhl, Felix Nickel, Ullrich Köthe, Alexander Seitel,
Lena Maier-Hein
- Abstract summary: We propose a domain transfer approach based on conditional invertible neural networks (cINNs)
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
- Score: 83.90291882730925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Synthetic medical image generation has evolved as a key technique for neural
network training and validation. A core challenge, however, remains in the
domain gap between simulations and real data. While deep learning-based domain
transfer using Cycle Generative Adversarial Networks and similar architectures
has led to substantial progress in the field, there are use cases in which
state-of-the-art approaches still fail to generate training images that produce
convincing results on relevant downstream tasks. Here, we address this issue
with a domain transfer approach based on conditional invertible neural networks
(cINNs). As a particular advantage, our method inherently guarantees cycle
consistency through its invertible architecture, and network training can be
conducted efficiently with maximum likelihood. To showcase our
method's generic applicability, we apply it to two spectral imaging modalities
at different scales, namely hyperspectral imaging (pixel-level) and
photoacoustic tomography (image-level). According to comprehensive experiments,
our method enables the generation of realistic spectral data and outperforms
the state of the art on two downstream classification tasks (binary and
multi-class). cINN-based domain transfer could thus evolve as an important
method for realistic synthetic data generation in the field of spectral imaging
and beyond.
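To make the mechanism concrete, the following toy sketch illustrates the idea of a conditional invertible coupling block with maximum-likelihood training. It is an illustration under assumptions (PyTorch, 1-D spectral vectors, a two-dimensional domain code), not the authors' implementation: because the block is exactly invertible, a likelihood loss on the latent code suffices for training, and any transfer cycle reconstructs its input by construction.

import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """One invertible affine coupling block, conditioned on a domain code c."""
    def __init__(self, dim, cond_dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        # Subnetwork predicting scale and shift for the second half of x
        # from the first half and the condition.
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, c):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, c], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                    # bounded log-scale keeps the map stable
        z2 = x2 * torch.exp(s) + t
        log_det = s.sum(dim=1)               # log |det J| of the affine transform
        return torch.cat([x1, z2], dim=1), log_det

    def inverse(self, z, c):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(torch.cat([z1, c], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (z2 - t) * torch.exp(-s)
        return torch.cat([z1, x2], dim=1)

def nll(z, log_det):
    """Maximum-likelihood objective under a standard-normal latent prior."""
    log_pz = -0.5 * (z ** 2).sum(dim=1)
    return -(log_pz + log_det).mean()

# Hypothetical usage: encode simulated spectra under the "simulated" condition,
# decode under the "real" condition to transfer domains, and verify that the
# full cycle reconstructs the input exactly (cycle consistency by invertibility).
block = ConditionalAffineCoupling(dim=16, cond_dim=2)
x_sim = torch.randn(8, 16)                        # stand-in for simulated spectra
c_sim = torch.tensor([[1.0, 0.0]]).repeat(8, 1)   # domain code: simulated
c_real = torch.tensor([[0.0, 1.0]]).repeat(8, 1)  # domain code: real

z, log_det = block(x_sim, c_sim)                  # training signal: nll(z, log_det)
x_transferred = block.inverse(z, c_real)          # simulated -> "real" domain
z_back, _ = block(x_transferred, c_real)          # back to latent space
x_cycle = block.inverse(z_back, c_sim)            # back to the simulated domain
assert torch.allclose(x_sim, x_cycle, atol=1e-5)

In a full cINN, many such coupling blocks would be stacked and the condition derived from richer domain information; the invertibility argument above carries over unchanged.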
Related papers
- Self-supervised Vision Transformer are Scalable Generative Models for Domain Generalization [0.13108652488669734]
We propose a novel generative method for domain generalization in histopathology images.
Our method employs a generative, self-supervised Vision Transformer to dynamically extract characteristics of image patches.
Experiments conducted on two distinct histopathology datasets demonstrate the effectiveness of our proposed approach.
arXiv Detail & Related papers (2024-07-03T08:20:27Z) - Is Synthetic Image Useful for Transfer Learning? An Investigation into Data Generation, Volume, and Utilization [62.157627519792946]
We introduce a novel framework called bridged transfer, which initially employs synthetic images for fine-tuning a pre-trained model to improve its transferability.
We propose dataset style inversion strategy to improve the stylistic alignment between synthetic and real images.
Our proposed methods are evaluated across 10 different datasets and 5 distinct models, demonstrating consistent improvements.
arXiv Detail & Related papers (2024-03-28T22:25:05Z) - A2V: A Semi-Supervised Domain Adaptation Framework for Brain Vessel Segmentation via Two-Phase Training Angiography-to-Venography Translation [4.452428104996953]
We present a semi-supervised domain adaptation framework for brain vessel segmentation from different image modalities.
By relying on annotated angiographies and a limited number of annotated venographies, our framework accomplishes image-to-image translation and semantic segmentation.
arXiv Detail & Related papers (2023-09-12T09:12:37Z) - Photo-realistic Neural Domain Randomization [37.42597274391271]
We show that the recent progress in neural rendering enables a new unified approach we call Photo-realistic Neural Domain Randomization (PNDR)
Our approach is modular, composed of different neural networks for materials, lighting, and rendering, thus enabling randomization of different key image generation components in a differentiable pipeline.
Our experiments show that training with PNDR enables generalization to novel scenes and significantly outperforms the state of the art in terms of real-world transfer.
arXiv Detail & Related papers (2022-10-23T09:45:27Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Deep Domain Adversarial Adaptation for Photon-efficient Imaging Based on
Spatiotemporal Inception Network [11.58898808789911]
In single-photon LiDAR, photon-efficient imaging captures the 3D structure of a scene from only a few signal photons detected per pixel.
Existing deep learning models for this task are trained on simulated datasets, which poses the domain shift challenge when applied to realistic scenarios.
We propose a spatiotemporal inception network (STIN) for photon-efficient imaging, which precisely predicts depth from a sparse, high-noise photon-counting histogram by fully exploiting spatial and temporal information.
arXiv Detail & Related papers (2022-01-07T14:51:48Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image
Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Joint Learning of Neural Transfer and Architecture Adaptation for Image
Recognition [77.95361323613147]
Current state-of-the-art visual recognition systems rely on pretraining a neural network on a large-scale dataset and finetuning the network weights on a smaller dataset.
In this work, we show that dynamically adapting the network architecture to each domain task, together with weight finetuning, benefits both efficiency and effectiveness.
Our method can be easily generalized to an unsupervised paradigm by replacing supernet training with self-supervised learning in the source domain tasks and performing linear evaluation in the downstream tasks.
arXiv Detail & Related papers (2021-03-31T08:15:17Z) - Domain Generalization for Medical Imaging Classification with
Linear-Dependency Regularization [59.5104563755095]
We introduce a simple but effective approach to improve the generalization capability of deep neural networks in the field of medical imaging classification.
Motivated by the observation that the domain variability of the medical images is to some extent compact, we propose to learn a representative feature space through variational encoding.
arXiv Detail & Related papers (2020-09-27T12:30:30Z) - Data Augmentation via Mixed Class Interpolation using Cycle-Consistent
Generative Adversarial Networks Applied to Cross-Domain Imagery [16.870604081967866]
Machine learning driven object detection and classification within non-visible imagery has an important role in many fields.
However, such applications often suffer due to the limited quantity and variety of non-visible spectral domain imagery.
This paper proposes and evaluates a novel data augmentation approach that leverages the more readily available visible-band imagery.
arXiv Detail & Related papers (2020-05-05T18:53:38Z)