LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral
Image Generation with Variance Regularization
- URL: http://arxiv.org/abs/2305.00132v1
- Date: Sat, 29 Apr 2023 00:25:02 GMT
- Authors: Emmanuel Martinez, Roman Jacome, Alejandra Hernandez-Rojas and Henry
Arguello
- Abstract summary: Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high dimensionality of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
- Score: 72.4394510913927
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods are state-of-the-art for spectral image (SI)
computational tasks. However, the performance of these methods is constrained
by the limited availability of datasets, owing to the expensive and lengthy
acquisition process. Usually, data augmentation techniques are employed
to mitigate the lack of data. Surpassing classical augmentation methods, such
as geometric transformations, GANs enable diverse augmentation by learning and
sampling from the data distribution. Nevertheless, GAN-based SI generation is
challenging since the high dimensionality of this kind of data hinders the
convergence of GAN training, yielding suboptimal generation. To surmount this
limitation, we propose the low-dimensional GAN (LD-GAN), where we train the
GAN on a low-dimensional representation of the dataset given by the latent
space of a pretrained autoencoder network. Thus, we generate new
low-dimensional samples which are then mapped to the SI dimension with the
pretrained decoder network. Additionally, we propose a statistical regularization to
control the low-dimensional representation variance for the autoencoder
training and to achieve high diversity of samples generated with the GAN. We
validate our method LD-GAN as a data augmentation strategy for compressive
spectral imaging, SI super-resolution, and RGB-to-spectral tasks, with
improvements ranging from 0.5 to 1 dB in each task. We compare against
training without data augmentation, traditional DA, and the same GAN adjusted
and trained to generate the full-sized SIs. The code of this paper can be
found at https://github.com/romanjacome99/LD_GAN.git
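The pipeline the abstract describes (train an autoencoder with a variance regularizer on its latent codes, train the GAN in that latent space, decode generated latents back to spectral images) can be sketched minimally. This is an illustrative NumPy sketch only: the linear encoder/decoder, `target_var`, and the weight `lam` are assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_enc):
    # Hypothetical linear encoder: flattened spectral image -> latent code.
    return x @ W_enc

def decode(z, W_dec):
    # Hypothetical linear decoder: latent code -> flattened spectral image.
    return z @ W_dec

def ae_loss(x, W_enc, W_dec, target_var=1.0, lam=0.1):
    # Reconstruction error plus a statistical regularizer that pushes the
    # per-dimension variance of the latent codes toward `target_var`,
    # in the spirit of the paper's variance regularization.
    z = encode(x, W_enc)
    rec_err = np.mean((x - decode(z, W_dec)) ** 2)
    var_penalty = np.mean((z.var(axis=0) - target_var) ** 2)
    return rec_err + lam * var_penalty

# Toy data: 32 "spectral images" flattened to 64 values, with an 8-D latent
# space. The GAN would be trained on the 8-D codes, and generated codes
# mapped back to SI space with the frozen decoder.
x = rng.normal(size=(32, 64))
W_enc = 0.1 * rng.normal(size=(64, 8))
W_dec = 0.1 * rng.normal(size=(8, 64))
loss = ae_loss(x, W_enc, W_dec)
```

Controlling the latent variance this way is what lets the GAN sample diversely in the low-dimensional space instead of fighting the full SI dimensionality.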
Related papers
- GIFD: A Generative Gradient Inversion Method with Feature Domain
Optimization [52.55628139825667]
Federated Learning (FL) has emerged as a promising distributed machine learning framework to preserve clients' privacy.
Recent studies find that an attacker can invert the shared gradients and recover sensitive data against an FL system by leveraging pre-trained generative adversarial networks (GAN) as prior knowledge.
We propose Gradient Inversion over Feature Domains (GIFD), which disassembles the GAN model and searches the feature domains of the intermediate layers.
arXiv Detail & Related papers (2023-08-09T04:34:21Z)
- Generative adversarial networks for data-scarce spectral applications [0.0]
We report on an application of GANs in the domain of synthetic spectral data generation.
We show that CWGANs can act as a surrogate model with improved performance in the low-data regime.
arXiv Detail & Related papers (2023-07-14T16:27:24Z)
- Latent Space is Feature Space: Regularization Term for GANs Training on Limited Dataset [1.8634083978855898]
The author proposes an additional structure and loss function for GANs, called LFM, trained to maximize feature diversity across the dimensions of the latent space.
In experiments, the system is built upon DCGAN and shown to improve Fréchet Inception Distance (FID) when training from scratch on the CelebA dataset.
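One plausible reading of "feature diversity between the dimensions of the latent space" is a penalty on correlation between latent dimensions. The sketch below is an assumption about that mechanism for illustration, not the LFM loss from the paper.

```python
import numpy as np

def feature_diversity_penalty(z):
    # Sum of squared off-diagonal entries of the covariance of the latent
    # codes: zero when latent dimensions are uncorrelated, large when
    # dimensions duplicate each other. z has shape (batch, latent_dim).
    zc = z - z.mean(axis=0, keepdims=True)
    cov = (zc.T @ zc) / (len(z) - 1)
    off_diag = cov - np.diag(np.diag(cov))
    return float(np.sum(off_diag ** 2))

# Perfectly correlated dimensions are penalized ...
z_dup = np.array([[1.0, 1.0], [-1.0, -1.0], [2.0, 2.0], [-2.0, -2.0]])
# ... while uncorrelated dimensions are not.
z_ind = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
```

Adding such a term to the generator's loss would push different latent dimensions to encode distinct features.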
arXiv Detail & Related papers (2022-10-28T16:34:48Z)
- ScoreMix: A Scalable Augmentation Strategy for Training GANs with Limited Data [93.06336507035486]
Generative Adversarial Networks (GANs) typically suffer from overfitting when limited training data is available.
We present ScoreMix, a novel and scalable data augmentation approach for various image synthesis tasks.
arXiv Detail & Related papers (2022-10-27T02:55:15Z)
- Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data [125.7135706352493]
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images.
Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting.
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator.
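The APA idea as summarized above can be sketched as follows: with some adaptive probability, generator output is shown to the discriminator labeled as real, softening discriminator overfitting. The adaptation rule and step size here are simplified assumptions, not the paper's exact heuristic.

```python
import random

def discriminator_real_input(real_batch, fake_batch, p, sample=random.random):
    # With probability p, present generator output to the discriminator
    # labeled as "real", deceiving it just enough to curb overfitting
    # when real data is scarce.
    return fake_batch if sample() < p else real_batch

def adapt_p(p, overfit_signal, step=0.01):
    # Raise p when the discriminator shows signs of overfitting
    # (positive signal), lower it otherwise; keep p in [0, 1].
    return min(1.0, max(0.0, p + (step if overfit_signal > 0 else -step)))
```

A positive `overfit_signal` could, for instance, be derived from how confidently the discriminator separates real from generated batches.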
arXiv Detail & Related papers (2021-11-12T18:13:45Z)
- Improving Generative Adversarial Networks with Local Coordinate Coding [150.24880482480455]
Generative adversarial networks (GANs) have shown remarkable success in generating realistic data from some predefined prior distribution.
In practice, semantic information might be represented by some latent distribution learned from data.
We propose an LCCGAN model with local coordinate coding (LCC) to improve the performance of generating data.
arXiv Detail & Related papers (2020-07-28T09:17:50Z)
- Lessons Learned from the Training of GANs on Artificial Datasets [0.0]
Generative Adversarial Networks (GANs) have made great progress in synthesizing realistic images in recent years.
GANs are prone to underfitting or overfitting, making their analysis difficult and constrained.
We train them on artificial datasets where there are infinitely many samples and the real data distributions are simple.
We find that training mixtures of GANs leads to more performance gain compared to increasing the network depth or width.
arXiv Detail & Related papers (2020-07-13T14:51:02Z)
- On Leveraging Pretrained GANs for Generation with Limited Data [83.32972353800633]
Generative adversarial networks (GANs) can generate highly realistic images that are often indistinguishable (by humans) from real images.
Most images so generated are not contained in a training dataset, suggesting potential for augmenting training sets with GAN-generated data.
We leverage existing GAN models pretrained on large-scale datasets to introduce additional knowledge, following the concept of transfer learning.
An extensive set of experiments is presented to demonstrate the effectiveness of the proposed techniques on generation with limited data.
arXiv Detail & Related papers (2020-02-26T21:53:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.