xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems
- URL: http://arxiv.org/abs/2002.10438v3
- Date: Tue, 29 Mar 2022 15:59:29 GMT
- Title: xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems
- Authors: Vineel Nagisetty, Laura Graves, Joseph Scott and Vijay Ganesh
- Abstract summary: Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data.
We propose a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators.
We observe xAI-GANs provide an improvement of up to 23.18% in the quality of generated images on both MNIST and FMNIST datasets over standard GANs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Adversarial Networks (GANs) are a revolutionary class of Deep
Neural Networks (DNNs) that have been successfully used to generate realistic
images, music, text, and other data. However, GAN training presents many
challenges; notably, it can be very resource-intensive. A further weakness of
GANs is that they require a large amount of data for successful training, and
data collection can be an expensive process. Typically, the corrective feedback
from discriminator DNNs to generator DNNs (namely, the discriminator's
assessment of the generated example) is calculated using only one real-numbered
value (the loss).
By contrast, we propose a new class of GAN we refer to as xAI-GAN that
leverages recent advances in explainable AI (xAI) systems to provide a "richer"
form of corrective feedback from discriminators to generators. Specifically, we
modify the gradient descent process using xAI systems that specify the reason
as to why the discriminator made the classification it did, thus providing the
"richer" corrective feedback that helps the generator to better fool the
discriminator. Using our approach, we observe that xAI-GANs provide an
improvement of up to 23.18% in the quality of generated images on both the
MNIST and FMNIST datasets over standard GANs, as measured by Fréchet Inception
Distance (FID). We further compare xAI-GAN trained on 20% of the data with a
standard GAN trained on 100% of the data on the CIFAR10 dataset and find that
xAI-GAN still shows an improvement in FID score. Further, we compare our work
with Differentiable
Augmentation - which has been shown to make GANs data-efficient - and show that
xAI-GANs outperform GANs trained on Differentiable Augmentation. Moreover, both
techniques can be combined to produce even better results. Finally, we argue
that xAI-GAN enables users greater control over how models learn than standard
GANs.
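The core idea of the abstract, replacing the single scalar of corrective feedback with per-feature feedback weighted by an explanation of the discriminator's decision, can be sketched in a few lines. The NumPy sketch below is an illustration, not the paper's implementation: `attribution` stands in for the output of whatever xAI system explains the discriminator's classification, and the blending factor `alpha` is a hypothetical hyperparameter introduced here for clarity.

```python
import numpy as np

def xai_guided_gradient(loss_grad, attribution, alpha=0.5):
    """Scale the generator's loss gradient by an explanation mask.

    loss_grad:   gradient of the discriminator's loss w.r.t. the
                 generated example (same shape as the example)
    attribution: per-feature importance scores from an xAI system
                 explaining the discriminator's decision (same shape)
    alpha:       hypothetical blending factor; alpha = 0 recovers the
                 standard, unmodified gradient
    """
    # Normalize attribution magnitudes to [0, 1] so they act as a soft mask.
    mask = np.abs(attribution)
    mask = mask / (mask.max() + 1e-8)
    # Emphasize the features the discriminator relied on, so the generator
    # receives "richer" feedback about *why* it was caught.
    return loss_grad * ((1.0 - alpha) + alpha * mask)

# Toy usage: a 2x2 "image" where the discriminator focused on one pixel.
grad = np.ones((2, 2))
attr = np.array([[4.0, 0.0],
                 [0.0, 0.0]])
guided = xai_guided_gradient(grad, attr, alpha=0.5)
```

Setting `alpha = 0` recovers the standard single-value feedback, so a modification of this shape can be switched on or off during training, which is one way to read the paper's claim that xAI-GAN gives users greater control over how the model learns.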
Related papers
- SMaRt: Improving GANs with Score Matching Regularity [94.81046452865583]
Generative adversarial networks (GANs) usually struggle in learning from highly diverse data, whose underlying manifold is complex.
We show that score matching serves as a promising solution to this issue thanks to its capability of persistently pushing the generated data points towards the real data manifold.
We propose to improve the optimization of GANs with score matching regularity (SMaRt).
arXiv Detail & Related papers (2023-11-30T03:05:14Z) - LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
arXiv Detail & Related papers (2023-04-29T00:25:02Z) - How far generated data can impact Neural Networks performance? [2.578242050187029]
We consider how far generated data can aid real data in improving the performance of Neural Networks.
In our experiments, we find that adding five times more synthetic data to the real FEs dataset increases accuracy by 16%.
arXiv Detail & Related papers (2023-03-27T14:02:43Z) - Augmentation-Aware Self-Supervision for Data-Efficient GAN Training [68.81471633374393]
Training generative adversarial networks (GANs) with limited data is challenging because the discriminator is prone to overfitting.
We propose a novel augmentation-aware self-supervised discriminator that predicts the augmentation parameter of the augmented data.
We compare our method with state-of-the-art (SOTA) methods using the class-conditional BigGAN and unconditional StyleGAN2 architectures.
arXiv Detail & Related papers (2022-05-31T10:35:55Z) - Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z) - Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training [45.70113212633225]
Conditional Generative Adversarial Networks (cGAN) generate realistic images by incorporating class information into GAN.
One of the most popular cGANs is the auxiliary classifier GAN with softmax cross-entropy loss (ACGAN).
ACGAN also tends to generate easily classifiable samples with a lack of diversity.
arXiv Detail & Related papers (2021-11-01T17:51:33Z) - Fuzzy Generative Adversarial Networks [0.0]
Generative Adversarial Networks (GANs) are well-known tools for data generation and semi-supervised classification.
This paper introduces techniques that show improvement in the GANs' regression capability through mean absolute error (MAE) and mean squared error (MSE).
We show that adding a fuzzy logic layer can enhance GAN's ability to perform regression; the most desirable injection location is problem-specific.
arXiv Detail & Related papers (2021-10-27T17:05:06Z) - Class Balancing GAN with a Classifier in the Loop [58.29090045399214]
We introduce a novel theoretically motivated Class Balancing regularizer for training GANs.
Our regularizer makes use of the knowledge from a pre-trained classifier to ensure balanced learning of all the classes in the dataset.
We demonstrate the utility of our regularizer in learning representations for long-tailed distributions via achieving better performance than existing approaches over multiple datasets.
arXiv Detail & Related papers (2021-06-17T11:41:30Z) - Training GANs with Stronger Augmentations via Contrastive Discriminator [80.8216679195]
We introduce a contrastive representation learning scheme into the GAN discriminator, coined ContraD.
This "fusion" enables the discriminators to work with much stronger augmentations without increasing their training instability.
Our experimental results show that GANs with ContraD consistently improve FID and IS compared to other recent techniques incorporating data augmentations.
arXiv Detail & Related papers (2021-03-17T16:04:54Z) - On Data Augmentation for GAN Training [39.074761323958406]
We propose Data Augmentation Optimized for GAN (DAG) to enable the use of augmented data in GAN training.
We conduct experiments to apply DAG to different GAN models.
When DAG is used in some GAN models, the system establishes state-of-the-art Fréchet Inception Distance (FID) scores.
arXiv Detail & Related papers (2020-06-09T15:19:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.