Fuzzy Generative Adversarial Networks
- URL: http://arxiv.org/abs/2110.14588v1
- Date: Wed, 27 Oct 2021 17:05:06 GMT
- Title: Fuzzy Generative Adversarial Networks
- Authors: Ryan Nguyen, Shubhendu Kumar Singh, and Rahul Rai
- Abstract summary: Generative Adversarial Networks (GANs) are well-known tools for data generation and semi-supervised classification.
This paper introduces techniques that improve GANs' regression capability, as measured by mean absolute error (MAE) and mean squared error (MSE).
We show that adding a fuzzy logic layer can enhance a GAN's ability to perform regression; the most desirable injection location is problem-specific.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative Adversarial Networks (GANs) are well-known tools for data
generation and semi-supervised classification. With less labeled data, GANs
outperform Deep Neural Networks (DNNs) and Convolutional Neural Networks (CNNs)
in classification across various tasks, which shows promise for extending GANs
into the domain of semi-supervised regression. However, developing GANs for
regression introduces two major challenges: (1) inherent
instability in the GAN formulation and (2) performing regression and achieving
stability simultaneously. This paper introduces techniques that show
improvement in the GANs' regression capability through mean absolute error
(MAE) and mean squared error (MSE). We embed a differentiable fuzzy logic system
at multiple locations in a GAN because fuzzy logic systems have demonstrated
high efficacy in classification and regression settings. The fuzzy logic layer
takes the output of the generator, the discriminator, or both, and uses it to
predict the output $y$, to evaluate the generator's performance, or to do both. We
outline the results of applying the fuzzy logic system to CGAN and summarize
each approach's efficacy. This paper shows that adding a fuzzy logic layer can
enhance a GAN's ability to perform regression; the most desirable injection
location is problem-specific, and we show this through experiments over various
datasets. In addition, we demonstrate empirically that the fuzzy-infused GAN is
competitive with DNNs.
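As a rough illustration of what such an injection could look like, the sketch below implements a minimal differentiable fuzzy inference layer (Gaussian membership functions, product t-norm, Takagi-Sugeno-style rule consequents) attached to discriminator features to predict $y$. All names, shapes, and the injection point are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FuzzyLayer(nn.Module):
    """Minimal differentiable fuzzy inference layer (illustrative sketch).

    Each of `n_rules` rules has one Gaussian membership function per input
    dimension; firing strengths (product t-norm) are normalized and combined
    with learned Takagi-Sugeno-style linear consequents into a scalar output.
    """

    def __init__(self, in_dim: int, n_rules: int = 8):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, in_dim))
        self.log_sigmas = nn.Parameter(torch.zeros(n_rules, in_dim))
        self.consequents = nn.Linear(in_dim, n_rules)  # one linear model per rule

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        diff = x.unsqueeze(1) - self.centers.unsqueeze(0)    # (B, rules, dim)
        sigma = self.log_sigmas.exp().unsqueeze(0)
        membership = torch.exp(-0.5 * (diff / sigma) ** 2)   # Gaussian MFs
        strength = membership.prod(dim=-1)                   # product t-norm
        strength = strength / (strength.sum(dim=-1, keepdim=True) + 1e-8)
        rule_out = self.consequents(x)                       # (B, rules)
        return (strength * rule_out).sum(dim=-1, keepdim=True)

# Hypothetical injection point: fuzzify the discriminator's penultimate
# features to predict y alongside the usual real/fake score.
features = torch.randn(32, 16)             # stand-in for discriminator features
y_pred = FuzzyLayer(in_dim=16)(features)   # (32, 1) regression estimate
```

Because every operation is differentiable, such a layer can be trained end-to-end with the MAE/MSE regression losses, at whichever injection location suits the problem.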
Related papers
- DiffGAN: A Test Generation Approach for Differential Testing of Deep Neural Networks [0.30292136896203486]
DiffGAN is a black-box test image generation approach for differential testing of Deep Neural Networks (DNNs).
It generates diverse and valid triggering inputs that reveal behavioral discrepancies between models.
Our results show DiffGAN significantly outperforms a SOTA baseline, generating four times more triggering inputs, with greater diversity and validity, within the same budget.
arXiv Detail & Related papers (2024-10-15T23:49:01Z)
- Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance [34.322201578399394]
Multi-Agent Reinforcement Learning (MARL) struggles with sample inefficiency and poor generalization.
We present Exploration-enhanced Equivariant Graph Neural Networks (E2GN2).
E2GN2 demonstrates a significant improvement in sample efficiency, greater final reward convergence, and a 2x-5x gain over standard GNNs in our tests.
arXiv Detail & Related papers (2024-10-03T15:25:37Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability degrades when there are distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Improved Convergence Guarantees for Shallow Neural Networks [91.3755431537592]
We prove convergence of depth-2 neural networks, trained via gradient descent, to a global minimum.
Our model has the following features: regression with a quadratic loss function, fully connected feedforward architecture, ReLU activations, Gaussian data instances, and adversarial labels.
The results strongly suggest that, at least in our model, the convergence phenomenon extends well beyond the NTK regime.
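For concreteness, that setting amounts to gradient descent on the empirical quadratic loss of a two-layer ReLU network, with Gaussian inputs $x_i$ and adversarially chosen labels $y_i$; the notation below is ours, not necessarily the paper's.

```latex
% Depth-2 ReLU network and the empirical quadratic loss it is trained on
% (illustrative notation, not taken from the paper).
f_\theta(x) = \sum_{j=1}^{m} a_j \,\max\!\bigl(w_j^{\top} x + b_j,\ 0\bigr),
\qquad
L(\theta) = \frac{1}{n} \sum_{i=1}^{n} \bigl(f_\theta(x_i) - y_i\bigr)^{2}
```

The claim is that, despite the adversarial labels, gradient descent on $L(\theta)$ still reaches a global minimum.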
arXiv Detail & Related papers (2022-12-05T14:47:52Z)
- On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias [50.84569563188485]
We show that gradient flow converges in direction when labels are determined by the sign of a target network with $r$ neurons.
Our result may already hold for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
arXiv Detail & Related papers (2022-05-18T16:57:10Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
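Self-ensembling of a teacher and a student is commonly realized with an exponential moving average (EMA) of the student's weights; below is a minimal sketch of that update, assuming SE-GAN follows the usual mean-teacher pattern (our assumption, not code from the paper).

```python
import copy
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module,
               decay: float = 0.999) -> None:
    """Move each teacher parameter toward the student's by an EMA step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

# Usage: call after every student optimizer step during adversarial training.
student = torch.nn.Linear(8, 2)     # stand-in for the segmentation network
teacher = copy.deepcopy(student)    # teacher starts as a frozen copy
ema_update(teacher, student)
```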
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Inferential Wasserstein Generative Adversarial Networks [9.859829604054127]
We introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs.
The iWGAN greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample.
arXiv Detail & Related papers (2021-09-13T00:43:21Z)
- Understanding Overparameterization in Generative Adversarial Networks [56.57403335510056]
Training Generative Adversarial Networks (GANs) requires solving non-concave min-max optimization problems.
Prior theory has highlighted the importance of gradient descent-ascent (GDA) dynamics for reaching globally optimal solutions.
We show that in an overparameterized GAN with a $1$-layer neural network generator and a linear discriminator, GDA converges to a global saddle point of the underlying non-concave min-max problem.
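For reference, simultaneous gradient descent-ascent takes a descent step for the min player and an ascent step for the max player on the same objective; a generic sketch (not the paper's specific 1-layer setup) follows.

```python
import torch

def gda_step(u: torch.Tensor, v: torch.Tensor, f, lr: float = 0.05) -> None:
    """One simultaneous GDA step: u descends on f while v ascends on f."""
    loss = f(u, v)
    gu, gv = torch.autograd.grad(loss, [u, v])
    with torch.no_grad():
        u -= lr * gu   # min player (generator role)
        v += lr * gv   # max player (discriminator role)

# Toy bilinear objective f(u, v) = u*v with its saddle point at (0, 0).
# Plain simultaneous GDA spirals outward on this problem, which is exactly
# why convergence guarantees like this paper's are nontrivial.
u = torch.tensor(1.0, requires_grad=True)
v = torch.tensor(1.0, requires_grad=True)
for _ in range(5):
    gda_step(u, v, lambda a, b: a * b)
```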
arXiv Detail & Related papers (2021-04-12T16:23:37Z)
- DO-GAN: A Double Oracle Framework for Generative Adversarial Networks [28.904057977044374]
We propose a new approach to train Generative Adversarial Networks (GANs).
We deploy a double-oracle framework using the generator and discriminator oracles.
We apply our framework to established GAN architectures such as vanilla GAN, Deep Convolutional GAN, Spectral Normalization GAN and Stacked GAN.
arXiv Detail & Related papers (2021-02-17T05:11:18Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task if the topological randomness is not properly accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
- xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems [16.360144499713524]
Generative Adversarial Networks (GANs) are a revolutionary class of Deep Neural Networks (DNNs) that have been successfully used to generate realistic images, music, text, and other data.
We propose a new class of GAN that leverages recent advances in explainable AI (xAI) systems to provide a "richer" form of corrective feedback from discriminators to generators.
We observe that xAI-GANs provide an improvement of up to 23.18% in the quality of generated images on both the MNIST and FMNIST datasets over standard GANs.
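One simple way to realize such "richer" feedback (our guess at the mechanism, using a basic saliency explanation as a stand-in for whatever xAI system the paper employs) is to reweight the gradient flowing from the discriminator back to the generator:

```python
import torch
import torch.nn as nn

# Hypothetical sketch, not the paper's exact algorithm: modulate the
# corrective gradient reaching the generator by a saliency explanation
# |dD(x)/dx| of the discriminator's score.
D = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))
fake = torch.randn(4, 16, requires_grad=True)   # generator output stand-in

sal = torch.autograd.grad(D(fake).sum(), fake)[0].abs()
weight = 1.0 + sal / (sal.amax(dim=1, keepdim=True) + 1e-8)

# The hook scales the incoming gradient elementwise before it reaches fake.
fake.register_hook(lambda grad: grad * weight)
loss = -torch.log(torch.sigmoid(D(fake)) + 1e-8).mean()  # non-saturating loss
loss.backward()   # fake.grad now carries explanation-weighted feedback
```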
arXiv Detail & Related papers (2020-02-24T18:38:13Z)