Adaptive DropBlock Enhanced Generative Adversarial Networks for
Hyperspectral Image Classification
- URL: http://arxiv.org/abs/2201.08938v1
- Date: Sat, 22 Jan 2022 01:43:59 GMT
- Title: Adaptive DropBlock Enhanced Generative Adversarial Networks for
Hyperspectral Image Classification
- Authors: Junjie Wang, Feng Gao, Junyu Dong, Qian Du
- Abstract summary: We propose an Adaptive DropBlock-enhanced Generative Adversarial Network (ADGAN) for hyperspectral image (HSI) classification.
The discriminator in a GAN always contradicts itself and tries to associate fake labels with the minority-class samples, thus impairing the classification performance.
Experimental results on three HSI datasets demonstrated that the proposed ADGAN achieved superior performance over state-of-the-art GAN-based methods.
- Score: 36.679303770326264
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, hyperspectral image (HSI) classification based on generative
adversarial networks (GAN) has achieved great progress. GAN-based
classification methods can mitigate the limited training sample dilemma to some
extent. However, several studies have pointed out that existing GAN-based HSI
classification methods are heavily affected by the imbalanced training data
problem. The discriminator in a GAN always contradicts itself and tries to
associate fake labels with the minority-class samples, thus impairing the
classification performance. Another critical issue is the mode collapse in
GAN-based methods. The generator is only capable of producing samples within a
narrow scope of the data space, which severely hinders the advancement of
GAN-based HSI classification methods. In this paper, we propose an Adaptive
DropBlock-enhanced Generative Adversarial Network (ADGAN) for HSI
classification. First, to solve the imbalanced training data problem, we adjust
the discriminator to be a single classifier, so that it no longer contradicts itself.
Second, an adaptive DropBlock (AdapDrop) is proposed as a regularization method
employed in the generator and discriminator to alleviate the mode collapse
issue. AdapDrop generates drop masks with adaptive shapes instead of
fixed-size regions, alleviating the limitations of DropBlock in dealing
with ground objects of various shapes. Experimental results on three HSI
datasets demonstrated that the proposed ADGAN achieved superior performance
over state-of-the-art GAN-based methods. Our codes are available at
https://github.com/summitgao/HC_ADGAN
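The AdapDrop idea is concrete enough to sketch. Below is a minimal PyTorch take on a DropBlock-style layer whose drop regions vary in shape; adaptivity is approximated here by sampling a rectangular block height and width on each forward pass, which is an illustrative assumption rather than the paper's actual mechanism (the authors' implementation is in the linked repository).

```python
import torch
import torch.nn.functional as F
from torch import nn


class AdaptiveShapeDropBlock(nn.Module):
    """DropBlock-style regularizer whose drop regions are rectangles of
    varying shape rather than fixed-size squares.

    Illustrative sketch only: the AdapDrop in the paper may derive its
    mask shapes from the data rather than sampling them at random.
    """

    def __init__(self, drop_prob: float = 0.1, max_block: int = 7):
        super().__init__()
        self.drop_prob = drop_prob
        self.max_block = max_block

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.drop_prob <= 0.0:
            return x
        n, c, h, w = x.shape
        # Assumption: model "adaptive shape" by sampling a rectangular
        # block height/width on each call instead of a fixed square size.
        bh = int(torch.randint(1, min(self.max_block, h) + 1, (1,)))
        bw = int(torch.randint(1, min(self.max_block, w) + 1, (1,)))
        # Bernoulli rate for block seeds, scaled (as in the DropBlock
        # paper) so the expected fraction of dropped units ~= drop_prob.
        gamma = (self.drop_prob * h * w) / (bh * bw * (h - bh + 1) * (w - bw + 1))
        # Seeds mark block anchors, sampled only where a full block fits.
        seed = (torch.rand(n, c, h - bh + 1, w - bw + 1,
                           device=x.device) < gamma).float()
        # Dilate each seed into a bh x bw block with a sliding max,
        # padding so the mask comes out at the original spatial size.
        seed = F.pad(seed, (bw - 1, bw - 1, bh - 1, bh - 1))
        block = F.max_pool2d(seed, kernel_size=(bh, bw), stride=1)
        keep = 1.0 - block  # 1 where activations survive
        # Rescale kept activations to preserve the expected magnitude.
        return x * keep * (keep.numel() / keep.sum().clamp(min=1.0))
```

In the spirit of the abstract, such a layer would sit inside both the generator and the discriminator, e.g. nn.Sequential(conv, nn.ReLU(), AdaptiveShapeDropBlock(0.1), ...).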
Related papers
- Adaptive Margin Global Classifier for Exemplar-Free Class-Incremental Learning [3.4069627091757178]
Existing methods mainly focus on handling biased learning.
We introduce a Distribution-Based Global Classifier (DBGC) to avoid bias factors in existing methods, such as data imbalance and sampling.
More importantly, the compromised distributions of old classes are simulated via a simple operation, Variance Enlarging (VE).
The resulting loss is proven equivalent to an Adaptive Margin Softmax Cross Entropy (AMarX).
arXiv Detail & Related papers (2024-09-20T07:07:23Z)
- Generative Model Based Noise Robust Training for Unsupervised Domain Adaptation [108.11783463263328]
This paper proposes a Generative model-based Noise-Robust Training method (GeNRT).
It eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves performance comparable to state-of-the-art methods.
arXiv Detail & Related papers (2023-03-10T06:43:55Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Collapse by Conditioning: Training Class-conditional GANs with Limited Data [109.30895503994687]
We propose a training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning.
Our training strategy starts with an unconditional GAN and gradually injects conditional information into the generator and the objective function.
The proposed method for training cGANs with limited data results not only in stable training but also in generating high-quality images.
arXiv Detail & Related papers (2022-01-17T18:59:23Z)
- GAN Based Boundary Aware Classifier for Detecting Out-of-distribution Samples [24.572516991009323]
We propose a GAN-based boundary aware classifier (GBAC) for generating a closed hyperspace that contains most of the in-distribution (ID) data.
Our method is based on the fact that a traditional neural network separates the feature space into several unclosed regions, which are not suitable for OOD detection.
With GBAC as an auxiliary module, the OOD data distributed outside the closed hyperspace are assigned much lower scores, allowing more effective OOD detection.
arXiv Detail & Related papers (2021-12-22T03:35:54Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- Removing Class Imbalance using Polarity-GAN: An Uncertainty Sampling Approach [0.0]
We propose a Generative Adversarial Network (GAN) equipped with a generator network G, a discriminator network D, and a classifier network C to remove class imbalance in visual datasets.
We achieve state-of-the-art performance on extreme visual classification tasks on FashionMNIST, MNIST, SVHN, ExDark, the MVTec Anomaly dataset, the Chest X-Ray dataset, and others.
arXiv Detail & Related papers (2020-12-09T09:40:07Z)
- Conditional Wasserstein GAN-based Oversampling of Tabular Data for Imbalanced Learning [10.051309746913512]
We propose an oversampling method based on a conditional Wasserstein GAN.
We benchmark our method against standard oversampling methods and the imbalanced baseline on seven real-world datasets.
arXiv Detail & Related papers (2020-08-20T20:33:56Z)
- Simple and Effective Prevention of Mode Collapse in Deep One-Class Classification [93.2334223970488]
We propose two regularizers to prevent hypersphere collapse in deep SVDD.
The first regularizer is based on injecting random noise via the standard cross-entropy loss.
The second regularizer penalizes the minibatch variance when it becomes too small.
arXiv Detail & Related papers (2020-01-24T03:44:47Z)
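The minibatch-variance regularizer in the last entry is concrete enough to sketch. Below is a hypothetical PyTorch rendering of such a penalty, with the threshold name and hinge form chosen for illustration; the exact functional form in the cited paper may differ.

```python
import torch


def variance_floor_penalty(z: torch.Tensor, floor: float = 1e-2) -> torch.Tensor:
    """Hinge penalty that turns on when the minibatch variance of the
    embeddings z (shape: batch x dim) drops below `floor`, discouraging
    the degenerate all-points-to-one-center solution in deep SVDD.

    Illustrative sketch; the cited paper's regularizer may be defined
    differently.
    """
    var = z.var(dim=0, unbiased=False).mean()  # mean per-dim batch variance
    return torch.clamp(floor - var, min=0.0) / floor
```

During training it would be added to the one-class objective, e.g. loss = svdd_loss + lam * variance_floor_penalty(z), where lam and floor are hypothetical hyperparameters.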
This list is automatically generated from the titles and abstracts of the papers on this site.