Boundary of Distribution Support Generator (BDSG): Sample Generation on the Boundary
- URL: http://arxiv.org/abs/2107.09950v1
- Date: Wed, 21 Jul 2021 09:00:32 GMT
- Title: Boundary of Distribution Support Generator (BDSG): Sample Generation on the Boundary
- Authors: Nikolaos Dionelis
- Abstract summary: We use the recently developed Invertible Residual Network (IResNet) and Residual Flow (ResFlow) for density estimation.
These models have not yet been used for anomaly detection.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative models, such as Generative Adversarial Networks (GANs), have been
used for unsupervised anomaly detection. While performance keeps improving, several limitations remain, particularly attributed to difficulties in capturing multimodal supports and in approximating the underlying distribution near the tails, i.e., the boundary of the distribution's support. This paper proposes an approach that attempts to alleviate such
shortcomings. We propose an invertible-residual-network-based model, the
Boundary of Distribution Support Generator (BDSG). GANs generally do not guarantee the existence of a probability distribution; here, we use the recently developed Invertible Residual Network (IResNet) and Residual Flow (ResFlow) for density estimation. These models have not yet been used for anomaly detection. We leverage IResNet and ResFlow for Out-of-Distribution
(OoD) sample detection and for sample generation on the boundary using a
compound loss function that forces the samples to lie on the boundary. The BDSG
addresses non-convex support, disjoint components, and multimodal
distributions. Results on synthetic data and data from multimodal
distributions, such as MNIST and CIFAR-10, demonstrate competitive performance
compared to methods from the literature.
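The compound loss is the heart of the method. As a rough, hypothetical sketch (not the paper's exact objective), the code below trains a generator whose samples are pulled onto a chosen low-density level set of an estimated density; a closed-form two-mode Gaussian mixture stands in for the IResNet/ResFlow density estimator, and the level log_tau, the squared level-set penalty, and the dispersion weight are all assumptions.

```python
# Hypothetical sketch of a compound boundary loss: pull generated samples onto a
# chosen low-density level set. A closed-form two-mode Gaussian mixture stands in
# for the IResNet/ResFlow density estimator; the level log_tau, the squared
# level-set penalty, and the dispersion weight are assumptions, not the paper's.
import torch
import torch.nn as nn

def mixture_log_prob(x):
    """Stand-in log-density with two modes (disjoint support components)."""
    mus = torch.tensor([[-2.0, 0.0], [2.0, 0.0]])
    comps = torch.distributions.Independent(torch.distributions.Normal(mus, 0.5), 1)
    logp = comps.log_prob(x.unsqueeze(1))              # (batch, 2): per-mode log-densities
    return torch.logsumexp(logp, dim=1) - torch.log(torch.tensor(2.0))

generator = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
log_tau = torch.tensor(-4.0)                           # assumed boundary level

for step in range(2000):
    z = torch.randn(256, 8)
    x = generator(z)
    level = (mixture_log_prob(x) - log_tau).pow(2).mean()   # pin samples to the level set
    dispersion = -torch.cdist(x, x).mean()                  # spread samples along it
    loss = level + 0.1 * dispersion                         # weight balances the two terms
    opt.zero_grad(); loss.backward(); opt.step()
```

Generated points with log p(x) close to log_tau trace the tau-level set of the density, which for a two-mode model consists of disjoint closed contours around each mode; this is the non-convex, disjoint-support behavior the abstract targets.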
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantees with explicit dimensional dependence for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z)
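As a toy numerical check of the bias claim above, the sketch below uses unadjusted Langevin dynamics (an assumed stand-in for the paper's diffusion samplers) on a N(0, 1) target: adding a constant offset eps to the true score shifts the stationary mean by roughly eps, so the sampling bias grows with the mismatch.

```python
# Toy illustration with an assumed setup: unadjusted Langevin dynamics with a
# perturbed score. For a N(0, 1) target the true score is -x; a constant
# mismatch eps gives the stationary law N(eps, 1), so bias scales with eps.
import numpy as np

def langevin_mean(eps, steps=5000, h=0.01, n=5000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for _ in range(steps):
        score = -x + eps                                # mismatched score estimate
        x = x + h * score + np.sqrt(2 * h) * rng.standard_normal(n)
    return x.mean()

for eps in (0.0, 0.5, 1.0):
    print(eps, round(langevin_mean(eps), 2))            # empirical bias tracks eps
```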
Distribution Fitting for Combating Mode Collapse in Generative Adversarial Networks [1.5769569085442372]
Mode collapse is a significant unsolved issue of generative adversarial networks.
We propose a global distribution fitting (GDF) method with a penalty term to confine the generated data distribution.
We also propose a local distribution fitting (LDF) method for the case where the overall real data is unreachable.
arXiv Detail & Related papers (2022-12-03T03:39:44Z)
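The summary does not spell out the GDF penalty term, so the sketch below uses a simple moment-matching penalty as an assumed stand-in for "confining the generated data distribution"; the weight and the choice of first and second moments are illustrative only.

```python
# Hypothetical moment-matching penalty in the spirit of "distribution fitting";
# the actual GDF term is not given in the summary, so matching first and second
# moments of real and generated batches stands in for confining the distribution.
import torch

def fitting_penalty(real, fake, weight=1.0):
    mean_gap = (real.mean(0) - fake.mean(0)).pow(2).sum()
    cov_gap = (torch.cov(real.T) - torch.cov(fake.T)).pow(2).sum()
    return weight * (mean_gap + cov_gap)

# usage (shapes (batch, features)): generator_loss = adv_loss + fitting_penalty(real, fake)
```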
Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
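The smoothed Kolmogorov-Smirnov view is easy to illustrate in one dimension: the classical KS statistic is a supremum over indicator test functions, and replacing the indicator with a sigmoid of temperature temp (the smoothing choice here is an assumption) gives a differentiable surrogate.

```python
# One-dimensional illustration with an assumed smoothing: the KS statistic is a
# supremum over indicator test functions 1{x <= t}; a sigmoid of temperature
# `temp` replaces the indicator, making the statistic differentiable.
import torch

def smoothed_ks(real, fake, temp=0.1, n_thresholds=200):
    ts = torch.linspace(-5.0, 5.0, n_thresholds)
    cdf_real = torch.sigmoid((ts[None] - real[:, None]) / temp).mean(0)  # soft CDF
    cdf_fake = torch.sigmoid((ts[None] - fake[:, None]) / temp).mean(0)
    return (cdf_real - cdf_fake).abs().max()

real = torch.randn(1000)
fake = torch.randn(1000) + 0.5
print(smoothed_ks(real, fake))   # a larger distribution gap gives a larger value
```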
Investigating Shifts in GAN Output-Distributions [5.076419064097734]
We introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN-generated data.
Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms.
arXiv Detail & Related papers (2021-12-28T09:16:55Z)
GAN Based Boundary Aware Classifier for Detecting Out-of-distribution Samples [24.572516991009323]
We propose a GAN-based boundary aware classifier (GBAC) for generating a closed hyperspace that contains most in-distribution (ID) data.
Our method is based on the fact that a traditional neural network separates the feature space into several unclosed regions that are not suitable for OoD detection.
With GBAC as an auxiliary module, OoD data distributed outside the closed hyperspace will be assigned a much lower score, allowing more effective OoD detection.
arXiv Detail & Related papers (2021-12-22T03:35:54Z)
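The geometric point above can be made concrete: a linear head scores points in open half-spaces, so far-away OoD features can still score high, while a distance-based score induces a closed region around the ID data. The distance-based score below is an assumed stand-in for GBAC's closed hyperspace, not the paper's architecture.

```python
# Open vs. closed decision regions (illustrative; the distance-based score is an
# assumed stand-in for GBAC's closed hyperspace, not the paper's architecture).
import torch

torch.manual_seed(0)
feats_id = torch.randn(100, 2)               # in-distribution features near the origin
feats_ood = torch.randn(100, 2) + 8.0        # far-away OoD features

w = torch.randn(2)                           # linear head: an open half-space score
open_id, open_ood = feats_id @ w, feats_ood @ w   # can grow without bound far from ID data

center = feats_id.mean(0)                    # closed region: distance-based score
closed_id = -(feats_id - center).norm(dim=1)
closed_ood = -(feats_ood - center).norm(dim=1)

print(open_ood.max() > open_id.max())        # often True: open regions leak score to OoD
print(closed_ood.max() < closed_id.min())    # True here: the closed region separates
```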
Inferential Wasserstein Generative Adversarial Networks [9.859829604054127]
We introduce a novel inferential Wasserstein GAN (iWGAN) model, which is a principled framework to fuse auto-encoders and WGANs.
The iWGAN greatly mitigates the symptom of mode collapse, speeds up the convergence, and is able to provide a measurement of quality check for each individual sample.
arXiv Detail & Related papers (2021-09-13T00:43:21Z)
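A minimal sketch of fusing an auto-encoder with a WGAN in the spirit of the iWGAN summary: an encoder infers latent codes, the generator decodes them, and the loss combines a Wasserstein critic gap with a reconstruction term. The actual iWGAN couples the two in a more principled primal-dual way; this additive form, the network sizes, and the missing Lipschitz constraint are all simplifications.

```python
# Minimal auto-encoder + WGAN fusion (an assumed additive form; the actual iWGAN
# couples the two objectives in a primal-dual way, and the critic's Lipschitz
# constraint, e.g. a gradient penalty, is omitted here for brevity).
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 4))
gen = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def iwgan_like_loss(x):
    z_hat = enc(x)                                     # inferred latent code
    x_rec = gen(z_hat)                                 # reconstruction through the generator
    x_gen = gen(torch.randn(x.shape[0], 4))            # fresh samples from the prior
    w_gap = critic(x_gen).mean() - critic(x).mean()    # Wasserstein critic gap
    rec = (x - x_rec).pow(2).sum(1).mean()             # auto-encoder term
    return w_gap + rec

# Per-sample quality check, as the summary mentions: a sample with a large
# reconstruction error and critic gap is unlikely under the fitted model.
```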
Tail of Distribution GAN (TailGAN): Generative-Adversarial-Network-Based Boundary Formation [0.0]
We create a GAN-based tail formation model for anomaly detection, the Tail of distribution GAN (TailGAN).
Using TailGAN, we leverage GANs for anomaly detection and use maximum entropy regularization.
We evaluate TailGAN for identifying Out-of-Distribution (OoD) data; its performance on MNIST, CIFAR-10, Baggage X-Ray, and OoD data is competitive compared to methods from the literature.
arXiv Detail & Related papers (2021-07-24T17:29:21Z)
GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
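One common variational surrogate for the entropy of generated samples (assumed here; the paper's exact bound may differ, and this form is heuristic for a deterministic generator) uses an auxiliary network that recovers the latent code: rewarding recovery of z from x = G(z) discourages the generator from collapsing many codes onto the same output.

```python
# Heuristic InfoGAN-style entropy surrogate (an assumption; the paper's exact
# bound may differ): rewarding recovery of z from x = G(z) discourages the
# generator from collapsing many codes onto the same output.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
recover = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 4))

def entropy_regularizer(batch_size=256):
    z = torch.randn(batch_size, 4)
    x = gen(z)
    z_mean = recover(x)                                # q(z | x) = N(z_mean, I)
    log_q = -0.5 * (z - z_mean).pow(2).sum(1).mean()   # log-likelihood up to constants
    return -log_q   # add to the generator loss; lower value means higher diversity
```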
GANs with Conditional Independence Graphs: On Subadditivity of Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)

When Relation Networks meet GANs: Relation GANs with Triplet Loss [110.7572918636599]
Training stability is still a lingering concern of generative adversarial networks (GANs).
In this paper, we explore a relation network architecture for the discriminator and design a triplet loss that improves generalization and stability.
Experiments on benchmark datasets show that the proposed relation discriminator and new loss provide significant improvements on various vision tasks.
arXiv Detail & Related papers (2020-02-24T11:35:28Z)
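The triplet component is standard and easy to sketch: with a real sample as anchor, another real sample as positive, and a generated sample as negative, the discriminator's embedding is trained with a margin loss. The real/real/fake pairing and the embedding network below are assumptions; the paper's relation-network discriminator compares samples more generally.

```python
# Triplet loss on discriminator embeddings (the real/real/fake pairing and the
# embedding net are assumptions; the paper's relation-network discriminator
# compares samples more generally).
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 16))
triplet = nn.TripletMarginLoss(margin=1.0)

real_anchor = torch.randn(128, 2)           # real batch
real_positive = torch.randn(128, 2)         # another real batch
fake_negative = torch.randn(128, 2) + 3.0   # placeholder for generated samples

loss = triplet(embed(real_anchor), embed(real_positive), embed(fake_negative))
```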
Distribution Approximation and Statistical Estimation Guarantees of Generative Adversarial Networks [82.61546580149427]
Generative Adversarial Networks (GANs) have achieved a great success in unsupervised learning.
This paper provides approximation and statistical guarantees of GANs for the estimation of data distributions with densities in a Hölder space.
arXiv Detail & Related papers (2020-02-10T16:47:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The quality of this information is not guaranteed, and the site is not responsible for any consequences of its use.