Generative Adversarial Learning via Kernel Density Discrimination
- URL: http://arxiv.org/abs/2107.06197v1
- Date: Tue, 13 Jul 2021 15:52:10 GMT
- Title: Generative Adversarial Learning via Kernel Density Discrimination
- Authors: Abdelhak Lemkhenter, Adam Bielski, Alp Eren Sari, Paolo Favaro
- Abstract summary: We introduce Kernel Density Discrimination GAN (KDD GAN), a novel method for generative adversarial learning.
We define the Kernel Density Estimates directly in feature space and forgo the requirement of invertibility of the kernel feature mappings.
We show an improvement in the quality of generated samples, measured by FID, of 10% to 40% over the baseline.
- Score: 32.91091065436645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Kernel Density Discrimination GAN (KDD GAN), a novel method for
generative adversarial learning. KDD GAN formulates the training as a
likelihood ratio optimization problem where the data distributions are written
explicitly via (local) Kernel Density Estimates (KDE). This is inspired by the
recent progress in contrastive learning and its relation to KDE. We define the
KDEs directly in feature space and forgo the requirement of invertibility of
the kernel feature mappings. In our approach, features are no longer optimized
for linear separability, as in the original GAN formulation, but for the more
general discrimination of distributions in the feature space. We analyze the
gradient of our loss with respect to the feature representation and show that
it is better behaved than that of the original hinge loss. We perform
experiments with the proposed KDE-based loss, used either as a training loss or
a regularization term, on both CIFAR10 and scaled versions of ImageNet. We use
BigGAN/SA-GAN as a backbone and baseline, since our focus is not to design the
architecture of the networks. We show a boost in the quality of generated
samples with respect to FID from 10% to 40% compared to the baseline. Code will
be made available.
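For intuition, here is a minimal sketch of a KDE-based likelihood-ratio objective in feature space, assuming a Gaussian kernel with a fixed bandwidth `sigma` (the paper's exact loss, kernel choice, and feature extractor may differ; `feat_real`/`feat_fake` stand for discriminator features of real and generated batches):

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel: k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
    return torch.exp(-torch.cdist(a, b).pow(2) / (2 * sigma ** 2))

def kde_log_ratio(feat_real, feat_fake, sigma=1.0, eps=1e-8):
    # Local KDEs of the data and generator distributions, evaluated at the
    # fake features directly in feature space (the kernel normalization
    # constant cancels in the ratio).
    p_data = gaussian_kernel(feat_fake, feat_real, sigma).mean(dim=1)
    p_gen = gaussian_kernel(feat_fake, feat_fake, sigma).mean(dim=1)
    # Likelihood ratio log p_gen / p_data: minimizing it moves generated
    # features toward high-density regions of the estimated data distribution.
    return (torch.log(p_gen + eps) - torch.log(p_data + eps)).mean()
```

In the paper's setup the features come from the discriminator of a BigGAN/SA-GAN backbone, and the KDE-based loss is used either as the training loss itself or as a regularization term alongside the standard adversarial loss.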
Related papers
- Invariant Causal Knowledge Distillation in Neural Networks [6.24302896438145]
In this paper, we introduce Invariant Consistency Distillation (ICD), a novel methodology designed to enhance knowledge distillation.
ICD ensures that the student model's representations are both discriminative and invariant with respect to the teacher's outputs.
Our results on CIFAR-100 and ImageNet ILSVRC-2012 show that ICD outperforms traditional KD techniques and surpasses state-of-the-art methods.
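The summary only names the two target properties; as a loose, hypothetical sketch (not ICD's actual objective), a distillation loss could pair a standard softened-KL term with a consistency term that makes the student invariant across augmented views. The temperature `T` and weight `lam` are assumed hyperparameters:

```python
import torch.nn.functional as F

def distill_with_invariance(s_logits_v1, s_logits_v2, t_logits, T=4.0, lam=1.0):
    # Hinton-style distillation: match the teacher's softened output distribution.
    kd = F.kl_div(F.log_softmax(s_logits_v1 / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Invariance: student predictions should agree across two augmented views.
    inv = F.mse_loss(F.softmax(s_logits_v1, dim=1),
                     F.softmax(s_logits_v2, dim=1))
    return kd + lam * inv
```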
arXiv Detail & Related papers (2024-07-16T14:53:35Z)
- Equation Discovery with Bayesian Spike-and-Slab Priors and Efficient Kernels [57.46832672991433]
We propose a novel equation discovery method based on Kernel learning and BAyesian Spike-and-Slab priors (KBASS).
We use kernel regression to estimate the target function, which is flexible, expressive, and more robust to data sparsity and noise.
We develop an expectation-propagation expectation-maximization algorithm for efficient posterior inference and function estimation.
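The kernel-regression component can be illustrated with plain Gaussian-kernel ridge regression (a sketch of the general technique only; KBASS's spike-and-slab priors and EP-EM inference are not shown, and all names here are assumptions):

```python
import numpy as np

def kernel_ridge_fit(X, y, sigma=0.5, lam=1e-3):
    # Gaussian-kernel ridge regression: solve (K + lam I) alpha = y.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, sigma=0.5):
    # f(x) = sum_i alpha_i k(x, x_i): a smooth estimate of the target function.
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)) @ alpha
```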
arXiv Detail & Related papers (2023-10-09T03:55:09Z)
- Learning Distributions via Monte-Carlo Marginalization [9.131712404284876]
We propose Monte-Carlo Marginalization (MCMarg), a novel method to learn intractable distributions from their samples.
The proposed approach is a powerful tool for learning complex distributions, and the entire process is differentiable.
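As general background (not MCMarg's specific algorithm), Monte-Carlo marginalization estimates an intractable marginal p(x) = E_{z~p(z)}[p(x|z)] with prior samples, and the estimate stays differentiable in the model parameters; `decoder_mean` is a hypothetical network mapping latents to means:

```python
import math
import torch

def mc_log_marginal(x, decoder_mean, n_samples=64, z_dim=8):
    # p(x) = E_{z ~ N(0, I)}[ p(x | z) ], approximated by Monte Carlo:
    # log p(x) ~= logsumexp_k log p(x | z_k) - log K.
    z = torch.randn(n_samples, z_dim)
    mu = decoder_mean(z)                              # (n_samples, x_dim)
    log_px_given_z = -0.5 * ((x - mu) ** 2).sum(-1)   # unit-variance Gaussian, up to a constant
    return torch.logsumexp(log_px_given_z, dim=0) - math.log(n_samples)
```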
arXiv Detail & Related papers (2023-08-11T19:08:06Z)
- Fast Private Kernel Density Estimation via Locality Sensitive Quantization [10.227538355037554]
We study efficient mechanisms for differentially private kernel density estimation (DP-KDE).
We show how the kernel can be privately approximated in time linear in $d$, making DP-KDE feasible for high-dimensional data.
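As background for what DP-KDE computes, here is a sketch of the naive Laplace-mechanism baseline; the paper's locality-sensitive-quantization construction, which achieves the fast runtime, is not shown:

```python
import numpy as np

def dp_kde_query(data, q, sigma=1.0, eps=1.0, rng=None):
    # Gaussian-kernel KDE at query q, with kernel values in [0, 1]:
    # changing one of the n records moves the average by at most 1/n,
    # so Laplace noise of scale 1/(n * eps) makes a single query eps-DP.
    if rng is None:
        rng = np.random.default_rng()
    k = np.exp(-((data - q) ** 2).sum(-1) / (2 * sigma ** 2))
    return k.mean() + rng.laplace(scale=1.0 / (len(data) * eps))
```

This naive query touches all n data points; the paper's contribution is making the private approximation practical in high dimensions.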
arXiv Detail & Related papers (2023-07-04T18:48:04Z)
- PDE+: Enhancing Generalization via PDE with Adaptive Distributional Diffusion [66.95761172711073]
The generalization of neural networks is a central challenge in machine learning.
We propose to enhance it directly through the underlying function of neural networks, rather than by adjusting the input data.
We put this theoretical framework into practice as $\textbf{PDE}+$ ($\textbf{PDE}$ with $\textbf{A}$daptive $\textbf{D}$istributional $\textbf{D}$iffusion).
arXiv Detail & Related papers (2023-05-25T08:23:26Z)
- Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We place discrete distributions over sequences of codewords and learn a deterministic decoder that transports the distribution over codeword sequences to the data distribution.
We develop further theory connecting it with the clustering viewpoint of the Wasserstein (WS) distance, allowing a better and more controllable clustering solution.
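The discrete-codeword machinery can be illustrated with the standard vector-quantization step used by VQ-style auto-encoders (a generic sketch; VQ-WAE's Wasserstein transport of codeword distributions is not shown):

```python
import torch

def vector_quantize(z, codebook):
    # Assign each encoder output to its nearest codeword (L2 distance);
    # the indices are the discrete representation.
    idx = torch.cdist(z, codebook).argmin(dim=1)
    z_q = codebook[idx]
    # Straight-through estimator: copy gradients from z_q to z so the
    # encoder can be trained through the non-differentiable assignment.
    return z + (z_q - z).detach(), idx
```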
arXiv Detail & Related papers (2023-02-12T13:51:36Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
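The object being optimized is the mixing matrix of a standard decentralized averaging (gossip) step, sketched below with a hypothetical 4-node ring; the paper's metric and its optimization of `W` are not shown:

```python
import numpy as np

def mixing_step(params, W):
    # One gossip step: x_i <- sum_j W[i, j] * x_j, with W doubly stochastic
    # and supported on the communication graph.
    return W @ params

# A 4-node ring with uniform neighbor weights (rows and columns sum to 1).
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])
params = np.random.randn(4, 10)   # one parameter vector per node
params = mixing_step(params, W)
```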
arXiv Detail & Related papers (2022-04-13T15:54:35Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.