Towards Bridging the Performance Gaps of Joint Energy-based Models
- URL: http://arxiv.org/abs/2209.07959v1
- Date: Fri, 16 Sep 2022 14:19:48 GMT
- Title: Towards Bridging the Performance Gaps of Joint Energy-based Models
- Authors: Xiulong Yang, Qing Su, Shihao Ji
- Abstract summary: The Joint Energy-based Model (JEM) achieves high classification accuracy and image generation quality simultaneously.
We introduce a variety of training techniques to bridge JEM's accuracy gap and generation quality gap.
Our SADA-JEM achieves state-of-the-art performance, outperforming JEM in image classification, image generation, calibration, out-of-distribution detection, and adversarial robustness by a notable margin.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can we train a hybrid discriminative-generative model within a single
network? This question has recently been answered in the affirmative, giving
rise to the Joint Energy-based Model (JEM), which achieves high classification
accuracy and image generation quality simultaneously. Despite recent advances,
two performance gaps remain: the accuracy gap relative to the standard softmax
classifier, and the generation quality gap relative to state-of-the-art
generative models. In this paper, we introduce a variety of training techniques
to bridge both gaps of JEM. 1) We incorporate the recently proposed
sharpness-aware minimization (SAM) framework to train JEM, which promotes the
smoothness of the energy landscape and the generalizability of JEM. 2) We
exclude data augmentation from the maximum likelihood estimation pipeline of
JEM, mitigating the negative impact of data augmentation on image generation
quality. Extensive experiments on multiple datasets demonstrate that our
SADA-JEM achieves state-of-the-art performance, outperforming JEM in image
classification, image generation, calibration, out-of-distribution detection,
and adversarial robustness by a notable margin.
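The first technique above builds on standard two-step SAM: ascend to the worst-case weight perturbation within an L2 ball of radius rho, then descend using the gradient evaluated at that perturbed point. The following is a minimal NumPy sketch on a generic parameter vector, for illustration only; the paper applies SAM to JEM's full training objective, and the names `grad_fn`, `lr`, and `rho` here are assumptions, not the authors' code.

```python
import numpy as np

def sam_update(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware minimization (SAM) step on parameter vector w."""
    g = grad_fn(w)
    # Step 1: ascend to the (first-order) worst-case point in an L2 ball of radius rho
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: descend using the gradient evaluated at the perturbed weights
    g_sharp = grad_fn(w + eps)
    return w - lr * g_sharp
```

Applied to the toy quadratic loss f(w) = ||w||^2 / 2 (so grad_fn(w) = w), repeated calls drive w toward the flat minimum at the origin while penalizing sharp directions.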
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- DemosaicFormer: Coarse-to-Fine Demosaicing Network for HybridEVS Camera [70.28702677370879]
Hybrid Event-Based Vision Sensor (HybridEVS) is a novel sensor integrating traditional frame-based and event-based sensors.
Despite its potential, the lack of an image signal processing (ISP) pipeline designed specifically for HybridEVS poses a significant challenge.
We propose a coarse-to-fine framework named DemosaicFormer which comprises coarse demosaicing and pixel correction.
arXiv Detail & Related papers (2024-06-12T07:20:46Z)
- GECO: Generative Image-to-3D within a SECOnd [51.20830808525894]
We introduce GECO, a novel method for high-quality 3D generative modeling that operates within a second.
GECO achieves high-quality image-to-3D mesh generation with an unprecedented level of efficiency.
arXiv Detail & Related papers (2024-05-30T17:58:00Z)
- Enhance Image Classification via Inter-Class Image Mixup with Diffusion Model [80.61157097223058]
A prevalent strategy to bolster image classification performance is through augmenting the training set with synthetic images generated by T2I models.
In this study, we scrutinize the shortcomings of both current generative and conventional data augmentation techniques.
We introduce an innovative inter-class data augmentation method known as Diff-Mix, which enriches the dataset by performing image translations between classes.
arXiv Detail & Related papers (2024-03-28T17:23:45Z)
- Class-Prototype Conditional Diffusion Model with Gradient Projection for Continual Learning [20.175586324567025]
Mitigating catastrophic forgetting is a key hurdle in continual learning.
A major issue is the deterioration in the quality of generated data compared to the original.
We propose a GR-based approach for continual learning that enhances image quality in generators.
arXiv Detail & Related papers (2023-12-10T17:39:42Z)
- Energy-Calibrated VAE with Test Time Free Lunch [10.698329211674372]
We propose a conditional Energy-Based Model (EBM) for enhancing Variational Autoencoders (VAEs).
VAEs often suffer from blurry generated samples due to the lack of tailored training on the samples produced in the generative direction.
We extend the calibration idea of EC-VAE to variational learning and normalizing flows, and apply EC-VAE to zero-shot image restoration via neural transport prior and range-null theory.
arXiv Detail & Related papers (2023-11-07T15:35:56Z)
- Learning Energy-Based Models by Cooperative Diffusion Recovery Likelihood [64.95663299945171]
Training energy-based models (EBMs) on high-dimensional data can be both challenging and time-consuming.
There exists a noticeable gap in sample quality between EBMs and other generative frameworks like GANs and diffusion models.
We propose cooperative diffusion recovery likelihood (CDRL), an effective approach to tractably learn and sample from a series of EBMs.
arXiv Detail & Related papers (2023-09-10T22:05:24Z)
- A Bayesian Non-parametric Approach to Generative Models: Integrating Variational Autoencoder and Generative Adversarial Networks using Wasserstein and Maximum Mean Discrepancy [2.966338139852619]
Generative adversarial networks (GANs) and variational autoencoders (VAEs) are two of the most prominent and widely studied generative models.
We employ a Bayesian non-parametric (BNP) approach to merge GANs and VAEs.
By fusing the discriminative power of GANs with the reconstruction capabilities of VAEs, our novel model achieves superior performance in various generative tasks.
arXiv Detail & Related papers (2023-08-27T08:58:31Z)
- JNDMix: JND-Based Data Augmentation for No-reference Image Quality Assessment [5.0789200970424035]
We propose effective and general data augmentation based on just noticeable difference (JND) noise mixing for NR-IQA task.
In detail, we randomly inject the JND noise, imperceptible to the human visual system (HVS), into the training image without any adjustment to its label.
Extensive experiments demonstrate that JNDMix significantly improves the performance and data efficiency of various state-of-the-art NR-IQA models.
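The JNDMix recipe above is simple to sketch: perturb each pixel by random noise whose magnitude stays below a per-pixel just-noticeable-difference threshold, and leave the label untouched. In this illustrative NumPy sketch, `jnd_map` stands in for thresholds that would come from an actual JND model, which the sketch does not implement; the function name and signature are assumptions, not the paper's API.

```python
import numpy as np

def jndmix(image, jnd_map, rng=None):
    """Inject JND-bounded noise into an image in [0, 1]; the label is unchanged."""
    rng = np.random.default_rng() if rng is None else rng
    # Per-pixel noise with random sign, magnitude bounded by the JND threshold
    noise = rng.uniform(-1.0, 1.0, size=image.shape) * jnd_map
    return np.clip(image + noise, 0.0, 1.0)
```

Because the perturbation never exceeds the JND threshold, the augmented image is intended to be perceptually indistinguishable from the original to the human visual system.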
arXiv Detail & Related papers (2023-02-20T08:55:00Z)
- Controllable and Compositional Generation with Latent-Space Energy-Based Models [60.87740144816278]
Controllable generation is one of the key requirements for successful adoption of deep generative models in real-world applications.
In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes.
By composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
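Composing energy functions with logical operators can be sketched as follows. This uses a common formulation (conjunction as an energy sum, disjunction as a soft minimum via log-sum-exp, negation as a sign flip); it is shown for illustration and is not claimed to be the paper's exact latent-space construction.

```python
import numpy as np

def e_and(e1, e2):
    # AND: low energy only where both attribute energies are low (product of densities)
    return lambda x: e1(x) + e2(x)

def e_or(e1, e2):
    # OR: soft minimum of the two energies (mixture of densities)
    return lambda x: -np.logaddexp(-e1(x), -e2(x))

def e_not(e, alpha=1.0):
    # NOT: invert the energy landscape, with temperature alpha
    return lambda x: -alpha * e(x)
```

Sampling from the composed energy (e.g. with Langevin dynamics) then yields points satisfying the logical combination of attributes, without retraining the individual models.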
arXiv Detail & Related papers (2021-10-21T03:31:45Z)
- Generative Max-Mahalanobis Classifiers for Image Classification, Generation and More [6.89001867562902]
We show that our Generative Max-Mahalanobis classifier (GMMC) can be trained discriminatively, generatively, or jointly for image classification and generation.
arXiv Detail & Related papers (2021-01-01T00:42:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.