Elastic Interaction Energy-Based Generative Model: Approximation in
Feature Space
- URL: http://arxiv.org/abs/2303.10553v1
- Date: Sun, 19 Mar 2023 03:39:31 GMT
- Title: Elastic Interaction Energy-Based Generative Model: Approximation in
Feature Space
- Authors: Chuqi Chen, Yue Wu, Yang Xiang
- Abstract summary: We propose a novel approach to generative modeling using a loss function based on elastic interaction energy (EIE).
The EIE-based metric presents several advantages, including its long-range property, which enables consideration of global information in the distribution.
Experimental results on popular datasets, such as MNIST, FashionMNIST, CIFAR-10, and CelebA, demonstrate that our EIEG GAN model can mitigate mode collapse, enhance stability, and improve model performance.
- Score: 14.783344918500813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel approach to generative modeling using a
loss function based on elastic interaction energy (EIE), which is inspired by
the elastic interaction between defects in crystals. The utilization of the
EIE-based metric presents several advantages, including its long-range property
that enables consideration of global information in the distribution. Moreover,
its inclusion of a self-interaction term helps to prevent mode collapse and
capture all modes of the distribution. To overcome the difficulty of the
relatively scattered distribution of high-dimensional data, we first map the
data into a latent feature space and approximate the feature distribution
instead of the data distribution. We adopt the GAN framework and replace the
discriminator with a feature transformation network to map the data into a
latent space. We also add a stabilizing term to the loss of the feature
transformation network, which effectively addresses the issue of unstable
training in GAN-based algorithms. Experimental results on popular datasets,
such as MNIST, FashionMNIST, CIFAR-10, and CelebA, demonstrate that our EIEG
GAN model can mitigate mode collapse, enhance stability, and improve model
performance.
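The abstract's central idea, a long-range interaction energy between the data and generated distributions whose self-interaction term spreads generated samples apart, can be illustrated with a minimal sketch. The kernel choice k(x, y) = 1 / |x - y| and the MMD-style sample estimate below are assumptions for illustration; the paper's exact discretization of the EIE may differ.

```python
import numpy as np

def eie_loss(real, fake, eps=1e-6):
    """Sample estimate of an elastic-interaction-type energy between two
    point clouds, using a long-range Coulomb-style kernel 1 / |x - y|.

    The energy is that of the signed density (p_real - p_fake) interacting
    with itself: real-real + fake-fake - 2 * real-fake. The fake-fake
    ("self-interaction") term pushes generated samples apart, which is the
    mechanism credited in the abstract with preventing mode collapse.
    """
    def kernel_mean(a, b):
        # Pairwise distances via broadcasting: (n, 1, d) - (1, m, d).
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return np.mean(1.0 / (d + eps))  # eps regularizes zero distances

    return (kernel_mean(real, real)
            + kernel_mean(fake, fake)
            - 2.0 * kernel_mean(real, fake))
```

When the generated samples coincide with the data samples, the three kernel terms cancel and the loss is zero; as the two clouds separate, the cross term shrinks while the self-interaction terms remain, so the loss grows. In the paper's GAN setting this energy would be computed in the latent feature space produced by the feature transformation network, not on raw pixels.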
Related papers
- Optimal Transport-Based Displacement Interpolation with Data Augmentation for Reduced Order Modeling of Nonlinear Dynamical Systems [0.0]
We present a novel reduced-order model (ROM) that exploits optimal transport theory and displacement interpolation to enhance the representation of nonlinear dynamics in complex systems.
We show improved accuracy and efficiency in predicting complex system behaviors, indicating the potential of this approach for a wide range of applications in computational physics and engineering.
arXiv Detail & Related papers (2024-11-13T16:29:33Z)
- Stability and Generalizability in SDE Diffusion Models with Measure-Preserving Dynamics [11.919291977879801]
Inverse problems describe the process of estimating the causal factors from a set of measurements or data.
Diffusion models have shown promise as potent generative tools for solving inverse problems.
arXiv Detail & Related papers (2024-06-19T15:55:12Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robust datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Dynamic Kernel-Based Adaptive Spatial Aggregation for Learned Image Compression [63.56922682378755]
We focus on extending spatial aggregation capability and propose a dynamic kernel-based transform coding.
The proposed adaptive aggregation generates kernel offsets to capture valid information in the content-conditioned range to help transform.
Experimental results demonstrate that our method achieves superior rate-distortion performance on three benchmarks compared to the state-of-the-art learning-based methods.
arXiv Detail & Related papers (2023-08-17T01:34:51Z)
- InVAErt networks: a data-driven framework for model synthesis and identifiability analysis [0.0]
inVAErt is a framework for data-driven analysis and synthesis of physical systems.
It uses a deterministic decoder to represent the forward and inverse maps, a normalizing flow to capture the probabilistic distribution of system outputs, and a variational encoder to learn a compact latent representation for the lack of bijectivity between inputs and outputs.
arXiv Detail & Related papers (2023-07-24T07:58:18Z)
- Towards Understanding and Mitigating Dimensional Collapse in Heterogeneous Federated Learning [112.69497636932955]
Federated learning aims to train models across different clients without the sharing of data for privacy considerations.
We study how data heterogeneity affects the representations of the globally aggregated models.
We propose FedDecorr, a novel method that can effectively mitigate dimensional collapse in federated learning.
arXiv Detail & Related papers (2022-10-01T09:04:17Z)
- Combating Mode Collapse in GANs via Manifold Entropy Estimation [70.06639443446545]
Generative Adversarial Networks (GANs) have shown compelling results in various tasks and applications.
We propose a novel training pipeline to address the mode collapse issue of GANs.
arXiv Detail & Related papers (2022-08-25T12:33:31Z)
- Inference-InfoGAN: Inference Independence via Embedding Orthogonal Basis Expansion [2.198430261120653]
Disentanglement learning aims to construct independent and interpretable latent variables in which generative models are a popular strategy.
We propose a novel GAN-based disentanglement framework via embedding Orthogonal Basis Expansion (OBE) into InfoGAN network.
Our Inference-InfoGAN achieves higher disentanglement scores in terms of the FactorVAE, Separated Attribute Predictability (SAP), Mutual Information Gap (MIG), and Variation Predictability (VP) metrics without model fine-tuning.
arXiv Detail & Related papers (2021-10-02T11:54:23Z)
- Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
- To Regularize or Not To Regularize? The Bias Variance Trade-off in Regularized AEs [10.611727286504994]
We study the effect of the latent prior on the generation quality of deterministic AE models.
We show that our model, called FlexAE, achieves a new state of the art among AE-based generative models.
arXiv Detail & Related papers (2020-06-10T14:00:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.