DA-VEGAN: Differentiably Augmenting VAE-GAN for microstructure
reconstruction from extremely small data sets
- URL: http://arxiv.org/abs/2303.03403v1
- Date: Fri, 17 Feb 2023 08:49:09 GMT
- Title: DA-VEGAN: Differentiably Augmenting VAE-GAN for microstructure
reconstruction from extremely small data sets
- Authors: Yichi Zhang, Paul Seibert, Alexandra Otto, Alexander Ra{\ss}loff,
Marreddy Ambati, Markus K\"astner
- Abstract summary: DA-VEGAN is a model with two central innovations.
A $\beta$-variational autoencoder is incorporated into a hybrid GAN architecture.
A custom differentiable data augmentation scheme is developed specifically for this architecture.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Microstructure reconstruction is an important and emerging field of research
and an essential foundation to improving inverse computational materials
engineering (ICME). Much of the recent progress in the field is made based on
generative adversarial networks (GANs). Although excellent results have been
achieved throughout a variety of materials, challenges remain regarding the
interpretability of the model's latent space as well as the applicability to
extremely small data sets. The present work addresses these issues by
introducing DA-VEGAN, a model with two central innovations. First, a
$\beta$-variational autoencoder is incorporated into a hybrid GAN architecture
that allows strong nonlinearities in the latent space to be penalized via an
additional parameter, $\beta$. Second, a custom differentiable data
augmentation scheme is developed specifically for this architecture. The
differentiability allows the model to learn from extremely small data sets
without mode collapse or deteriorated sample quality. An extensive validation
on a variety of structures demonstrates the potential of the method, and
future directions of investigation are discussed.
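The $\beta$-weighted VAE objective mentioned in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it shows the standard $\beta$-VAE loss (reconstruction error plus a $\beta$-weighted KL divergence to a standard normal prior), where a larger $\beta$ more strongly regularizes the latent space.

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta=4.0):
    """Reconstruction error plus beta-weighted KL divergence to N(0, I).

    x, x_hat  : (batch, features) inputs and reconstructions
    mu, log_var : (batch, latent) encoder outputs parameterizing q(z|x)
    """
    # Mean squared reconstruction error, summed over features
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # Closed-form KL divergence between N(mu, exp(log_var)) and N(0, I)
    kl = -0.5 * np.mean(np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=1))
    return recon + beta * kl

# With a perfect reconstruction and a latent code already matching N(0, I),
# both terms vanish and the loss is zero:
x = np.ones((2, 3))
loss = beta_vae_loss(x, x, mu=np.zeros((2, 4)), log_var=np.zeros((2, 4)))
print(loss)  # 0.0
```

In a hybrid VAE-GAN such as DA-VEGAN, a term of this form would be combined with the adversarial loss; the exact weighting and network details are specific to the paper.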
Related papers
- Gradient Reduction Convolutional Neural Network Policy for Financial Deep Reinforcement Learning [0.0]
This paper introduces two significant enhancements to refine our CNN model's predictive performance and robustness for financial data.
Firstly, we integrate a normalization layer at the input stage to ensure consistent feature scaling.
Secondly, we employ a Gradient Reduction Architecture, where earlier layers are wider and subsequent layers are progressively narrower.
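The "wide early layers, progressively narrower later layers" idea above can be sketched as a width schedule. The halving rule below is a hypothetical illustration, not the paper's exact architecture:

```python
def narrowing_widths(first_width, n_layers):
    """Channel widths that halve at each successive layer (minimum width 1),
    so earlier layers are wide and later layers progressively narrower."""
    widths = []
    w = first_width
    for _ in range(n_layers):
        widths.append(max(w, 1))
        w //= 2
    return widths

print(narrowing_widths(64, 4))  # [64, 32, 16, 8]
```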
arXiv Detail & Related papers (2024-08-16T11:39:03Z) - GenBench: A Benchmarking Suite for Systematic Evaluation of Genomic Foundation Models [56.63218531256961]
We introduce GenBench, a benchmarking suite specifically tailored for evaluating the efficacy of Genomic Foundation Models.
GenBench offers a modular and expandable framework that encapsulates a variety of state-of-the-art methodologies.
We provide a nuanced analysis of the interplay between model architecture and dataset characteristics on task-specific performance.
arXiv Detail & Related papers (2024-06-01T08:01:05Z) - An improved tabular data generator with VAE-GMM integration [9.4491536689161]
We propose a novel Variational Autoencoder (VAE)-based model that addresses limitations of current approaches.
Inspired by the TVAE model, our approach incorporates a Bayesian Gaussian Mixture model (BGM) within the VAE architecture.
We thoroughly validate our model on three real-world datasets with mixed data types, including two medically relevant ones.
arXiv Detail & Related papers (2024-04-12T12:31:06Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - DAPDAG: Domain Adaptation via Perturbed DAG Reconstruction [78.76115370275733]
We learn an auto-encoder that undertakes inference on population statistics given features and reconstructs a directed acyclic graph (DAG) as an auxiliary task.
The underlying DAG structure is assumed invariant among observed variables, whose conditional distributions are allowed to vary across domains driven by a latent environmental variable $E$.
We train the encoder and decoder jointly in an end-to-end manner and conduct experiments on synthetic and real datasets with mixed variables.
arXiv Detail & Related papers (2022-08-02T11:43:03Z) - Scalable Gaussian Processes for Data-Driven Design using Big Data with
Categorical Factors [14.337297795182181]
Gaussian processes (GP) have difficulties in accommodating big datasets, categorical inputs, and multiple responses.
We propose a GP model that utilizes latent variables and functions obtained through variational inference to address the aforementioned challenges simultaneously.
Our approach is demonstrated for machine learning of ternary oxide materials and topology optimization of a multiscale compliant mechanism.
arXiv Detail & Related papers (2021-06-26T02:17:23Z) - Rethinking Architecture Design for Tackling Data Heterogeneity in
Federated Learning [53.73083199055093]
We show that attention-based architectures (e.g., Transformers) are fairly robust to distribution shifts.
Our experiments show that replacing convolutional networks with Transformers can greatly reduce catastrophic forgetting of previous devices.
arXiv Detail & Related papers (2021-06-10T21:04:18Z) - Demystifying Inductive Biases for $\beta$-VAE Based Architectures [19.53632220171481]
We shed light on the inductive bias responsible for the success of VAE-based architectures.
We show that in classical datasets the structure of variance, induced by the generating factors, is conveniently aligned with the latent directions fostered by the VAE objective.
arXiv Detail & Related papers (2021-02-12T23:57:20Z) - Data-Driven Topology Optimization with Multiclass Microstructures using
Latent Variable Gaussian Process [18.17435834037483]
We develop a multi-response latent-variable Gaussian process (LVGP) model for the microstructure libraries of metamaterials.
The MR-LVGP model embeds the mixed variables into a continuous design space based on their collective effects on the responses.
We show that considering multiclass microstructures can lead to improved performance due to the consistent load-transfer paths for micro- and macro-structures.
arXiv Detail & Related papers (2020-06-27T03:55:52Z) - Recent Developments Combining Ensemble Smoother and Deep Generative
Networks for Facies History Matching [58.720142291102135]
This research project focuses on the use of autoencoders networks to construct a continuous parameterization for facies models.
We benchmark seven different formulations, including VAE, generative adversarial network (GAN), Wasserstein GAN, variational auto-encoding GAN, principal component analysis (PCA) with cycle GAN, PCA with transfer style network and VAE with style loss.
arXiv Detail & Related papers (2020-05-08T21:32:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.