Physically Interpretable Representation Learning with Gaussian Mixture Variational AutoEncoder (GM-VAE)
- URL: http://arxiv.org/abs/2511.21883v1
- Date: Wed, 26 Nov 2025 20:04:38 GMT
- Title: Physically Interpretable Representation Learning with Gaussian Mixture Variational AutoEncoder (GM-VAE)
- Authors: Tiffany Fan, Murray Cutforth, Marta D'Elia, Alexandre Cortiella, Alireza Doostan, Eric Darve,
- Abstract summary: We propose a Gaussian Mixture Variational Autoencoder (GM-VAE) framework designed to extract physically interpretable representations from high-dimensional scientific data. Unlike conventional VAEs that jointly optimize reconstruction and clustering, our method utilizes a block-coordinate descent strategy. To objectively evaluate the learned representations, we introduce a metric based on graph-Laplacian smoothness, which measures the coherence of physical quantities across the latent manifold.
- Score: 37.18249990338269
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Extracting compact, physically interpretable representations from high-dimensional scientific data is a persistent challenge due to the complex, nonlinear structures inherent in physical systems. We propose a Gaussian Mixture Variational Autoencoder (GM-VAE) framework designed to address this by integrating an Expectation-Maximization (EM)-inspired training scheme with a novel spectral interpretability metric. Unlike conventional VAEs that jointly optimize reconstruction and clustering (often leading to training instability), our method utilizes a block-coordinate descent strategy, alternating between expectation and maximization steps. This approach stabilizes training and naturally aligns latent clusters with distinct physical regimes. To objectively evaluate the learned representations, we introduce a quantitative metric based on graph-Laplacian smoothness, which measures the coherence of physical quantities across the latent manifold. We demonstrate the efficacy of this framework on datasets of increasing complexity: surface reaction ODEs, Navier-Stokes wake flows, and experimental laser-induced combustion Schlieren images. The results show that our GM-VAE yields smooth, physically consistent manifolds and accurate regime clustering, offering a robust data-driven tool for interpreting turbulent and reactive flow systems.
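The graph-Laplacian smoothness metric described in the abstract can be sketched as follows. The paper does not specify its exact graph construction, so this is a minimal illustration assuming a symmetric k-nearest-neighbour graph with Gaussian edge weights and the Rayleigh quotient of the unnormalised Laplacian as the score; the function name `laplacian_smoothness` and all parameters are hypothetical.

```python
import numpy as np

def laplacian_smoothness(z, f, k=5, sigma=1.0):
    """Graph-Laplacian smoothness of a physical quantity f over latent points z.

    Builds a symmetric k-nearest-neighbour graph with Gaussian edge weights,
    forms the unnormalised Laplacian L = D - W, and returns the Rayleigh
    quotient f^T L f / f^T f. Smaller values mean f varies more smoothly
    across the latent manifold.
    """
    n = len(z)
    # Pairwise squared Euclidean distances in latent space.
    d2 = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    w = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]      # skip self-distance
        w[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    w = np.maximum(w, w.T)                      # symmetrise the graph
    lap = np.diag(w.sum(1)) - w                 # unnormalised Laplacian
    return float(f @ lap @ f / (f @ f))

# A quantity that varies coherently along the latent manifold scores lower
# than a randomly shuffled copy of the same values.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))
f_smooth = z[:, 0]                              # coherent with the manifold
f_rough = rng.permutation(f_smooth)             # same values, no coherence
print(laplacian_smoothness(z, f_smooth) < laplacian_smoothness(z, f_rough))
```

A low score for a physical quantity (e.g. Reynolds number) indicates that nearby latent codes correspond to similar physical states, which is the coherence property the metric is designed to reward.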
Related papers
- SIGMA: Scalable Spectral Insights for LLM Collapse [51.863164847253366]
We introduce SIGMA (Spectral Inequalities for Gram Matrix Analysis), a unified framework for analyzing model collapse. By deriving deterministic bounds on the Gram matrix's spectrum, SIGMA provides a mathematically grounded metric to track the contraction of the representation space. We demonstrate that SIGMA effectively captures the transition toward collapsed states, offering theoretical insight into the mechanics of collapse.
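The idea of tracking representation-space contraction through the Gram-matrix spectrum can be illustrated with a simple effective-rank score. SIGMA's actual bounds are not given in this summary, so the following is only a generic sketch: the function `effective_rank` (exponentiated Shannon entropy of the normalised eigenvalues) is an assumed stand-in, not the paper's metric.

```python
import numpy as np

def effective_rank(x):
    """Effective rank of a batch of representations x (n_samples x dim),
    computed as exp of the Shannon entropy of the normalised eigenvalues
    of the Gram matrix x x^T. A collapsing representation space
    concentrates its spectrum, driving this value toward 1."""
    gram = x @ x.T
    eig = np.clip(np.linalg.eigvalsh(gram), 0.0, None)
    p = eig / eig.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

rng = np.random.default_rng(1)
spread = rng.normal(size=(50, 16))                              # full-rank batch
collapsed = np.outer(rng.normal(size=50), rng.normal(size=16))  # rank-1 batch
print(effective_rank(spread) > effective_rank(collapsed))
```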
arXiv Detail & Related papers (2026-01-06T19:47:11Z) - Cluster-Based Generalized Additive Models Informed by Random Fourier Features [19.409397281817288]
This work introduces a mixture of generalized additive models (GAMs) in which random Fourier feature (RFF) representations are leveraged to uncover locally adaptive structure in the data. Numerical experiments on real-world regression benchmarks, including the California Housing, NASA Air Self-Noise, and Bike Sharing datasets, demonstrate improved predictive performance.
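Random Fourier features, the representation this paper builds on, approximate a shift-invariant kernel with an explicit finite-dimensional map (Rahimi & Recht). A minimal sketch, independent of the paper's GAM mixture; the function name and parameters are illustrative only:

```python
import numpy as np

def rff_features(x, n_features=2000, sigma=1.0, seed=0):
    """Random Fourier features z(x) whose inner products approximate
    the RBF kernel exp(-||x - y||^2 / (2 sigma^2))."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    w = rng.normal(scale=1.0 / sigma, size=(d, n_features))  # spectral samples
    b = rng.uniform(0, 2 * np.pi, size=n_features)           # random phases
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

rng = np.random.default_rng(2)
x = rng.normal(size=(2, 3))
z = rff_features(x)
approx = z[0] @ z[1]                                    # kernel estimate
exact = np.exp(-((x[0] - x[1]) ** 2).sum() / 2.0)       # true RBF kernel
print(abs(approx - exact) < 0.1)
```

The approximation error shrinks as O(1/sqrt(n_features)), which is what makes RFFs a cheap drop-in basis for downstream linear or additive models.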
arXiv Detail & Related papers (2025-12-22T13:15:52Z) - A Spectral-Grassmann Wasserstein metric for operator representations of dynamical systems [13.799022330476236]
We propose a novel approach that represents each system as a distribution of its joint operator eigenvalues and spectral projectors. Experiments on simulated and real-world datasets show that our approach consistently outperforms standard operator-based distances in machine learning applications.
arXiv Detail & Related papers (2025-09-29T15:24:05Z) - Energy-Based Coarse-Graining in Molecular Dynamics: A Flow-Based Framework without Data [0.0]
Coarse-grained (CG) models provide an effective route to reducing the complexity of molecular simulations. We introduce a fully data-free, generative framework for CG that directly targets the all-atom Boltzmann distribution. We show that the method captures all relevant modes of the Boltzmann distribution, reconstructs atomic configurations, and automatically learns physically meaningful CG representations.
arXiv Detail & Related papers (2025-04-29T17:05:27Z) - Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics. Our ETGPSSM outperforms existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z) - Koopman-Equivariant Gaussian Processes [39.34668284375732]
We propose a family of Gaussian processes (GP) for dynamical systems with linear time-invariant responses. This linearity allows us to tractably quantify forecasting and representational uncertainty. Experiments demonstrate on-par and often better forecasting performance compared to kernel-based methods for learning dynamical systems.
arXiv Detail & Related papers (2025-02-10T16:35:08Z) - Physically Interpretable Representation and Controlled Generation for Turbulence Data [39.42376941186934]
This paper proposes a data-driven approach to encode high-dimensional scientific data into low-dimensional, physically meaningful representations. We validate our approach using 2D Navier-Stokes simulations of flow past a cylinder over a range of Reynolds numbers.
arXiv Detail & Related papers (2025-01-31T17:51:14Z) - Learning Mixtures of Experts with EM: A Mirror Descent Perspective [28.48469221248906]
Classical Mixtures of Experts (MoE) are machine learning models that partition the input space, with a separate "expert" model trained on each partition. We study theoretical guarantees of the Expectation-Maximization (EM) algorithm for the training of MoE models.
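The EM algorithm analyzed in this paper (and echoed in the GM-VAE's EM-inspired training scheme above) alternates a posterior-responsibility E-step with closed-form M-step parameter updates. A minimal sketch for a two-component 1D Gaussian mixture, a much simpler model than an MoE but with the same alternating structure; the function `em_gmm_1d` is illustrative only:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Vanilla EM for a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()])           # spread initial means apart
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point.
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(1, keepdims=True)
        # M-step: closed-form updates of weights, means, and variances.
        nk = r.sum(0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk
    return pi, mu, var

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
pi, mu, var = em_gmm_1d(x)
print(np.allclose(sorted(mu), [-3, 3], atol=0.3))
```

Each step monotonically increases the data log-likelihood, which is the stability property the block-coordinate GM-VAE training borrows from EM.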
arXiv Detail & Related papers (2024-11-09T03:44:09Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
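The EnKF analysis step at the core of this approach can be sketched in a few lines. The paper's GPSSM-specific formulation is not reproduced here; this is a generic stochastic (perturbed-observation) EnKF update for a scalar linear observation, with all names and parameters assumed for illustration:

```python
import numpy as np

def enkf_update(ens, y, h, r_var, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    ens: (n_ens, dim) forecast ensemble; y: scalar observation;
    h: linear observation vector; r_var: observation noise variance."""
    n = len(ens)
    hx = ens @ h                                  # predicted observations
    x_mean, hx_mean = ens.mean(0), hx.mean()
    # Sample covariances between state and predicted observation.
    p_xh = (ens - x_mean).T @ (hx - hx_mean) / (n - 1)
    p_hh = np.var(hx, ddof=1) + r_var
    gain = p_xh / p_hh                            # Kalman gain, shape (dim,)
    y_pert = y + rng.normal(scale=np.sqrt(r_var), size=n)
    return ens + np.outer(y_pert - hx, gain)

rng = np.random.default_rng(4)
ens = rng.normal(loc=0.0, scale=2.0, size=(100, 2))   # prior ensemble
h = np.array([1.0, 0.0])                              # observe first component
post = enkf_update(ens, y=1.5, h=h, r_var=0.1, rng=rng)
# The analysis mean of the observed component moves toward the observation.
print(abs(post[:, 0].mean() - 1.5) < abs(ens[:, 0].mean() - 1.5))
```

Because the update uses only ensemble sample covariances, no explicit parameterization of the posterior is needed, which is the property the paper exploits for its closed-form ELBO approximation.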
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - Data-heterogeneity-aware Mixing for Decentralized Learning [63.83913592085953]
We characterize the dependence of convergence on the relationship between the mixing weights of the graph and the data heterogeneity across nodes.
We propose a metric that quantifies the ability of a graph to mix the current gradients.
Motivated by our analysis, we propose an approach that periodically and efficiently optimizes the metric.
arXiv Detail & Related papers (2022-04-13T15:54:35Z) - Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z) - Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data consists of noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.