Latent Space Inference via Paired Autoencoders
- URL: http://arxiv.org/abs/2601.11397v1
- Date: Fri, 16 Jan 2026 16:08:04 GMT
- Title: Latent Space Inference via Paired Autoencoders
- Authors: Emma Hart, Bas Peters, Julianne Chung, Matthias Chung
- Abstract summary: This work describes a novel data-driven latent space inference framework built on paired autoencoders. Our approach uses two autoencoders, one for the parameter space and one for the observation space, connected by learned mappings between the autoencoders' latent spaces.
- Score: 0.612477318852572
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work describes a novel data-driven latent space inference framework built on paired autoencoders to handle observational inconsistencies when solving inverse problems. Our approach uses two autoencoders, one for the parameter space and one for the observation space, connected by learned mappings between the autoencoders' latent spaces. These mappings enable a surrogate for regularized inversion and optimization in low-dimensional, informative latent spaces. Our flexible framework can work with partial, noisy, or out-of-distribution data, all while maintaining consistency with the underlying physical models. The paired autoencoders first reconstruct corrupted data and then use the reconstructed data for parameter estimation, which produces more accurate reconstructions than paired autoencoders alone or end-to-end encoder-decoders of the same architecture, especially in scenarios with data inconsistencies. We demonstrate our approaches on two imaging examples in medical tomography and geophysical seismic-waveform inversion, but the described approaches are broadly applicable to a variety of inverse problems in scientific and engineering applications.
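A minimal PyTorch sketch may help fix ideas: two autoencoders, one per space, joined by learned latent-space maps. All module names, layer sizes, and the MLP parameterization below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_hidden, d_out):
    """Small fully connected block; depth and width are illustrative."""
    return nn.Sequential(
        nn.Linear(d_in, d_hidden), nn.ReLU(),
        nn.Linear(d_hidden, d_out),
    )

class PairedAutoencoders(nn.Module):
    """Two autoencoders, one for parameters m and one for observations d,
    joined by learned maps between their latent spaces (hypothetical names)."""

    def __init__(self, dim_m=4096, dim_d=2048, dim_zm=64, dim_zd=64, width=512):
        super().__init__()
        self.enc_m = mlp(dim_m, width, dim_zm)     # parameter encoder
        self.dec_m = mlp(dim_zm, width, dim_m)     # parameter decoder
        self.enc_d = mlp(dim_d, width, dim_zd)     # observation encoder
        self.dec_d = mlp(dim_zd, width, dim_d)     # observation decoder
        self.map_d2m = mlp(dim_zd, width, dim_zm)  # latent map: observations -> parameters
        self.map_m2d = mlp(dim_zm, width, dim_zd)  # latent map: parameters -> observations

    def invert(self, d):
        """Surrogate inversion: encode data, map latents, decode parameters."""
        return self.dec_m(self.map_d2m(self.enc_d(d)))

    def reconstruct_data(self, d):
        """Autoencode the observations (used to repair corrupted data)."""
        return self.dec_d(self.enc_d(d))
```

A plausible training objective, again an assumption consistent with the abstract, sums the two reconstruction losses with latent-consistency terms tying map_d2m(enc_d(d)) to enc_m(m) and map_m2d(enc_m(m)) to enc_d(d) on paired samples (m, d).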
Related papers
- Diffusion Autoencoders with Perceivers for Long, Irregular and Multimodal Astronomical Sequences [47.1547360356314]
We introduce the Diffusion Autoencoder with Perceivers (daep).
daep tokenizes heterogeneous measurements, compresses them with a Perceiver encoder, and reconstructs them with a Perceiver-IO diffusion decoder.
Across diverse spectroscopic and photometric astronomical datasets, daep achieves lower reconstruction errors, produces more discriminative latent spaces, and better preserves fine-scale structure.
arXiv Detail & Related papers (2025-10-23T14:21:01Z)
- Torsion in Persistent Homology and Neural Networks [0.0]
We show that torsion can be lost during encoding, altered in the latent space, and in many cases, not reconstructed by standard decoders.
Our findings reveal key limitations of field-based approaches and underline the need for architectures or loss terms that preserve torsional information.
arXiv Detail & Related papers (2025-06-03T16:29:06Z)
- Good Things Come in Pairs: Paired Autoencoders for Inverse Problems [0.0]
We focus on the paired autoencoder framework, which has proven to be a powerful tool for solving inverse problems in scientific computing.
We illustrate the advantages of this approach through numerical experiments, including seismic imaging (a nonlinear inverse problem) and classical inpainting (a linear one).
arXiv Detail & Related papers (2025-05-10T07:31:09Z)
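The "surrogate for regularized inversion" that both this entry and the main abstract mention can be made concrete as a small optimization loop in the latent space. The sketch below assumes the hypothetical PairedAutoencoders module from the earlier block; the data-misfit objective, the Tikhonov-style latent penalty, and the optimizer settings are all illustrative choices.

```python
import torch

def latent_inversion(model, d_obs, n_steps=200, lr=1e-2, alpha=1e-3):
    """Refine the parameter-space latent code by gradient descent.

    Starts from the learned latent map's prediction and adjusts it so the
    latent forward surrogate matches the observed data; alpha weights a
    Tikhonov-style penalty on the latent code (an illustrative choice).
    """
    z_m = model.map_d2m(model.enc_d(d_obs)).detach().requires_grad_(True)
    opt = torch.optim.Adam([z_m], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        d_pred = model.dec_d(model.map_m2d(z_m))  # latent-space forward surrogate
        misfit = ((d_pred - d_obs) ** 2).mean()
        loss = misfit + alpha * (z_m ** 2).mean()
        loss.backward()
        opt.step()
    return model.dec_m(z_m.detach())  # decoded parameter estimate
```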
- Decoder Decomposition for the Analysis of the Latent Space of Nonlinear Autoencoders With Wind-Tunnel Experimental Data [3.7960472831772765]
This paper proposes a method to aid the interpretability of autoencoders.
We propose the decoder decomposition, which is a post-processing method to connect the latent variables to the coherent structures of flows.
The ability to rank and select latent variables will help users design and interpret nonlinear autoencoders.
arXiv Detail & Related papers (2024-04-25T10:09:37Z)
- Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
arXiv Detail & Related papers (2024-02-19T18:06:02Z)
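As a rough illustration of the mixture idea summarized above, the snippet below mixes two independently encoded utterances and scores candidates against the result. The mean-pooling mixture and cosine scoring are assumptions made for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def utterance_mixture(enc_a: torch.Tensor, enc_b: torch.Tensor) -> torch.Tensor:
    """Combine two independently encoded utterances into one distributed
    context representation; a simple mean is assumed here."""
    return F.normalize((enc_a + enc_b) / 2, dim=-1)

def score_candidates(context: torch.Tensor, candidates: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between the mixed context and candidate encodings."""
    return F.normalize(candidates, dim=-1) @ context
```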
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Dataset Condensation with Latent Space Knowledge Factorization and Sharing [73.31614936678571]
We introduce a novel approach for solving the dataset condensation problem by exploiting the regularity in a given dataset.
Instead of condensing the dataset directly in the original input space, we assume a generative process of the dataset with a set of learnable codes.
We experimentally show that our method achieves new state-of-the-art records by significant margins on various benchmark datasets.
arXiv Detail & Related papers (2022-08-21T18:14:08Z)
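The factorization described above (learnable codes plus a shared generative decoder, rather than raw synthetic images) can be sketched as follows. The feature-mean matching loss is a deliberately simplified stand-in for the paper's actual condensation objective, and all sizes and names are hypothetical.

```python
import torch
import torch.nn as nn

n_codes, code_dim, img_dim = 100, 64, 784             # hypothetical sizes
codes = nn.Parameter(torch.randn(n_codes, code_dim))  # learnable latent codes
decoder = nn.Sequential(                              # shared generator
    nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, img_dim))
opt = torch.optim.Adam([codes, *decoder.parameters()], lr=1e-3)

def condensation_step(real_batch: torch.Tensor) -> torch.Tensor:
    """One optimization step. Matching feature means is only a stand-in for
    a real condensation objective such as gradient or distribution matching."""
    opt.zero_grad()
    synthetic = decoder(codes)                        # decode the condensed set
    loss = (synthetic.mean(0) - real_batch.mean(0)).pow(2).mean()
    loss.backward()
    opt.step()
    return loss.detach()
```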
- Symmetric Wasserstein Autoencoders [22.196642357767338]
We introduce a new family of generative autoencoders with a learnable prior, called Symmetric Wasserstein Autoencoders (SWAEs).
We propose to symmetrically match the joint distributions of the observed data and the latent representation induced by the encoder and the decoder.
We empirically show the superior performance of SWAEs over the state-of-the-art generative autoencoders in terms of classification, reconstruction, and generation.
arXiv Detail & Related papers (2021-06-24T13:56:02Z)
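The symmetric matching of joint distributions described above can be approximated sample-wise: draw (x, enc(x)) pairs from the data side and (dec(z), z) pairs from the prior side, then penalize their discrepancy. The RBF-kernel MMD below is only a tractable stand-in for the Wasserstein-type term; encoder, decoder, and z_prior are assumed callables and samples.

```python
import torch

def rbf_mmd(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased MMD^2 estimate with an RBF kernel between sample sets a and b."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

def symmetric_joint_loss(encoder, decoder, x, z_prior):
    """Match (x, enc(x)) against (dec(z), z), so the encoder and decoder
    shape a single shared joint distribution over data and latents."""
    z_q = encoder(x)               # data -> latent
    x_p = decoder(z_prior)         # prior sample -> data
    joint_enc = torch.cat([x, z_q], dim=1)
    joint_dec = torch.cat([x_p, z_prior], dim=1)
    return rbf_mmd(joint_enc, joint_dec)
```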
- Neural Distributed Source Coding [59.630059301226474]
We present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions.
We evaluate our method on multiple datasets and show that it can handle complex correlations and achieves state-of-the-art PSNR.
arXiv Detail & Related papers (2021-06-05T04:50:43Z)
- On the Encoder-Decoder Incompatibility in Variational Text Modeling and Beyond [82.18770740564642]
Variational autoencoders (VAEs) combine latent variables with amortized variational inference.
We observe the encoder-decoder incompatibility that leads to poor parameterizations of the data manifold.
We propose Coupled-VAE, which couples a VAE model with a deterministic autoencoder that shares the same structure.
arXiv Detail & Related papers (2020-04-20T10:34:10Z)
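The coupling described above, a VAE paired with a deterministic autoencoder of the same structure, can be sketched as a shared encoder trunk with two heads and two decoders. The squared-distance coupling term and all layer sizes below are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn as nn

class CoupledVAE(nn.Module):
    """A VAE and a deterministic autoencoder sharing one encoder trunk;
    layer sizes and the coupling penalty are illustrative assumptions."""

    def __init__(self, dim_x=784, dim_z=32, width=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(dim_x, width), nn.ReLU())
        self.to_mu = nn.Linear(width, dim_z)       # variational head
        self.to_logvar = nn.Linear(width, dim_z)
        self.to_z = nn.Linear(width, dim_z)        # deterministic head
        self.dec_vae = nn.Sequential(nn.Linear(dim_z, width), nn.ReLU(), nn.Linear(width, dim_x))
        self.dec_det = nn.Sequential(nn.Linear(dim_z, width), nn.ReLU(), nn.Linear(width, dim_x))

    def loss(self, x, beta=1.0, gamma=0.1):
        h = self.trunk(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        z_det = self.to_z(h)
        rec_vae = ((self.dec_vae(z) - x) ** 2).mean()
        rec_det = ((self.dec_det(z_det) - x) ** 2).mean()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
        couple = ((mu - z_det.detach()) ** 2).mean()          # assumed coupling term
        return rec_vae + rec_det + beta * kl + gamma * couple
```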
- Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)