Decoder Decomposition for the Analysis of the Latent Space of Nonlinear Autoencoders With Wind-Tunnel Experimental Data
- URL: http://arxiv.org/abs/2404.19660v1
- Date: Thu, 25 Apr 2024 10:09:37 GMT
- Title: Decoder Decomposition for the Analysis of the Latent Space of Nonlinear Autoencoders With Wind-Tunnel Experimental Data
- Authors: Yaxin Mo, Tullio Traverso, Luca Magri
- Abstract summary: The goal of this paper is to propose a method to aid the interpretability of autoencoders.
We propose the decoder decomposition, which is a post-processing method to connect the latent variables to the coherent structures of flows.
The ability to rank and select latent variables will help users design and interpret nonlinear autoencoders.
- Score: 3.7960472831772765
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Turbulent flows are chaotic and multi-scale dynamical systems with large numbers of degrees of freedom. Turbulent flows, however, can be modelled with a smaller number of degrees of freedom when using the appropriate coordinate system, which is the goal of dimensionality reduction via nonlinear autoencoders. Autoencoders are expressive tools, but they are difficult to interpret. The goal of this paper is to propose a method, the decoder decomposition, to aid the interpretability of autoencoders. First, we propose the decoder decomposition, a post-processing method that connects the latent variables to the coherent structures of flows. Second, we apply the decoder decomposition to analyse the latent space of synthetic data of a two-dimensional unsteady wake past a cylinder. We find that the dimension of the latent space has a significant impact on the interpretability of autoencoders, and we identify the physical and spurious latent variables. Third, we apply the decoder decomposition to the latent space of wind-tunnel experimental data of a three-dimensional turbulent wake past a bluff body. We show that the reconstruction error is a function of both the latent space dimension and the decoder size, which are correlated. Finally, we apply the decoder decomposition to rank and select latent variables based on the coherent structures that they represent. This is useful to filter out unwanted or spurious latent variables, or to pinpoint specific coherent structures of interest. The ability to rank and select latent variables will help users design and interpret nonlinear autoencoders.
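The decoder decomposition is only described at a high level in the abstract. The following is a minimal sketch of one plausible reading of the idea: decode each latent variable in isolation (the others held at a reference value), interpret the resulting field as that variable's coherent structure, and rank the latent variables by the energy of those fields. The toy `decode` network, the zero reference, and the energy-based ranking are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy stand-in for a trained nonlinear decoder: maps a latent vector z of size
# n_latent to a flow snapshot of size n_points. In practice this would be the
# decoder half of a trained autoencoder (e.g. a CNN); a random tanh network is
# used here only so the sketch runs end to end.
rng = np.random.default_rng(0)
n_latent, n_hidden, n_points = 4, 32, 256
W1 = rng.standard_normal((n_hidden, n_latent))
W2 = rng.standard_normal((n_points, n_hidden))

def decode(z):
    """Nonlinear decoder g(z) -> reconstructed snapshot."""
    return W2 @ np.tanh(W1 @ z)

# Latent coordinates of a set of snapshots (here: random; in practice, the
# encoder output for the flow data).
Z = rng.standard_normal((100, n_latent))

# Decoder decomposition (as described in the abstract, up to implementation
# details): isolate the contribution of each latent variable by decoding it
# alone, with the remaining latent variables held at a reference value (zero).
reference = decode(np.zeros(n_latent))
contributions = np.zeros((n_latent, Z.shape[0], n_points))
for i in range(n_latent):
    for k, z in enumerate(Z):
        z_i = np.zeros(n_latent)
        z_i[i] = z[i]
        contributions[i, k] = decode(z_i) - reference

# Rank latent variables by the energy of the structures they decode to;
# low-energy or unphysical variables can then be filtered out.
energy = np.mean(np.sum(contributions**2, axis=-1), axis=-1)
ranking = np.argsort(energy)[::-1]
print("latent variables ranked by decoded energy:", ranking)
```

In practice, `decode` would be the decoder of the trained autoencoder and `Z` the encoder output for the snapshots; the ranked list can then be used to filter spurious latent variables or to pinpoint specific coherent structures of interest, as described in the abstract.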
Related papers
- Rank Reduction Autoencoders -- Enhancing interpolation on nonlinear manifolds [3.180674374101366]
Rank Reduction Autoencoder (RRAE) is an autoencoder with an enlarged latent space.
Two formulations are presented, a strong and a weak one, that build a reduced basis accurately representing the latent space.
We demonstrate the efficiency of our formulations by applying them to different tasks and comparing the results with other autoencoders.
arXiv Detail & Related papers (2024-05-22T20:33:09Z) - Triple-Encoders: Representations That Fire Together, Wire Together [51.15206713482718]
Contrastive Learning is a representation learning method that encodes relative distances between utterances into the embedding space via a bi-encoder.
This study introduces triple-encoders, which efficiently compute distributed utterance mixtures from these independently encoded utterances.
We find that triple-encoders lead to a substantial improvement over bi-encoders, and even to better zero-shot generalization than single-vector representation models.
arXiv Detail & Related papers (2024-02-19T18:06:02Z) - Differentiable VQ-VAE's for Robust White Matter Streamline Encodings [33.936125620525]
Autoencoders have been proposed as a dimensionality-reduction tool to simplify the analysis of streamlines in a low-dimensional latent space.
We propose a novel Differentiable Vector Quantized Variational Autoencoder, which ingests entire bundles of streamlines as a single data point.
arXiv Detail & Related papers (2023-11-10T17:59:43Z) - Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z) - Think Twice before Driving: Towards Scalable Decoders for End-to-End Autonomous Driving [74.28510044056706]
Existing methods usually adopt the decoupled encoder-decoder paradigm.
In this work, we aim to alleviate the problem by two principles.
We first predict a coarse-grained future position and action based on the encoder features.
Then, conditioned on the position and action, the future scene is imagined to check the ramifications of driving accordingly.
arXiv Detail & Related papers (2023-05-10T15:22:02Z) - Benign Autoencoders [0.0]
We formalize the problem of finding the optimal encoder-decoder pair and characterize its solution, which we name the "benign autoencoder" (BAE).
We prove that BAE projects data onto a manifold whose dimension is the optimal compressibility dimension of the generative problem.
As an illustration, we show how BAE can find optimal, low-dimensional latent representations that improve the performance of a discriminator under a distribution shift.
arXiv Detail & Related papers (2022-10-02T21:36:27Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation; a generic penalty of this kind is sketched after this list.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Neural Distributed Source Coding [59.630059301226474]
We present a framework for lossy DSC that is agnostic to the correlation structure and can scale to high dimensions.
We evaluate our method on multiple datasets and show that it handles complex correlations and achieves state-of-the-art PSNR.
arXiv Detail & Related papers (2021-06-05T04:50:43Z) - Correcting spanning errors with a fractal code [7.6146285961466]
We propose an efficient decoder for the "Fibonacci code", a two-dimensional classical code that mimics the fractal nature of the cubic code.
We perform numerical experiments that show our decoder is robust to one-dimensional, correlated errors.
arXiv Detail & Related papers (2020-02-26T19:00:06Z) - Learning Autoencoders with Relational Regularization [89.53065887608088]
A new framework is proposed for learning autoencoders of data distributions.
We minimize the discrepancy between the model and target distributions with a relational regularization.
We implement the framework with two scalable algorithms, making it applicable for both probabilistic and deterministic autoencoders.
arXiv Detail & Related papers (2020-02-07T17:27:30Z)
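The "Reducing Redundancy in the Bottleneck Representation of the Autoencoders" entry above proposes penalising feature redundancies in the bottleneck. As a rough illustration of what such a penalty can look like, the sketch below uses a generic decorrelation loss on the off-diagonal entries of the batch correlation matrix of the bottleneck features; this is an assumed stand-in, not the cited paper's actual formulation.

```python
import numpy as np

def redundancy_penalty(bottleneck):
    """Illustrative redundancy penalty on a batch of bottleneck features.

    `bottleneck` has shape (batch, n_features). The penalty is the sum of
    squared off-diagonal entries of the feature correlation matrix, which
    vanishes when the bottleneck features are pairwise decorrelated. This is a
    generic decorrelation loss used as a stand-in for the paper's scheme.
    """
    z = bottleneck - bottleneck.mean(axis=0, keepdims=True)
    z = z / (z.std(axis=0, keepdims=True) + 1e-8)
    corr = (z.T @ z) / z.shape[0]
    off_diag = corr - np.diag(np.diag(corr))
    return np.sum(off_diag**2)

# Example: a highly redundant bottleneck (duplicated features) scores much
# higher than an independent one.
rng = np.random.default_rng(0)
independent = rng.standard_normal((128, 8))
redundant = np.repeat(rng.standard_normal((128, 4)), 2, axis=1)
print(redundancy_penalty(independent), redundancy_penalty(redundant))
```

In a training loop, such a term would be added to the reconstruction loss with a weighting coefficient, encouraging each bottleneck feature to carry non-redundant information.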