Interpretable Spectral Variational AutoEncoder (ISVAE) for time series
clustering
- URL: http://arxiv.org/abs/2310.11940v1
- Date: Wed, 18 Oct 2023 13:06:05 GMT
- Title: Interpretable Spectral Variational AutoEncoder (ISVAE) for time series
clustering
- Authors: Óscar Jiménez Rama, Fernando Moreno-Pino, David Ramírez, Pablo M. Olmos
- Abstract summary: We introduce a novel model that incorporates an interpretable bottleneck, termed the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE).
This arrangement compels the VAE to attend to the most informative segments of the input signal.
By deliberately constraining the VAE with this FB, we promote the development of an encoding that is discernible, separable, and of reduced dimensionality.
- Score: 48.0650332513417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The best encoding is the one that is interpretable in nature. In this work,
we introduce a novel model that incorporates an interpretable bottleneck, termed
the Filter Bank (FB), at the outset of a Variational Autoencoder (VAE). This
arrangement compels the VAE to attend to the most informative segments of the
input signal, fostering the learning of a novel encoding ${f_0}$ which boasts
enhanced interpretability and clusterability over traditional latent spaces. By
deliberately constraining the VAE with this FB, we intentionally constrict its
capacity to access broad input domain information, promoting the development of
an encoding that is discernible, separable, and of reduced dimensionality. The
evolutionary learning trajectory of ${f_0}$ further manifests as a dynamic
hierarchical tree, offering profound insights into cluster similarities.
Additionally, for handling intricate data configurations, we propose a tailored
decoder structure that is symmetrically aligned with FB's architecture.
Empirical evaluations highlight the superior efficacy of ISVAE, which compares
favorably to state-of-the-art results in clustering metrics across real-world
datasets.
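As an illustration of the idea, here is a minimal sketch (not the authors' code) of a Filter Bank bottleneck placed before a VAE encoder: a small network proposes per-sample filter center frequencies, which act as the interpretable encoding $f_0$, and the VAE only sees the spectral segments those filters select. The filter count, Gaussian band shapes, and all layer sizes are illustrative assumptions; the exact FB design and the symmetric decoder are described in the paper.

```python
# Minimal sketch (assumptions noted above) of a Filter Bank bottleneck before a VAE.
import torch
import torch.nn as nn


class FilterBankVAE(nn.Module):
    def __init__(self, signal_len=256, n_filters=4, latent_dim=8):
        super().__init__()
        self.n_freq = signal_len // 2 + 1          # number of rFFT bins
        # Small network that proposes one normalized center frequency per filter;
        # its output plays the role of the interpretable encoding f_0.
        self.f0_net = nn.Sequential(
            nn.Linear(self.n_freq, 64), nn.ReLU(),
            nn.Linear(64, n_filters), nn.Sigmoid(),
        )
        self.log_bw = nn.Parameter(torch.zeros(n_filters))   # learnable bandwidths
        # Standard VAE encoder/decoder operating on the filtered spectrum.
        self.enc = nn.Linear(n_filters * self.n_freq, 2 * latent_dim)  # -> (mu, logvar)
        self.dec = nn.Linear(latent_dim, signal_len)

    def forward(self, x):                                     # x: (B, signal_len)
        spec = torch.fft.rfft(x).abs()                        # (B, n_freq) magnitude spectrum
        f0 = self.f0_net(spec)                                # (B, K) interpretable encoding
        grid = torch.linspace(0, 1, self.n_freq, device=x.device)
        bw = self.log_bw.exp().view(1, -1, 1)
        # Gaussian band-pass responses centered at f0: the VAE only "sees"
        # the spectral segments the filter bank selects.
        bands = torch.exp(-((grid.view(1, 1, -1) - f0.unsqueeze(-1)) ** 2) / (2 * bw ** 2))
        filtered = (bands * spec.unsqueeze(1)).flatten(1)     # (B, K * n_freq)
        mu, logvar = self.enc(filtered).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar, f0
```

In this sketch, clustering would be run directly on the $f_0$ vectors rather than on the VAE latent $z$, mirroring the paper's emphasis on the interpretability and clusterability of the FB encoding.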
Related papers
- Gaussian Mixture Vector Quantization with Aggregated Categorical Posterior [5.862123282894087]
We introduce the Vector Quantized Variational Autoencoder (VQ-VAE), a type of variational autoencoder that uses discrete embeddings as latents (a generic sketch of this quantization step appears after this list).
We show that GM-VQ improves codebook utilization and reduces information loss without relying on handcrafted heuristics.
arXiv Detail & Related papers (2024-10-14T05:58:11Z) - Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational
AutoEncoders [5.037881619912574]
In this paper, we investigate latent space separation methods for structural syntactic injection in Transformer-based VAEs.
Specifically, we explore how syntactic structures can be leveraged in the encoding stage through the integration of graph-based and sequential models.
Our empirical evaluation, carried out on natural language sentences and mathematical expressions, reveals that the proposed end-to-end VAE architecture can result in a better overall organisation of the latent space.
arXiv Detail & Related papers (2023-11-14T22:47:23Z) - Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z) - Information-Ordered Bottlenecks for Adaptive Semantic Compression [0.0]
We present a neural layer designed to adaptively compress data into variables ordered by likelihood.
We show that IOBs achieve near-optimal compression for a given architecture and can assign encoding signals in a manner that is semantically meaningful.
We introduce a novel theory for estimating global dimensionality with IOBs and show that they recover SOTA dimensionality estimates for complex synthetic data.
arXiv Detail & Related papers (2023-05-18T18:00:00Z) - Vector Quantized Wasserstein Auto-Encoder [57.29764749855623]
We study learning deep discrete representations from the generative viewpoint.
We endow discrete distributions over sequences of codewords and learn a deterministic decoder that transports the distribution over the sequences of codewords to the data distribution.
We develop further theories to connect it with the clustering viewpoint of WS distance, allowing us to have a better and more controllable clustering solution.
arXiv Detail & Related papers (2023-02-12T13:51:36Z) - Exploring and Exploiting Multi-Granularity Representations for Machine
Reading Comprehension [13.191437539419681]
We propose a novel approach called Adaptive Bidirectional Attention-Capsule Network (ABA-Net)
ABA-Net adaptively feeds source representations of different levels to the predictor.
We set a new state-of-the-art performance on the SQuAD 1.0 dataset.
arXiv Detail & Related papers (2022-08-18T10:14:32Z) - Deep clustering with fusion autoencoder [0.0]
Deep clustering (DC) models capitalize on autoencoders to learn intrinsic features which in turn facilitate the clustering process.
In this paper, a novel DC method is proposed to address this issue. Specifically, the generative adversarial network and VAE are coalesced into a new autoencoder called the fusion autoencoder (FAE).
arXiv Detail & Related papers (2022-01-11T07:38:03Z) - Hierarchical Variational Autoencoder for Visual Counterfactuals [79.86967775454316]
Conditional Variational Autoencoders (VAEs) are gathering significant attention as an Explainable Artificial Intelligence (XAI) tool.
In this paper we show how relaxing the effect of the posterior leads to successful counterfactuals.
We introduce VAEX, a Hierarchical VAE designed for this approach, which can visually audit a classifier in applications.
arXiv Detail & Related papers (2021-02-01T14:07:11Z) - Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour on the learned representations and also the consequences of fixing it by introducing a notion of self consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z) - Improve Variational Autoencoder for Text Generation with Discrete Latent
Bottleneck [52.08901549360262]
Variational autoencoders (VAEs) are essential tools in end-to-end representation learning.
VAEs tend to ignore latent variables with a strong auto-regressive decoder.
We propose a principled approach to enforce an implicit latent feature matching in a more compact latent space.
arXiv Detail & Related papers (2020-04-22T14:41:37Z)
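Several of the papers above (VQ-VAE, GM-VQ, VQ-WAE, latent quantization) share the "discrete embedding as latent" mechanism. The following is a generic vector-quantization sketch, not taken from any of these papers; the codebook size, dimensions, and use of the straight-through gradient trick are illustrative assumptions about the common pattern.

```python
# Generic vector-quantization sketch: each continuous latent is snapped to its
# nearest codebook entry, yielding a discrete embedding as the latent code.
import torch

def quantize(z, codebook):
    """z: (B, D) continuous latents; codebook: (K, D) learnable codewords."""
    d = torch.cdist(z, codebook)                  # (B, K) pairwise distances
    idx = d.argmin(dim=-1)                        # nearest codeword index per latent
    z_q = codebook[idx]                           # (B, D) quantized latents
    # Straight-through estimator: gradients flow to z as if quantization were identity.
    return z + (z_q - z).detach(), idx

# Usage example with illustrative sizes (K=512 codewords of dimension 64).
codebook = torch.randn(512, 64, requires_grad=True)
z = torch.randn(8, 64, requires_grad=True)
z_q, idx = quantize(z, codebook)
```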