Augment to Interpret: Unsupervised and Inherently Interpretable Graph
Embeddings
- URL: http://arxiv.org/abs/2309.16564v1
- Date: Thu, 28 Sep 2023 16:21:40 GMT
- Title: Augment to Interpret: Unsupervised and Inherently Interpretable Graph
Embeddings
- Authors: Gregory Scafarto and Madalina Ciortan and Simon Tihon and Quentin
Ferre
- Abstract summary: In this paper, we study graph representation learning and we show that data augmentation that preserves semantics can be learned and used to produce interpretations.
Our framework, which we named INGENIOUS, creates inherently interpretable embeddings and eliminates the need for costly additional post-hoc analysis.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised learning allows us to leverage unlabelled data, which has become
abundantly available, and to create embeddings that are usable on a variety of
downstream tasks. However, the typical lack of interpretability of unsupervised
representation learning has become a limiting factor with regard to recent
transparent-AI regulations. In this paper, we study graph representation
learning and we show that data augmentation that preserves semantics can be
learned and used to produce interpretations. Our framework, which we named
INGENIOUS, creates inherently interpretable embeddings and eliminates the need
for costly additional post-hoc analysis. We also introduce additional metrics
addressing the lack of formalism and metrics in the understudied area of
unsupervised representation learning interpretability. Our results are
supported by an experimental study applied to both graph-level and node-level
tasks and show that interpretable embeddings provide state-of-the-art
performance on subsequent downstream tasks.
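As a concrete illustration of the abstract's central idea, the sketch below pairs a learnable edge mask with a standard contrastive loss: the mask produces the semantics-preserving augmented view, and the same mask is read off as the interpretation. This is a hypothetical reconstruction under our own assumptions (dense adjacency, SimCLR-style loss); the name `EdgeMaskAugmenter` and all details are illustrative, not taken from INGENIOUS.

```python
# Hypothetical sketch, not the authors' implementation: a learnable,
# semantics-preserving augmentation whose edge mask doubles as the
# interpretation of the embedding.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeMaskAugmenter(nn.Module):
    """Scores each edge of a dense adjacency matrix; low-scoring edges are
    down-weighted to form the augmented view, and the learned scores are
    read off as the interpretation."""
    def __init__(self, dim):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, h, adj):
        # h: (n, dim) node embeddings; adj: (n, n) 0/1 adjacency
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        keep_prob = torch.sigmoid(self.scorer(pairs).squeeze(-1))
        return keep_prob * adj  # soft augmented adjacency = interpretation

def nt_xent(z_orig, z_aug, tau=0.5):
    """SimCLR-style contrastive loss between embeddings of the original
    graph and embeddings of its masked (augmented) view."""
    z_orig = F.normalize(z_orig, dim=-1)
    z_aug = F.normalize(z_aug, dim=-1)
    logits = z_orig @ z_aug.t() / tau
    return F.cross_entropy(logits, torch.arange(z_orig.size(0)))
```

Training would contrast embeddings computed on `adj` against embeddings computed on the masked adjacency, so the edges the model learns it must keep are exactly the ones flagged as semantically important.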
Related papers
- Disentangled and Self-Explainable Node Representation Learning [1.4002424249260854]
We introduce DiSeNE, a framework that generates self-explainable embeddings in an unsupervised manner.
Our method employs disentangled representation learning to produce dimension-wise interpretable embeddings.
We formalize novel desiderata for disentangled and interpretable embeddings, which drive our new objective functions.
arXiv Detail & Related papers (2024-10-28T13:58:52Z)
- Understanding Self-Supervised Learning of Speech Representation via Invariance and Redundancy Reduction [0.45060992929802207]
Self-supervised learning (SSL) has emerged as a promising paradigm for learning flexible speech representations from unlabeled data.
This study provides an empirical analysis of Barlow Twins (BT), an SSL technique inspired by theories of redundancy reduction in human perception.
arXiv Detail & Related papers (2023-09-07T10:23:59Z)
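The Barlow Twins objective mentioned above is standard enough to sketch directly: it pushes the cross-correlation matrix between two embedded views of the same inputs toward the identity, aligning each dimension across views while decorrelating (de-redundifying) different dimensions. A minimal PyTorch rendering; the weight `lambd` and the epsilon are conventional choices, not values from this paper.

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Barlow Twins: drive the cross-correlation of two views to identity.
    z1, z2: (batch, dim) embeddings of two augmentations of the same batch."""
    n, d = z1.shape
    # Standardize each embedding dimension over the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.T @ z2) / n                                          # (dim, dim)
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # invariance
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy
    return on_diag + lambd * off_diag
```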
- Towards Explanation for Unsupervised Graph-Level Representation Learning [108.31036962735911]
Existing explanation methods focus on supervised settings, e.g., node classification and graph classification, while explanation for unsupervised graph-level representation learning remains unexplored.
In this paper, we advance the Information Bottleneck (IB) principle to tackle the proposed explanation problem for unsupervised graph representations, which leads to a novel principle, Unsupervised Subgraph Information Bottleneck (USIB).
We also theoretically analyze the connection between graph representations and explanatory subgraphs on the label space, which reveals that the robustness of representations benefits the fidelity of explanatory subgraphs.
arXiv Detail & Related papers (2022-05-20T02:50:15Z)
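The USIB trade-off can be paraphrased compactly: an explanatory subgraph should preserve the information carried by the learned representation (informativeness) while staying small (compression). The sketch below stands in for the paper's estimator with two crude proxies, embedding similarity for the mutual-information term and mask sparsity for compression; it illustrates the principle only, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def usib_style_loss(z_full, z_sub, edge_mask, beta=0.1):
    """Information-bottleneck-style trade-off for explanatory subgraphs.
    z_full, z_sub: (batch, dim) embeddings of the full graph and subgraph;
    edge_mask: soft edge-selection probabilities in [0, 1]."""
    informativeness = 1 - F.cosine_similarity(z_full, z_sub, dim=-1).mean()
    compression = edge_mask.mean()  # sparser masks = stronger compression
    return informativeness + beta * compression
```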
- Explaining, Evaluating and Enhancing Neural Networks' Learned Representations [2.1485350418225244]
We show how explainability can be an aid, rather than an obstacle, towards better and more efficient representations.
We employ such attributions to define two novel scores for evaluating the informativeness and the disentanglement of latent embeddings.
We show that adopting our proposed scores as constraints during the training of a representation learning task improves the downstream performance of the model.
arXiv Detail & Related papers (2022-02-18T19:00:01Z)
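As a rough illustration of how attributions can score a latent space (the paper defines its own two scores; the helper below is a generic, hypothetical gradient-saliency version):

```python
import torch

def dimension_attributions(encoder, x):
    """Gradient attribution of each latent dimension w.r.t. one input.
    encoder: maps (1, input_dim) -> (1, latent_dim).
    Returns a (latent_dim, input_dim) matrix of absolute saliencies."""
    x = x.clone().requires_grad_(True)
    z = encoder(x)
    rows = []
    for j in range(z.size(1)):
        grad = torch.autograd.grad(z[0, j], x, retain_graph=True)[0]
        rows.append(grad.abs().flatten())
    return torch.stack(rows)
```

An informativeness-style score could then reward overall attribution mass, while a disentanglement-style score could reward rows that concentrate on distinct input features.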
- Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way of obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z)
- Rethinking the Representational Continuity: Towards Unsupervised Continual Learning [45.440192267157094]
Unsupervised continual learning (UCL) aims to learn a sequence of tasks without forgetting the previously acquired knowledge.
We show that reliance on annotated data is not necessary for continual learning.
We propose Lifelong Unsupervised Mixup (LUMP) to alleviate catastrophic forgetting for unsupervised representations.
arXiv Detail & Related papers (2021-10-13T18:38:06Z)
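LUMP's central mechanic is simple to state: interpolate current-task inputs with samples replayed from a buffer before applying the usual unsupervised loss, so the representation keeps rehearsing earlier tasks. A minimal paraphrase (buffer management and the surrounding SSL objective omitted):

```python
import torch

def lump_mixup(x_current, x_buffer, alpha=0.4):
    """Lifelong Unsupervised Mixup: blend current-task inputs with replayed
    samples from earlier tasks; the mix is fed to the unsupervised loss."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_current + (1 - lam) * x_buffer
```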
- Desiderata for Representation Learning: A Causal Perspective [104.3711759578494]
We take a causal perspective on representation learning, formalizing non-spuriousness and efficiency (in supervised representation learning) and disentanglement (in unsupervised representation learning).
This yields computable metrics that can be used to assess the degree to which representations satisfy the desiderata of interest and to learn non-spurious and disentangled representations from single observational datasets.
arXiv Detail & Related papers (2021-09-08T17:33:54Z)
- A Tutorial on Learning Disentangled Representations in the Imaging Domain [13.320565017546985]
Disentangled representation learning has been proposed as an approach to learning general representations.
A good general representation can be readily fine-tuned for new target tasks using modest amounts of data.
Disentangled representations can offer model explainability and can help us understand the underlying causal relations of the factors of variation.
arXiv Detail & Related papers (2021-08-26T21:44:10Z)
- Disambiguation of weak supervision with exponential convergence rates [88.99819200562784]
In weakly supervised learning, data are annotated with incomplete yet discriminative information.
In this paper, we focus on partial labelling, an instance of weak supervision where, from a given input, we are given a set of potential targets.
We propose an empirical disambiguation algorithm to recover full supervision from weak supervision.
arXiv Detail & Related papers (2021-02-04T18:14:32Z)
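A generic rendering of the disambiguation step (the paper's actual algorithm comes with exponential convergence guarantees; this is only the common alternating scheme): commit each example to the candidate label the current model scores highest, then refit on those commitments.

```python
import torch

def disambiguate(logits, candidate_mask):
    """One disambiguation step for partial labels: among each example's
    candidate set, pick the label the current model scores highest.
    logits: (batch, classes); candidate_mask: (batch, classes) bool."""
    masked = logits.masked_fill(~candidate_mask, float("-inf"))
    return masked.argmax(dim=1)
```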
- A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation [63.042651834453544]
We show that the unsupervised learning of disentangled representations is impossible without inductive biases on both the models and the data.
We observe that while the different methods successfully enforce properties "encouraged" by the corresponding losses, well-disentangled models seemingly cannot be identified without supervision.
Our results suggest that future work on disentanglement learning should be explicit about the role of inductive biases and (implicit) supervision.
arXiv Detail & Related papers (2020-10-27T10:17:15Z)
- A Commentary on the Unsupervised Learning of Disentangled Representations [63.042651834453544]
The goal of the unsupervised learning of disentangled representations is to separate the independent explanatory factors of variation in the data without access to supervision.
We discuss the theoretical result showing that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases.
arXiv Detail & Related papers (2020-07-28T13:13:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.