Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing
- URL: http://arxiv.org/abs/2209.03268v1
- Date: Wed, 7 Sep 2022 16:18:50 GMT
- Title: Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing
- Authors: Iro Laina, Yuki M. Asano, Andrea Vedaldi
- Abstract summary: We investigate the problem of measuring interpretability of self-supervised representations.
We formulate the latter as estimating the mutual information between the representation and a space of manually labelled concepts.
We use our method to evaluate a large number of self-supervised representations, ranking them by interpretability.
- Score: 97.70862116338554
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised visual representation learning has recently attracted
significant research interest. While a common way to evaluate self-supervised
representations is through transfer to various downstream tasks, we instead
investigate the problem of measuring their interpretability, i.e. understanding
the semantics encoded in raw representations. We formulate the latter as
estimating the mutual information between the representation and a space of
manually labelled concepts. To quantify this we introduce a decoding
bottleneck: information must be captured by simple predictors, mapping concepts
to clusters in representation space. This approach, which we call reverse
linear probing, provides a single number sensitive to the semanticity of the
representation. This measure is also able to detect when the representation
contains combinations of concepts (e.g., "red apple") instead of just
individual attributes ("red" and "apple" independently). Finally, we propose to
use supervised classifiers to automatically label large datasets in order to
enrich the space of concepts used for probing. We use our method to evaluate a
large number of self-supervised representations, ranking them by
interpretability, highlighting the differences that emerge compared to the
standard evaluation with linear probes, and discussing several qualitative
insights. Code at: https://github.com/iro-cp/ssl-qrp
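To make the pipeline described in the abstract concrete, here is a minimal sketch of how such a score could be computed, assuming frozen encoder features and per-image concept labels are already available. The function name, the k-means quantizer, the logistic-regression probe, and the mutual-information estimator below are illustrative assumptions, not the authors' exact implementation; see the linked repository for that.

```python
# Minimal sketch of quantized reverse probing (illustrative, not the official code).
# Assumes `feats` is an (N, D) array of frozen self-supervised features and
# `concepts` is an (N,) array of integer concept labels for the same images.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import mutual_info_score

def reverse_probe_score(feats, concepts, n_clusters=256, seed=0):
    concepts = np.asarray(concepts)

    # 1) Quantize the representation space: each image gets a cluster id.
    clusters = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(feats)

    # 2) Reverse linear probe: a simple predictor maps concept labels
    #    (one-hot encoded) to cluster ids in representation space.
    #    (A proper evaluation would fit on a training split and score held-out data.)
    onehot = np.eye(concepts.max() + 1)[concepts]
    probe = LogisticRegression(max_iter=1000).fit(onehot, clusters)
    predicted = probe.predict(onehot)

    # 3) Score: empirical mutual information (in nats) between predicted and true
    #    cluster assignments -- one possible proxy for how much concept
    #    information the quantized representation captures.
    return mutual_info_score(clusters, predicted)
```

Ranking several encoders then amounts to extracting features with each model, computing this score, and sorting by it.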
Related papers
- Linking in Style: Understanding learned features in deep learning models [0.0]
Convolutional neural networks (CNNs) learn abstract features to perform object classification.
We propose an automatic method to visualize and systematically analyze learned features in CNNs.
arXiv Detail & Related papers (2024-09-25T12:28:48Z) - Semantics Meets Temporal Correspondence: Self-supervised Object-centric Learning in Videos [63.94040814459116]
Self-supervised methods have shown remarkable progress in learning high-level semantics and low-level temporal correspondence.
We propose a novel semantic-aware masked slot attention on top of the fused semantic features and correspondence maps.
We adopt semantic- and instance-level temporal consistency as self-supervision to encourage temporally coherent object-centric representations.
arXiv Detail & Related papers (2023-08-19T09:12:13Z) - Disentangling Multi-view Representations Beyond Inductive Bias [32.15900989696017]
We propose a novel multi-view representation disentangling method that ensures both interpretability and generalizability of the resulting representations.
Our experiments on four multi-view datasets demonstrate that our proposed method outperforms 12 comparison methods in terms of clustering and classification performance.
arXiv Detail & Related papers (2023-08-03T09:09:28Z) - Learning to Detect Instance-level Salient Objects Using Complementary Image Labels [55.049347205603304]
We present the first weakly-supervised approach to the salient instance detection problem.
We propose a novel weakly-supervised network with three branches: a Saliency Detection Branch leveraging class consistency information to locate candidate objects; a Boundary Detection Branch exploiting class discrepancy information to delineate object boundaries; and a Centroid Detection Branch using subitizing information to detect salient instance centroids.
arXiv Detail & Related papers (2021-11-19T10:15:22Z) - Scribble-Supervised Semantic Segmentation by Random Walk on Neural Representation and Self-Supervision on Neural Eigenspace [10.603823180750446]
This work aims to achieve semantic segmentation supervised directly by scribble labels, without auxiliary information or other intermediate manipulation.
We impose diffusion on the neural representation via random walk and consistency on the neural eigenspace via self-supervision.
The results demonstrate the superiority of the proposed method, which is even comparable to some fully supervised ones.
arXiv Detail & Related papers (2020-11-11T08:22:25Z) - Learning and Evaluating Representations for Deep One-class Classification [59.095144932794646]
We present a two-stage framework for deep one-class classification.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
In experiments, we demonstrate state-of-the-art performance on visual domain one-class classification benchmarks. (A minimal sketch of this two-stage recipe appears after this list.)
arXiv Detail & Related papers (2020-11-04T23:33:41Z) - Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning [91.58529629419135]
We consider how to characterise visual groupings discovered automatically by deep neural networks.
We introduce two concepts, visual learnability and describability, that can be used to quantify the interpretability of arbitrary image groupings.
arXiv Detail & Related papers (2020-10-27T18:41:49Z) - Predicting What You Already Know Helps: Provable Self-Supervised Learning [60.27658820909876]
Self-supervised representation learning solves auxiliary prediction tasks (known as pretext tasks) without requiring labeled data.
We show a mechanism exploiting the statistical connections between certain reconstruction-based pretext tasks that guarantees learning a good representation.
We prove that the linear layer yields a small approximation error even for complex ground-truth function classes.
arXiv Detail & Related papers (2020-08-03T17:56:13Z)
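For the two-stage one-class classification entry above, a minimal sketch of that recipe is given here, assuming stage one (self-supervised feature extraction with a frozen encoder) has already produced feature arrays. The OneClassSVM detector and all names below are illustrative assumptions rather than the cited paper's actual components.

```python
# Minimal sketch of the two-stage one-class recipe: stage 1 (self-supervised
# feature learning) is assumed done; stage 2 fits a shallow one-class detector.
# `feats_train` holds features of normal (one-class) training images,
# `feats_test` holds features of images to score. Illustrative only.
from sklearn.preprocessing import normalize
from sklearn.svm import OneClassSVM

def one_class_scores(feats_train, feats_test, nu=0.1):
    # L2-normalise the features, then fit the one-class classifier on them.
    detector = OneClassSVM(kernel="rbf", nu=nu).fit(normalize(feats_train))
    # Larger values mean "more normal" (inlier-like); threshold as needed.
    return detector.decision_function(normalize(feats_test))
```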
This list is automatically generated from the titles and abstracts of the papers on this site.