Pointwise Representational Similarity
- URL: http://arxiv.org/abs/2305.19294v1
- Date: Tue, 30 May 2023 09:40:08 GMT
- Title: Pointwise Representational Similarity
- Authors: Camila Kolling, Till Speicher, Vedant Nanda, Mariya Toneva, Krishna P.
Gummadi
- Abstract summary: Pointwise Normalized Kernel Alignment (PNKA) is a measure that quantifies how similarly an individual input is represented in two representation spaces.
We show how PNKA can be leveraged to develop a deeper understanding of (a) the input examples that are likely to be misclassified, (b) the concepts encoded by (individual) neurons in a layer, and (c) the effects of fairness interventions on learned representations.
- Score: 14.22332335495585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing reliance on deep neural networks, it is important to
develop ways to better understand their learned representations. Representation
similarity measures have emerged as a popular tool for examining learned
representations. However, existing measures only provide aggregate estimates of
similarity at a global level, i.e. over a set of representations for N input
examples. As such, these measures are not well-suited for investigating
representations at a local level, i.e. representations of a single input
example. Local similarity measures are needed, for instance, to understand
which individual input representations are affected by training interventions
to models (e.g. to be more fair and unbiased) or are at greater risk of being
misclassified. In this work, we fill in this gap and propose Pointwise
Normalized Kernel Alignment (PNKA), a measure that quantifies how similarly an
individual input is represented in two representation spaces. Intuitively, PNKA
compares the similarity of an input's neighborhoods across both spaces. Using
our measure, we are able to analyze properties of learned representations at a
finer granularity than what was previously possible. Concretely, we show how
PNKA can be leveraged to develop a deeper understanding of (a) the input
examples that are likely to be misclassified, (b) the concepts encoded by
(individual) neurons in a layer, and (c) the effects of fairness interventions
on learned representations.
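A minimal sketch of the pointwise comparison described above, assuming cosine-similarity kernels over centered representations (an illustration, not the authors' reference implementation):
```python
import numpy as np

def pnka(X, Y):
    """Pointwise similarity of two representation spaces.

    X: (N, d1) and Y: (N, d2) hold representations of the same N inputs.
    Returns an (N,) vector; entry i compares how input i relates to all
    N inputs (its "neighborhood") in space X versus space Y.
    """
    def cosine_kernel(Z):
        Z = Z - Z.mean(axis=0)                            # center representations
        Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)  # unit-normalize rows
        return Z @ Z.T                                    # (N, N) cosine similarities

    K, L = cosine_kernel(X), cosine_kernel(Y)
    # Cosine similarity between input i's similarity profiles K[i] and L[i].
    num = (K * L).sum(axis=1)
    den = np.linalg.norm(K, axis=1) * np.linalg.norm(L, axis=1)
    return num / den
```
Inputs scoring low are represented differently by the two models, e.g., candidates for disagreement or misclassification.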
Related papers
- ALVIN: Active Learning Via INterpolation [44.410677121415695]
Active Learning Via INterpolation (ALVIN) conducts intra-class interpolations between examples from under-represented and well-represented groups.
ALVIN identifies informative examples exposing the model to regions of the representation space that counteract the influence of shortcuts.
Experimental results on six datasets encompassing sentiment analysis, natural language inference, and paraphrase detection demonstrate that ALVIN outperforms state-of-the-art active learning methods.
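A hedged sketch of the interpolate-then-acquire idea as summarized above; the mixup-style anchors and nearest-anchor acquisition rule are assumptions, not ALVIN's exact procedure:
```python
import numpy as np

def select_informative(pool_repr, under_repr, well_repr, budget,
                       alphas=(0.25, 0.5, 0.75)):
    """Acquire unlabeled examples near interpolations between same-class
    anchors from under-represented and well-represented groups.

    pool_repr: (P, d) unlabeled-pool representations.
    under_repr: (U, d), well_repr: (W, d) labeled same-class representations.
    """
    d = under_repr.shape[1]
    # Anchors on the segments connecting the two groups (mixup-style).
    anchors = np.concatenate([
        a * under_repr[:, None, :] + (1 - a) * well_repr[None, :, :]
        for a in alphas
    ]).reshape(-1, d)
    # Score pool examples by distance to the nearest anchor; acquire closest.
    dists = np.linalg.norm(pool_repr[:, None, :] - anchors[None, :, :], axis=-1)
    return np.argsort(dists.min(axis=1))[:budget]
```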
arXiv Detail & Related papers (2024-10-11T16:44:39Z)
- Weighted Point Cloud Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric [44.95433989446052]
We show the benefit of our proposed method through a new understanding of the contrastive loss of CLIP.
We show that our proposed similarity measure based on weighted point clouds consistently achieves the optimal similarity.
arXiv Detail & Related papers (2024-04-30T03:15:04Z)
- Towards out-of-distribution generalization in large-scale astronomical surveys: robust networks learn similar representations [3.653721769378018]
We use Centered Kernel Alignment (CKA), a similarity metric for neural network representations, to examine the relationship between representation similarity and performance.
We find that when models are robust to a distribution shift, they produce substantially similar representations across their layers on OOD data.
We discuss the potential application of representation similarity in guiding model design and training strategy, and in mitigating the OOD problem by incorporating CKA as an inductive bias during training.
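For context, linear CKA, the aggregate (global) measure used here, reduces to a few lines; this is the standard formulation from the CKA literature, not code from this paper:
```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two representation sets.

    X: (N, d1) and Y: (N, d2) for the same N inputs.
    Returns one score in [0, 1]: 1 means the spaces agree up to rotation
    and isotropic scaling; unlike PNKA, all N inputs share the score.
    """
    X = X - X.mean(axis=0)   # center each feature dimension
    Y = Y - Y.mean(axis=0)
    # ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, ord="fro") *
                    np.linalg.norm(Y.T @ Y, ord="fro"))
```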
arXiv Detail & Related papers (2023-11-29T19:00:05Z)
- Beyond Instance Discrimination: Relation-aware Contrastive Self-supervised Learning [75.46664770669949]
We present relation-aware contrastive self-supervised learning (ReCo) to integrate instance relations.
Our ReCo consistently gains remarkable performance improvements.
arXiv Detail & Related papers (2022-11-02T03:25:28Z)
- Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing [97.70862116338554]
We investigate the problem of measuring the interpretability of self-supervised representations.
We formulate the latter as estimating the mutual information between the representation and a space of manually labelled concepts.
We use our method to evaluate a large number of self-supervised representations, ranking them by interpretability.
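A hedged illustration of the quantize-then-measure idea: discretize the representation with a codebook and estimate mutual information against concept labels. The k-means codebook and scikit-learn helpers are assumptions, not the paper's exact pipeline:
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

def quantized_mi(representations, concept_labels, n_codes=128, seed=0):
    """Estimate MI between a continuous representation and discrete
    concept labels by first quantizing the representation.

    representations: (N, d) features from a self-supervised encoder.
    concept_labels: (N,) manually labelled concept ids.
    """
    codes = KMeans(n_clusters=n_codes, random_state=seed,
                   n_init=10).fit_predict(representations)
    # MI (in nats) between code assignments and concept labels.
    return mutual_info_score(concept_labels, codes)
```
Higher scores rank a representation as more interpretable with respect to the labelled concept space.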
arXiv Detail & Related papers (2022-09-07T16:18:50Z)
- Not All Instances Contribute Equally: Instance-adaptive Class Representation Learning for Few-Shot Visual Recognition [94.04041301504567]
Few-shot visual recognition refers to recognizing novel visual concepts from a few labeled instances.
We propose a novel metric-based meta-learning framework termed instance-adaptive class representation learning network (ICRL-Net) for few-shot visual recognition.
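A minimal sketch of instance-adaptive class representations: weight each support instance before averaging into the class prototype, instead of taking a uniform mean. The linear scoring head is an assumption, not ICRL-Net's actual architecture:
```python
import torch
import torch.nn as nn

class AdaptivePrototype(nn.Module):
    """Form a class representation as a learned weighted average of
    the few support-instance embeddings for that class."""

    def __init__(self, d):
        super().__init__()
        self.score = nn.Linear(d, 1)  # assumed per-instance scoring head

    def forward(self, support):       # support: (k, d) one class's embeddings
        w = torch.softmax(self.score(support), dim=0)  # (k, 1) instance weights
        return (w * support).sum(dim=0)                # (d,) class prototype
```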
arXiv Detail & Related papers (2022-09-07T10:00:18Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
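A minimal sketch of such a label-aware contrastive objective (a generic supervised contrastive loss; the temperature and masking details are assumptions, not necessarily this paper's formulation):
```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """Pull together representations that share a class label.

    z: (N, d) representations; labels: (N,) integer class ids.
    """
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                                # (N, N) similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye  # same-label pairs
    # Log-softmax over all other instances, averaged over positive pairs.
    log_prob = sim - torch.logsumexp(sim.masked_fill(eye, float("-inf")),
                                     dim=1, keepdim=True)
    loss = -(log_prob * pos).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return loss.mean()
```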
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Instance Similarity Learning for Unsupervised Feature Representation [83.31011038813459]
We propose an instance similarity learning (ISL) method for unsupervised feature representation.
We employ Generative Adversarial Networks (GANs) to mine the underlying feature manifold.
Experiments on image classification demonstrate the superiority of our method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-05T16:42:06Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, which go beyond plain similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
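A hedged sketch of the prototype idea: classify by similarity to learned prototypes, explain a prediction via the most similar prototype, and flag inputs far from every prototype as outliers. The Gaussian kernel and threshold are assumptions:
```python
import torch

def prototype_predict(x_embed, prototypes, proto_labels, n_classes,
                      gamma=1.0, tau=0.5):
    """x_embed: (N, d) student embeddings; prototypes: (P, d);
    proto_labels: (P,) long tensor of each prototype's class."""
    # Gaussian similarity kernel between inputs and prototypes.
    sim = torch.exp(-gamma * torch.cdist(x_embed, prototypes) ** 2)  # (N, P)
    # Aggregate prototype similarities into per-class scores.
    scores = torch.zeros(x_embed.size(0), n_classes, device=x_embed.device)
    scores.index_add_(1, proto_labels, sim)
    nearest = sim.argmax(dim=1)            # explanation: most similar prototype
    outlier = sim.max(dim=1).values < tau  # far from every prototype
    return scores, nearest, outlier
```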
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.