Visualizing and Understanding Contrastive Learning
- URL: http://arxiv.org/abs/2206.09753v3
- Date: Wed, 13 Dec 2023 20:02:00 GMT
- Title: Visualizing and Understanding Contrastive Learning
- Authors: Fawaz Sammani, Boris Joukovsky, Nikos Deligiannis
- Abstract summary: We design visual explanation methods that contribute towards understanding similarity learning tasks from pairs of images.
We also adapt existing metrics, used to evaluate visual explanations of image classification systems, to suit pairs of explanations.
- Score: 22.553990823550784
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning has revolutionized the field of computer vision,
learning rich representations from unlabeled data, which generalize well to
diverse vision tasks. Consequently, it has become increasingly important to
explain these approaches and understand their inner workings. Given
that contrastive models are trained with interdependent and interacting inputs
and aim to learn invariance through data augmentation, the existing methods for
explaining single-image systems (e.g., image classification models) are
inadequate as they fail to account for these factors and typically assume
independent inputs. Additionally, there is a lack of evaluation metrics
designed to assess pairs of explanations, and no analytical studies have been
conducted to investigate the effectiveness of different techniques used to
explain contrastive learning. In this work, we design visual explanation
methods that contribute towards understanding similarity learning tasks from
pairs of images. We further adapt existing metrics, used to evaluate visual
explanations of image classification systems, to suit pairs of explanations and
evaluate our proposed methods with these metrics. Finally, we present a
thorough analysis of visual explainability methods for contrastive learning,
establish their correlation with downstream tasks and demonstrate the potential
of our approaches to investigate their merits and drawbacks.
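The abstract does not spell out the concrete explanation methods, but the core idea of attributing a similarity score back to a pair of inputs can be illustrated with a short sketch. The snippet below computes plain input-gradient saliency maps for the cosine similarity between the embeddings of two images; the torchvision ResNet-50 backbone and the gradient-based attribution are illustrative assumptions, not the methods proposed in the paper.

```python
# Minimal sketch: gradient-based saliency for a *pair* of images under a
# contrastive (similarity) model. Encoder and attribution choices are
# illustrative assumptions, not the paper's proposed methods.
import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.resnet50(weights=None)   # stand-in for a contrastively pre-trained backbone
encoder.fc = torch.nn.Identity()          # expose the penultimate features as the representation
encoder.eval()

def pair_saliency(img_a: torch.Tensor, img_b: torch.Tensor):
    """Return one saliency map per image, attributing the cosine similarity
    of the pair's embeddings back to the input pixels."""
    img_a = img_a.clone().requires_grad_(True)   # (1, 3, H, W)
    img_b = img_b.clone().requires_grad_(True)
    z_a, z_b = encoder(img_a), encoder(img_b)
    sim = F.cosine_similarity(z_a, z_b, dim=-1).sum()
    sim.backward()
    # Collapse channel gradients into a single heatmap per image.
    sal_a = img_a.grad.abs().sum(dim=1, keepdim=True)
    sal_b = img_b.grad.abs().sum(dim=1, keepdim=True)
    return sal_a, sal_b

# Usage: two augmented views of the same image (random tensors as placeholders).
view_1, view_2 = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
heat_1, heat_2 = pair_saliency(view_1, view_2)
```

Note that, unlike a classification saliency map, the explanation here is inherently a pair: both heatmaps are driven by the same interaction term (the similarity), which is what makes single-image explanation methods a poor fit.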
Related papers
- A Critical Assessment of Interpretable and Explainable Machine Learning for Intrusion Detection [0.0]
We study the use of overly complex and opaque ML models, unaccounted-for data imbalances and correlated features, inconsistent influential features across different explanation methods, and the implausible utility of explanations.
Specifically, we advise avoiding complex opaque models such as Deep Neural Networks and instead using interpretable ML models such as Decision Trees.
We find that feature-based model explanations are most often inconsistent across different settings.
arXiv Detail & Related papers (2024-07-04T15:35:42Z) - Graphical Perception of Saliency-based Model Explanations [6.936466872687605]
We study the perception of model explanations, specifically, saliency-based explanations for visual recognition models.
Our findings show that factors related to visualization design decisions, the type of alignment, and qualities of the saliency map all play important roles in how humans perceive saliency-based visual explanations.
arXiv Detail & Related papers (2024-06-11T20:29:25Z) - CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we investigate the implications of combining explanations to uncover more coherent and reliable patterns in the model's behavior.
arXiv Detail & Related papers (2024-04-16T08:39:29Z) - Revisiting Self-supervised Learning of Speech Representation from a
Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and the learned representations (a minimal probe sketch appears after this list).
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z) - Evaluation of Self-taught Learning-based Representations for Facial
Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on the JAFFE and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z) - TorchEsegeta: Framework for Interpretability and Explainability of
Image-based Deep Learning Models [0.0]
Clinicians are often sceptical about applying automatic image processing approaches, especially deep learning based methods, in practice.
This paper presents approaches that help to interpret and explain the results of deep learning algorithms by depicting the anatomical areas which influence the decision of the algorithm most.
This work presents a unified framework, TorchEsegeta, for applying various interpretability and explainability techniques to deep learning models.
arXiv Detail & Related papers (2021-10-16T01:00:15Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach significantly outperforms two state-of-the-art systems in terms of mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Self-supervised Learning from a Multi-view Perspective [121.63655399591681]
We show that self-supervised representations can extract task-relevant information and discard task-irrelevant information.
Our theoretical framework paves the way to a larger space of self-supervised learning objective design.
arXiv Detail & Related papers (2020-06-10T00:21:35Z)