Supervised Contrastive Learning for Affect Modelling
- URL: http://arxiv.org/abs/2208.12238v1
- Date: Thu, 25 Aug 2022 17:40:19 GMT
- Title: Supervised Contrastive Learning for Affect Modelling
- Authors: Kosmas Pinitas, Konstantinos Makantasis, Antonios Liapis, Georgios N.
Yannakakis
- Abstract summary: We introduce three different supervised contrastive learning approaches for training representations that consider affect information.
Results demonstrate the representation capacity of contrastive learning and its efficiency in boosting the accuracy of affect models.
- Score: 2.570570340104555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Affect modeling is viewed, traditionally, as the process of mapping
measurable affect manifestations from multiple modalities of user input to
affect labels. That mapping is usually inferred through end-to-end
(manifestation-to-affect) machine learning processes. What if, instead, one
trains general, subject-invariant representations that consider affect
information and then uses such representations to model affect? In this paper
we assume that affect labels form an integral part, and not just the training
signal, of an affect representation and we explore how the recent paradigm of
contrastive learning can be employed to discover general high-level
affect-infused representations for the purpose of modeling affect. We introduce
three different supervised contrastive learning approaches for training
representations that consider affect information. In this initial study we test
the proposed methods for arousal prediction in the RECOLA dataset based on user
information from multiple modalities. Results demonstrate the representation
capacity of contrastive learning and its efficiency in boosting the accuracy of
affect models. Beyond their evidenced higher performance compared to end-to-end
arousal classification, the resulting representations are general-purpose and
subject-agnostic, as training is guided through general affect information
available in any multimodal corpus.
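The supervised contrastive approaches described above belong to the family of SupCon-style losses (Khosla et al., 2020), where representations of samples sharing a label are pulled together while all others are pushed apart. The sketch below is a minimal NumPy rendering of that generic loss, not the paper's exact formulation; the function name, temperature value, and the use of binarised arousal labels are illustrative assumptions.

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Generic supervised contrastive (SupCon-style) loss.

    embeddings: (N, D) array, one row per sample; L2-normalised internally.
    labels:     (N,) integer labels (e.g. binarised arousal: low=0, high=1).
    """
    # Normalise so similarities are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-contrast from the softmax

    # Numerically stable row-wise log-softmax over all other samples.
    row_max = sim.max(axis=1, keepdims=True)
    log_prob = sim - (row_max + np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True)))

    # Positives: samples with the same label, excluding the anchor itself.
    same = labels[:, None] == labels[None, :]
    np.fill_diagonal(same, False)
    pos_counts = same.sum(axis=1)

    # Average negative log-probability of positives per anchor;
    # anchors without any positive in the batch are skipped.
    valid = pos_counts > 0
    per_anchor = -np.where(same, log_prob, 0.0).sum(axis=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()
```

In practice the loss is low when same-label samples sit close in the embedding space and high otherwise, which is what drives the affect-infused representations toward subject-invariance.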
Related papers
- Separating common from salient patterns with Contrastive Representation
Learning [2.250968907999846]
Contrastive Analysis aims at separating common factors of variation between two datasets.
Current models based on Variational Auto-Encoders have shown poor performance in learning semantically-expressive representations.
We propose to leverage the ability of Contrastive Learning to learn semantically expressive representations well adapted for Contrastive Analysis.
arXiv Detail & Related papers (2024-02-19T08:17:13Z)
- What Makes Pre-Trained Visual Representations Successful for Robust
Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- The Influences of Color and Shape Features in Visual Contrastive
Learning [0.0]
This paper investigates the influence of individual image features (e.g., color and shape) on model performance, which remains ambiguous.
Experimental results show that compared with supervised representations, contrastive representations tend to cluster with objects of similar color.
arXiv Detail & Related papers (2023-01-29T15:10:14Z)
- Task Formulation Matters When Learning Continually: A Case Study in
Visual Question Answering [58.82325933356066]
Continual learning aims to train a model incrementally on a sequence of tasks without forgetting previous knowledge.
We present a detailed study of how different settings affect performance for Visual Question Answering.
arXiv Detail & Related papers (2022-09-30T19:12:58Z)
- Generalizable Information Theoretic Causal Representation [37.54158138447033]
We propose to learn causal representation from observational data by regularizing the learning procedure with mutual information measures according to our hypothetical causal graph.
The optimization involves a counterfactual loss, based on which we deduce a theoretical guarantee that the causality-inspired learning is with reduced sample complexity and better generalization ability.
arXiv Detail & Related papers (2022-02-17T00:38:35Z)
- Data-Centric Machine Learning in the Legal Domain [0.2624902795082451]
This paper explores how changes in a data set influence the measured performance of a model.
Using three publicly available data sets from the legal domain, we investigate how changes to their size, the train/test splits, and the human labelling accuracy impact the performance.
The observed effects are surprisingly pronounced, especially when the per-class performance is considered.
arXiv Detail & Related papers (2022-01-17T23:05:14Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Explaining Black Box Predictions and Unveiling Data Artifacts through
Influence Functions [55.660255727031725]
Influence functions explain the decisions of a model by identifying influential training examples.
We conduct a comparison between influence functions and common word-saliency methods on representative tasks.
We develop a new measure based on influence functions that can reveal artifacts in training data.
arXiv Detail & Related papers (2020-05-14T00:45:23Z)
- Learning What Makes a Difference from Counterfactual Examples and
Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
- Fairness by Learning Orthogonal Disentangled Representations [50.82638766862974]
We propose a novel disentanglement approach to the invariant representation problem.
We enforce the meaningful representation to be agnostic to sensitive information via an entropy constraint.
The proposed approach is evaluated on five publicly available datasets.
arXiv Detail & Related papers (2020-03-12T11:09:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.