Emotions are Subtle: Learning Sentiment Based Text Representations Using
Contrastive Learning
- URL: http://arxiv.org/abs/2112.01054v1
- Date: Thu, 2 Dec 2021 08:29:26 GMT
- Title: Emotions are Subtle: Learning Sentiment Based Text Representations Using
Contrastive Learning
- Authors: Ipsita Mohanty, Ankit Goyal, Alex Dotterweich
- Abstract summary: We extend the use of contrastive learning embeddings to sentiment analysis tasks.
We show that fine-tuning on these embeddings provides an improvement over fine-tuning on BERT-based embeddings.
- Score: 6.6389732792316005
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contrastive learning techniques have been widely used in the field of
computer vision as a means of augmenting datasets. In this paper, we extend the
use of these contrastive learning embeddings to sentiment analysis tasks and
demonstrate that fine-tuning on these embeddings provides an improvement over
fine-tuning on BERT-based embeddings to achieve higher benchmarks on the task
of sentiment analysis when evaluated on the DynaSent dataset. We also explore
how our fine-tuned models perform on cross-domain benchmark datasets.
Additionally, we explore upsampling techniques to achieve a more balanced class
distribution to make further improvements on our benchmark tasks.
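The abstract does not spell out the training objective, but a supervised contrastive loss over labeled sentence embeddings is one common way to realize sentiment-based text representations. The sketch below is a minimal, hypothetical illustration over pooled encoder outputs (e.g., BERT [CLS] vectors); the function name, temperature, and batching are assumptions for illustration, not the authors' implementation.

```python
# Minimal supervised contrastive loss sketch: pull same-sentiment embeddings
# together and push different-sentiment embeddings apart within a batch.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """embeddings: (N, D) pooled encoder outputs; labels: (N,) sentiment ids."""
    z = F.normalize(embeddings, dim=1)            # work in cosine-similarity space
    logits = z @ z.t() / temperature              # (N, N) pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    logits = logits.masked_fill(self_mask, float("-inf"))   # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                        # anchors with at least one positive
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return -(pos_log_prob[valid] / pos_counts[valid]).mean()
```

In the spirit of the class-balancing step mentioned above, minority-class examples could be upsampled before batching (for instance with sklearn.utils.resample), although the exact upsampling scheme is not detailed in this summary.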
Related papers
- TG-LLaVA: Text Guided LLaVA via Learnable Latent Embeddings [61.9257731511557]
We propose Text Guided LLaVA (TG-LLaVA) to optimize vision-language models (VLMs).
We use learnable latent embeddings as a bridge to analyze textual instruction and add the analysis results to the vision encoder as guidance.
With the guidance of text, the vision encoder can extract text-related features, similar to how humans focus on the most relevant parts of an image when considering a question.
arXiv Detail & Related papers (2024-09-15T00:38:34Z)
- Evaluating the Effectiveness of Data Augmentation for Emotion Classification in Low-Resource Settings [1.387446067205368]
We evaluated the effectiveness of different data augmentation techniques for a multi-label emotion classification task using a low-resource dataset.
We found that Back Translation outperformed autoencoder-based approaches and that generating multiple examples per training instance led to further performance improvements.
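For context, back translation paraphrases a sentence by translating it into a pivot language and back. The snippet below is an illustrative sketch using publicly available MarianMT checkpoints; the pivot language, model names, and generation settings are assumptions, not the configuration evaluated in that paper.

```python
# Back-translation sketch: English -> pivot language -> English paraphrases.
from transformers import MarianMTModel, MarianTokenizer

def back_translate(sentences, pivot="de"):
    fwd_name = f"Helsinki-NLP/opus-mt-en-{pivot}"
    bwd_name = f"Helsinki-NLP/opus-mt-{pivot}-en"
    fwd_tok = MarianTokenizer.from_pretrained(fwd_name)
    fwd = MarianMTModel.from_pretrained(fwd_name)
    bwd_tok = MarianTokenizer.from_pretrained(bwd_name)
    bwd = MarianMTModel.from_pretrained(bwd_name)

    def translate(texts, tok, model):
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        generated = model.generate(**batch, max_length=128)
        return tok.batch_decode(generated, skip_special_tokens=True)

    return translate(translate(sentences, fwd_tok, fwd), bwd_tok, bwd)

# Example: back_translate(["The food was bland and the staff seemed annoyed."])
```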
arXiv Detail & Related papers (2024-06-07T18:13:27Z)
- Data Augmentation for Traffic Classification [54.92823760790628]
Data Augmentation (DA) is a technique widely adopted in Computer Vision (CV) and Natural Language Processing (NLP) tasks.
However, DA has struggled to gain traction in networking contexts, particularly in Traffic Classification (TC) tasks.
arXiv Detail & Related papers (2024-01-19T15:25:09Z)
- REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection [3.6678641723285446]
We propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM.
We leverage context-specific and affect enriched representations by using a transformer-based pre-trained language model in tandem with affect enriched Bi-LSTM+Attention.
arXiv Detail & Related papers (2023-01-21T19:28:25Z)
- Transfer of Representations to Video Label Propagation: Implementation Factors Matter [31.030799003595522]
We study the impact of important implementation factors in feature extraction and label propagation.
We show that augmenting video-based correspondence cues with still-image-based ones can further improve performance.
We hope that this study will help to improve evaluation practices and better inform future research directions in temporal correspondence.
arXiv Detail & Related papers (2022-03-10T18:58:22Z)
- Improving BERT Model Using Contrastive Learning for Biomedical Relation Extraction [13.354066085659198]
Contrastive learning is not widely utilized in natural language processing due to the lack of a general method of data augmentation for text data.
In this work, we explore the method of employing contrastive learning to improve the text representation from the BERT model for relation extraction.
The experimental results on three relation extraction benchmark datasets demonstrate that our method can improve the BERT model representation and achieve state-of-the-art performance.
arXiv Detail & Related papers (2021-04-28T17:50:24Z)
- Weakly Supervised Video Salient Object Detection [79.51227350937721]
We present the first weakly supervised video salient object detection model based on relabeled "fixation guided scribble annotations".
An "appearance-motion fusion module" and a bidirectional ConvLSTM-based framework are proposed to achieve effective multi-modal learning and long-term temporal context modeling.
arXiv Detail & Related papers (2021-04-06T09:48:38Z)
- Representation Matters: Assessing the Importance of Subgroup Allocations in Training Data [85.43008636875345]
We show that diverse representation in training data is key to increasing subgroup performances and achieving population level objectives.
Our analysis and experiments describe how dataset compositions influence performance and provide constructive results for using trends in existing data, alongside domain knowledge, to help guide intentional, objective-aware dataset design.
arXiv Detail & Related papers (2021-03-05T00:27:08Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- DAGA: Data Augmentation with a Generation Approach for Low-resource Tagging Tasks [88.62288327934499]
We propose a novel augmentation method with language models trained on the linearized labeled sentences.
Our method is applicable to both supervised and semi-supervised settings.
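As a rough illustration of what a "linearized labeled sentence" can look like, the hypothetical snippet below interleaves tag tokens with words so that an ordinary language model can be trained on, and later sample, tagged sequences; the exact linearization and special tokens used by DAGA may differ.

```python
# Hypothetical linearization for tagging data: put each non-O tag before its word.
def linearize(tokens, tags):
    out = []
    for tok, tag in zip(tokens, tags):
        if tag != "O":
            out.append(tag)
        out.append(tok)
    return " ".join(out)

# linearize(["John", "lives", "in", "Paris"], ["B-PER", "O", "O", "B-LOC"])
# -> "B-PER John lives in B-LOC Paris"
```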
arXiv Detail & Related papers (2020-11-03T07:49:15Z)