Does BERT Understand Sentiment? Leveraging Comparisons Between
Contextual and Non-Contextual Embeddings to Improve Aspect-Based Sentiment
Models
- URL: http://arxiv.org/abs/2011.11673v1
- Date: Mon, 23 Nov 2020 19:12:31 GMT
- Authors: Natesh Reddy, Pranaydeep Singh, Muktabh Mayank Srivastava
- Abstract summary: We show that a model trained to compare a contextual embedding from BERT with a generic word embedding can infer sentiment.
We also show that finetuning a subset of the weights of this comparison model achieves state-of-the-art results for Polarity Detection on Aspect-Based Sentiment Classification datasets.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: When performing Polarity Detection for different words in a sentence, we need
to look at the surrounding words to understand the sentiment. Massively pretrained
language models like BERT can encode not only the words in a document but also the
context around them. This raises two questions: "Does a pretrained language model
also automatically encode sentiment information about each word?" and "Can it be
used to infer polarity towards different aspects?". In this work we address these
questions by showing that a model trained to compare a contextual embedding from
BERT with a generic word embedding can infer sentiment. We also show that
finetuning a subset of the weights of this comparison model achieves
state-of-the-art results for Polarity Detection on Aspect-Based Sentiment
Classification datasets.
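The abstract does not spell out an architecture, so the following is only a minimal sketch of the stated idea: freeze BERT, look up a generic (non-contextual) embedding of the aspect word (e.g., from GloVe), and train a small classifier on a comparison of the two. The concatenation-based fusion, the classifier shape, and all names below are illustrative assumptions, not the authors' exact model.

```python
import torch
import torch.nn as nn
from transformers import BertModel

class EmbeddingComparisonClassifier(nn.Module):
    """Compare BERT's contextual embedding of an aspect token with a
    generic (non-contextual) embedding of the same word, then classify
    polarity from the comparison. Design choices here are assumptions."""

    def __init__(self, generic_vectors, bert_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        for p in self.bert.parameters():
            p.requires_grad = False  # frozen; "finetuning a subset of weights" would selectively unfreeze
        # Generic embeddings, e.g. initialized from a GloVe matrix of shape (V, 300).
        self.generic = nn.Embedding.from_pretrained(generic_vectors, freeze=True)
        fused_dim = self.bert.config.hidden_size + generic_vectors.size(1)
        self.classifier = nn.Sequential(
            nn.Linear(fused_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3),  # negative / neutral / positive
        )

    def forward(self, input_ids, attention_mask, aspect_pos, generic_ids):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state                           # (B, T, H)
        batch = torch.arange(hidden.size(0), device=hidden.device)
        ctx = hidden[batch, aspect_pos]   # contextual embedding of the aspect token
        gen = self.generic(generic_ids)   # generic embedding of the same word
        # "Comparison" realized as concatenation; a difference or cosine
        # feature would be an equally plausible reading of the abstract.
        return self.classifier(torch.cat([ctx, gen], dim=-1))
```

Training this against aspect-level polarity labels (cross-entropy over the three classes) is the obvious usage; the abstract's follow-up result would correspond to unfreezing a chosen subset of BERT's weights after this comparison model is trained.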
Related papers
- Syntax and Semantics Meet in the "Middle": Probing the Syntax-Semantics
Interface of LMs Through Agentivity [68.8204255655161]
We present the semantic notion of agentivity as a case study for probing such syntax-semantics interactions.
The results suggest that LMs may serve as useful tools for linguistic annotation, theory testing, and discovery.
arXiv Detail & Related papers (2023-05-29T16:24:01Z)
- Relational Sentence Embedding for Flexible Semantic Matching [86.21393054423355]
We present Relational Sentence Embedding (RSE), a new paradigm for further exploring the potential of sentence embeddings.
RSE is effective and flexible in modeling sentence relations and outperforms a series of state-of-the-art embedding methods.
arXiv Detail & Related papers (2022-12-17T05:25:17Z)
- Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis [64.70116276295609]
SentiWSP is a Sentiment-aware pre-trained language model with combined Word-level and Sentence-level Pre-training tasks.
SentiWSP achieves new state-of-the-art performance on various sentence-level and aspect-level sentiment classification benchmarks.
arXiv Detail & Related papers (2022-10-18T12:25:29Z)
- Representing Affect Information in Word Embeddings [5.378735006566249]
We investigated whether and how the affect meaning of a word is encoded in word embeddings pre-trained in large neural networks.
The embeddings varied in whether they were static or contextualized, and in how much affect-specific information was prioritized during the pre-training and fine-tuning phases.
arXiv Detail & Related papers (2022-09-21T18:16:33Z)
- Lost in Context? On the Sense-wise Variance of Contextualized Word Embeddings [11.475144702935568]
We quantify how much the contextualized embeddings of each word sense vary across contexts in typical pre-trained models.
We find that word representations are position-biased: the first words in different contexts tend to receive more similar representations.
arXiv Detail & Related papers (2022-08-20T12:27:25Z)
- Using Paraphrases to Study Properties of Contextual Embeddings [46.84861591608146]
We use paraphrases as a unique source of data to analyze contextualized embeddings.
Because paraphrases naturally encode consistent word and phrase semantics, they provide a unique lens for investigating properties of embeddings.
We find that contextual embeddings effectively handle polysemous words, but give synonyms surprisingly different representations in many cases.
arXiv Detail & Related papers (2022-07-12T14:22:05Z)
- Frequency-based Distortions in Contextualized Word Embeddings [29.88883761339757]
This work explores the geometric characteristics of contextualized word embeddings with two novel tools.
Words of high and low frequency differ significantly with respect to their representational geometry.
BERT-Base has more trouble differentiating between South American and African countries than North American and European ones.
arXiv Detail & Related papers (2021-04-17T06:35:48Z)
- Deriving Contextualised Semantic Features from BERT (and Other Transformer Model) Embeddings [0.0]
This paper demonstrates that Binder features can be derived from the BERT embedding space.
It provides contextualised Binder embeddings, which can aid in understanding semantic differences between words in context.
It additionally provides insights into how semantic features are represented across the different layers of the BERT model.
arXiv Detail & Related papers (2020-12-30T22:52:29Z)
- On the Sentence Embeddings from Pre-trained Language Models [78.45172445684126]
In this paper, we argue that the semantic information in the BERT embeddings is not fully exploited.
We find that BERT always induces a non-smooth, anisotropic semantic space of sentences, which harms its performance on semantic similarity tasks.
We propose to transform the anisotropic sentence embedding distribution into a smooth, isotropic Gaussian distribution through normalizing flows learned with an unsupervised objective (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-11-02T13:14:57Z)
- A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short and simple text that carries no emotion on its own can convey strong emotions when read along with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)
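The entry "On the Sentence Embeddings from Pre-trained Language Models" above proposes learning an invertible map that sends BERT's anisotropic sentence-embedding distribution to a standard Gaussian by maximizing exact likelihood. That paper uses a Glow-style flow; below is only a minimal sketch using RealNVP-style affine coupling layers, so the layer type, depth, and hyperparameters are assumptions rather than the paper's configuration.

```python
import math

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One RealNVP-style coupling layer: transform half the features
    conditioned on the other half; invertible with a cheap log-det."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=-1)
        s = torch.tanh(s)                       # keep scales bounded for stability
        y2 = x2 * torch.exp(s) + t
        # Reverse feature order so the untouched half gets transformed next layer.
        return torch.cat([x1, y2], dim=-1).flip(-1), s.sum(dim=-1)

class Flow(nn.Module):
    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(AffineCoupling(dim) for _ in range(n_layers))

    def forward(self, x):
        log_det = torch.zeros(x.size(0), device=x.device)
        for layer in self.layers:
            x, ld = layer(x)
            log_det = log_det + ld
        return x, log_det

def nll(flow, x):
    """Negative log-likelihood of embeddings under a standard Gaussian base."""
    z, log_det = flow(x)
    log_pz = -0.5 * (z ** 2 + math.log(2 * math.pi)).sum(dim=-1)
    return -(log_pz + log_det).mean()

# Usage sketch: embs is an (N, 768) tensor of BERT sentence embeddings.
# flow = Flow(768)
# opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
# opt.zero_grad(); nll(flow, embs).backward(); opt.step()
# After training, flow(embs)[0] yields calibrated, near-isotropic embeddings.
```

The key property exploited here is that coupling layers have a triangular Jacobian, so the change-of-variables log-likelihood can be computed exactly and maximized directly, without any labels.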
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.