Non-Compositionality in Sentiment: New Data and Analyses
- URL: http://arxiv.org/abs/2310.20656v1
- Date: Tue, 31 Oct 2023 17:25:07 GMT
- Title: Non-Compositionality in Sentiment: New Data and Analyses
- Authors: Verna Dankers and Christopher G. Lucas
- Abstract summary: Many NLP studies on sentiment analysis focus on the fact that sentiment computations are largely compositional.
We, instead, set out to obtain non-compositionality ratings for phrases with respect to their sentiment.
- Score: 11.43037731719907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: When natural language phrases are combined, their meaning is often more than
the sum of their parts. In the context of NLP tasks such as sentiment analysis,
where the meaning of a phrase is its sentiment, that still applies. Many NLP
studies on sentiment analysis, however, focus on the fact that sentiment
computations are largely compositional. We, instead, set out to obtain
non-compositionality ratings for phrases with respect to their sentiment. Our
contributions are as follows: a) a methodology for obtaining those
non-compositionality ratings, b) a resource of ratings for 259 phrases --
NonCompSST -- along with an analysis of that resource, and c) an evaluation of
computational models for sentiment analysis using this new resource.
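To make the notion concrete, the sketch below (Python, using the `transformers` sentiment pipeline) computes a rough non-compositionality proxy by comparing a phrase's sentiment with a naive composition of its parts' sentiments. This is an illustration only: the off-the-shelf classifier, the mean-based composition, and the `polarity`/`noncompositionality_proxy` helpers are assumptions, not the paper's human-rating methodology.

```python
# Illustrative sketch only: the paper collects human non-compositionality
# ratings; this proxy instead compares model sentiment for a whole phrase
# with a naive composition (the mean) of its parts' sentiments.
from transformers import pipeline  # assumes the `transformers` package is installed

clf = pipeline("sentiment-analysis")  # any off-the-shelf sentiment classifier will do

def polarity(text: str) -> float:
    """Signed sentiment score in [-1, 1] from the classifier's label and confidence."""
    out = clf(text)[0]
    sign = 1.0 if out["label"].upper().startswith("POS") else -1.0
    return sign * out["score"]

def noncompositionality_proxy(phrase: str, parts: list[str]) -> float:
    """Gap between whole-phrase sentiment and the mean sentiment of its parts;
    a large gap suggests the phrase's sentiment is not built compositionally."""
    composed = sum(polarity(p) for p in parts) / len(parts)
    return abs(polarity(phrase) - composed)

# An idiomatic phrase should tend to score higher than a literal one.
print(noncompositionality_proxy("break a leg", ["break", "a leg"]))
print(noncompositionality_proxy("a lovely meal", ["a lovely", "meal"]))
```

An idiomatic phrase such as "break a leg" should receive a larger gap than a literal one, which mirrors the intuition behind the NonCompSST ratings.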
Related papers
- You Shall Know a Tool by the Traces it Leaves: The Predictability of Sentiment Analysis Tools [74.98850427240464]
We show that sentiment analysis tools disagree on the same dataset.
We show that the tool used for sentiment annotation can even be predicted from its output.
arXiv Detail & Related papers (2024-10-18T17:27:38Z)
- Exploring the Correlation between Human and Machine Evaluation of Simultaneous Speech Translation [0.9576327614980397]
This study aims to assess the reliability of automatic metrics in evaluating simultaneous interpretations by analyzing their correlation with human evaluations.
As a benchmark we use human assessments performed by language experts, and evaluate how well sentence embeddings and Large Language Models correlate with them.
The results suggest GPT models, particularly GPT-3.5 with direct prompting, demonstrate the strongest correlation with human judgment in terms of semantic similarity between source and target texts.
arXiv Detail & Related papers (2024-06-14T14:47:19Z)
- SOUL: Towards Sentiment and Opinion Understanding of Language [96.74878032417054]
We propose a new task called Sentiment and Opinion Understanding of Language (SOUL).
SOUL aims to evaluate sentiment understanding through two subtasks: Review Comprehension (RC) and Justification Generation (JG).
arXiv Detail & Related papers (2023-10-27T06:48:48Z)
- A Semantic Approach to Negation Detection and Word Disambiguation with Natural Language Processing [1.0499611180329804]
This study demonstrates methods for detecting negation in a sentence by evaluating the lexical structure of the text.
The proposed method examines the unique features of related expressions within a text to resolve the contextual usage of the sentence.
arXiv Detail & Related papers (2023-02-05T03:58:45Z)
- Sentiment analysis and opinion mining on E-commerce site [0.0]
The goal of this study is to address the challenge of sentiment polarity classification in sentiment analysis.
A broad technique for categorizing sentiment polarity is presented, along with a comprehensive explanation of the process.
arXiv Detail & Related papers (2022-11-28T16:43:33Z)
- Are Representations Built from the Ground Up? An Empirical Examination of Local Composition in Language Models [91.3755431537592]
Representing compositional and non-compositional phrases is critical for language understanding.
We first formulate a problem of predicting the LM-internal representations of longer phrases given those of their constituents.
While we would expect the predictive accuracy to correlate with human judgments of semantic compositionality, we find this is largely not the case (a minimal sketch of such a composition probe appears after this list).
arXiv Detail & Related papers (2022-10-07T14:21:30Z)
- A Weak Supervised Dataset of Fine-Grained Emotions in Portuguese [0.0]
This research describes an approach for creating a lexical-based, weakly supervised corpus for fine-grained emotion in Portuguese.
Our results suggest that lexical-based weak supervision is an appropriate strategy for initial work in low-resource environments.
arXiv Detail & Related papers (2021-08-17T14:08:23Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
- Pareto Probing: Trading Off Accuracy for Complexity [87.09294772742737]
We argue for a probe metric that reflects the fundamental trade-off between probe complexity and performance.
Our experiments with dependency parsing reveal a wide gap in syntactic knowledge between contextual and non-contextual representations.
arXiv Detail & Related papers (2020-10-05T17:27:31Z)
- A Survey of Unsupervised Dependency Parsing [62.16714720135358]
Unsupervised dependency parsing aims to learn a dependency parser from sentences that have no annotation of their correct parse trees.
Despite its difficulty, unsupervised parsing is an interesting research direction because of its capability of utilizing almost unlimited unannotated text data.
arXiv Detail & Related papers (2020-10-04T10:51:22Z)
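As a concrete companion to the "Are Representations Built from the Ground Up?" entry above, the sketch below fits a linear composition probe that predicts a phrase embedding from its two constituents' embeddings and scores the prediction by cosine similarity. The random toy "embeddings", the dimensions, and the ridge penalty are assumptions for illustration; the paper's actual probing setup may differ.

```python
# A minimal sketch (not the paper's exact setup): fit a linear probe that
# predicts a phrase embedding from the embeddings of its two constituents,
# then check how well the prediction matches the observed phrase embedding.
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 500                      # embedding size, number of phrases (toy values)
left = rng.normal(size=(n, d))      # stand-ins for constituent embeddings
right = rng.normal(size=(n, d))
phrase = left + right + 0.1 * rng.normal(size=(n, d))  # toy "phrase" embeddings

X = np.hstack([left, right])        # concatenate the two constituents
lam = 1e-2                          # ridge penalty (assumed)
# Ridge-regularised least squares: W maps [left; right] -> phrase
W = np.linalg.solve(X.T @ X + lam * np.eye(2 * d), X.T @ phrase)
pred = X @ W

# Per-phrase cosine similarity between predicted and observed embeddings;
# low similarity would suggest the phrase is represented non-compositionally.
cos = np.sum(pred * phrase, axis=1) / (
    np.linalg.norm(pred, axis=1) * np.linalg.norm(phrase, axis=1))
print(f"mean cosine similarity: {cos.mean():.3f}")
```

Phrases whose observed embeddings are poorly predicted by such a probe would be the candidates for non-compositional representation, which is the kind of comparison with human compositionality judgments that the entry describes.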
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.