A SentiWordNet Strategy for Curriculum Learning in Sentiment Analysis
- URL: http://arxiv.org/abs/2005.04749v2
- Date: Tue, 21 Jul 2020 19:47:35 GMT
- Title: A SentiWordNet Strategy for Curriculum Learning in Sentiment Analysis
- Authors: Vijjini Anvesh Rao, Kaveri Anuranjana and Radhika Mamidi
- Abstract summary: Curriculum Learning (CL) is the idea that learning on a training set sequenced or ordered from easy to difficult samples results in an improvement in performance over random ordering.
In this paper, we apply the ideas of curriculum learning, driven by SentiWordNet, in a sentiment analysis setting.
- Score: 7.562843347215284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Curriculum Learning (CL) is the idea that learning on a training set sequenced or ordered so that samples range from easy to difficult results in an improvement in performance over an otherwise random ordering. The idea parallels cognitive science's account of how humans learn: a difficult task can be made easier by phrasing it as a sequence of easy-to-difficult tasks. This idea has gained considerable traction in machine learning and image processing, and more recently in Natural Language Processing (NLP). In this paper, we apply the ideas of curriculum learning, driven by SentiWordNet, in a sentiment analysis setting. In this setting, given a text segment, our aim is to extract its sentiment or polarity. SentiWordNet is a lexical resource with sentiment polarity annotations. We demonstrate the effectiveness of the proposed strategy by comparing its performance with other curriculum strategies and with no curriculum. Convolutional, Recurrent, and Attention-based architectures are employed to assess this improvement. The models are evaluated on a standard sentiment dataset, the Stanford Sentiment Treebank.
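To make the strategy concrete, below is a minimal illustrative sketch in Python of a SentiWordNet-driven curriculum ordering. It is not the authors' implementation: the difficulty score (treating sentences with strong, unambiguous aggregate polarity as easy) is one plausible reading of "driven by SentiWordNet", and the helper names polarity_strength and curriculum_order are hypothetical. It uses NLTK's SentiWordNet interface.

# Illustrative sketch only: the paper's exact difficulty score is not
# reproduced here. Sentences whose words carry strong SentiWordNet
# polarity are treated as "easy" and placed first in the curriculum.
import nltk
from nltk.corpus import sentiwordnet as swn

nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)

def polarity_strength(sentence: str) -> float:
    """Average |pos - neg| SentiWordNet score over words with entries."""
    scores = []
    for token in sentence.lower().split():
        synsets = list(swn.senti_synsets(token))
        if synsets:
            s = synsets[0]  # first sense, as a crude approximation
            scores.append(abs(s.pos_score() - s.neg_score()))
    return sum(scores) / len(scores) if scores else 0.0

def curriculum_order(sentences):
    """Order samples from easy (strong polarity) to difficult (weak)."""
    return sorted(sentences, key=polarity_strength, reverse=True)

train = ["a truly wonderful film", "the plot exists", "painfully dull acting"]
for s in curriculum_order(train):
    print(f"{polarity_strength(s):.3f}  {s}")

During training, batches would then be presented in this order (or under a pacing schedule) to the convolutional, recurrent, and attention-based models the paper evaluates.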
Related papers
- Learning Beyond Pattern Matching? Assaying Mathematical Understanding in LLMs [58.09253149867228]
This paper assesses the domain knowledge of LLMs through their understanding of the different mathematical skills required to solve problems.
Motivated by the use of LLMs as a general scientific assistant, we propose NTKEval to assess changes in an LLM's probability distribution.
Our systematic analysis finds evidence of domain understanding during in-context learning.
Certain instruction-tuning, by contrast, leads to similar performance changes irrespective of the training data, suggesting a lack of domain understanding across different skills.
arXiv Detail & Related papers (2024-05-24T12:04:54Z)
- VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning [66.23296689828152]
We leverage the capabilities of Vision-and-Large-Language Models to enhance in-context emotion classification.
In the first stage, we propose prompting VLLMs to generate descriptions in natural language of the subject's apparent emotion.
In the second stage, the descriptions are used as contextual information and, along with the image input, are used to train a transformer-based architecture.
arXiv Detail & Related papers (2024-04-10T15:09:15Z)
- A Tutorial on the Pretrain-Finetune Paradigm for Natural Language Processing [2.7038841665524846]
The pretrain-finetune paradigm represents a transformative approach in text analysis and natural language processing.
This tutorial offers a comprehensive introduction to the pretrain-finetune paradigm.
arXiv Detail & Related papers (2024-03-04T21:51:11Z)
- Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition [6.47452771256903]
We take inspiration from how human babies acquire their first language, and develop a computational process for word acquisition through comparative learning.
Motivated by cognitive findings, we generated a small dataset that enables computational models to compare the similarities and differences of various attributes.
We frame the acquisition of words not only as an information filtration process, but also as representation-symbol mapping.
arXiv Detail & Related papers (2023-07-05T19:38:04Z)
- Sentiment-Aware Word and Sentence Level Pre-training for Sentiment Analysis [64.70116276295609]
SentiWSP is a Sentiment-aware pre-trained language model with combined Word-level and Sentence-level Pre-training tasks.
SentiWSP achieves new state-of-the-art performance on various sentence-level and aspect-level sentiment classification benchmarks.
arXiv Detail & Related papers (2022-10-18T12:25:29Z)
- A Weak Supervised Dataset of Fine-Grained Emotions in Portuguese [0.0]
This research describes an approach to creating a lexicon-based, weakly supervised corpus for fine-grained emotions in Portuguese.
Our results suggest lexicon-based weak supervision as an appropriate strategy for initial work in low-resource environments.
arXiv Detail & Related papers (2021-08-17T14:08:23Z)
- Subsentence Extraction from Text Using Coverage-Based Deep Learning Language Models [3.3461339691835277]
We propose a coverage-based sentiment and subsentence extraction system.
The predicted subsentence consists of auxiliary information expressing a sentiment.
Our approach outperforms the state-of-the-art approaches by a large margin in subsentence prediction.
arXiv Detail & Related papers (2021-04-20T06:24:49Z)
- Statistical Measures For Defining Curriculum Scoring Function [5.328970912536596]
We show improvements in performance with convolutional and fully-connected neural networks on real image datasets.
Motivated by our insights from implicit curriculum ordering, we introduce a simple curriculum learning strategy.
We also propose and study the performance of a dynamic curriculum learning algorithm.
arXiv Detail & Related papers (2021-02-27T07:25:49Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- On Vocabulary Reliance in Scene Text Recognition [79.21737876442253]
Methods perform well on images with words within vocabulary but generalize poorly to images with words outside vocabulary.
We call this phenomenon "vocabulary reliance".
We propose a simple yet effective mutual learning strategy to allow models of two families to learn collaboratively.
arXiv Detail & Related papers (2020-05-08T11:16:58Z)
- Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer [64.22926988297685]
Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP).
In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format.
arXiv Detail & Related papers (2019-10-23T17:37:36Z)