Sentiment Analysis with Deep Learning Models: A Comparative Study on a
Decade of Sinhala Language Facebook Data
- URL: http://arxiv.org/abs/2201.03941v1
- Date: Tue, 11 Jan 2022 13:31:15 GMT
- Authors: Gihan Weeraprameshwara, Vihanga Jayawickrama, Nisansa de Silva,
Yudhanjaya Wijeratne
- Abstract summary: Bidirectional LSTM model achieves an F1 score of 84.58% for Sinhala sentiment analysis.
We conclude that it is safe to claim that Facebook reactions are suitable to predict the sentiment of a text.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The relationship between Facebook posts and the corresponding reaction
feature is an interesting subject to explore and understand. To achieve this
end, we test state-of-the-art Sinhala sentiment analysis models against a data
set containing a decade's worth of Sinhala posts with millions of reactions. For
the purpose of establishing benchmarks and with the goal of identifying the
best model for Sinhala sentiment analysis, we also test, on the same data set
configuration, other deep learning models catered for sentiment analysis. In
this study we report that the 3-layer Bidirectional LSTM model achieves an F1
score of 84.58% for Sinhala sentiment analysis, surpassing the current
state-of-the-art model, Capsule B, which attains an F1 score of only
82.04%. Further, since all the deep learning models show F1 scores above 75%,
we conclude that it is safe to claim that Facebook reactions are suitable for
predicting the sentiment of a text.
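The best-performing architecture in the abstract, a 3-layer bidirectional LSTM classifier, can be sketched roughly as follows. This is a minimal illustrative PyTorch sketch, not the authors' implementation: the vocabulary size, embedding and hidden dimensions, and the two-class output are assumed hyperparameters chosen for the example.

```python
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    """Sketch of a 3-layer bidirectional LSTM sentiment classifier.
    Hyperparameters are illustrative assumptions, not the paper's."""

    def __init__(self, vocab_size=10000, embed_dim=128,
                 hidden_dim=64, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Three stacked LSTM layers, each running in both directions.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=3,
                            bidirectional=True, batch_first=True)
        # Concatenated forward+backward final hidden states -> class logits.
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)      # (batch, seq, embed_dim)
        _, (h_n, _) = self.lstm(x)         # h_n: (3 layers * 2 dirs, batch, hidden_dim)
        # Last layer's forward (h_n[-2]) and backward (h_n[-1]) states.
        final = torch.cat([h_n[-2], h_n[-1]], dim=1)
        return self.fc(final)              # (batch, num_classes)

model = BiLSTMSentiment()
logits = model(torch.randint(0, 10000, (4, 20)))  # batch of 4 token sequences
print(logits.shape)  # torch.Size([4, 2])
```

In practice the logits would be trained with a cross-entropy loss against sentiment labels derived from the reaction counts; that mapping from reactions to labels is the part specific to the paper's data set.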
Related papers
- HyperspectralViTs: General Hyperspectral Models for On-board Remote Sensing [21.192836739734435]
On-board processing of hyperspectral data with machine learning models would enable an unprecedented amount of autonomy for a wide range of tasks.
This can enable early warning systems and could allow new capabilities such as automated scheduling across constellations of satellites.
We propose fast and accurate machine learning architectures which support end-to-end training with data of high spectral dimension.
arXiv Detail & Related papers (2024-10-22T17:59:55Z)
- Implicit Sentiment Analysis Based on Chain of Thought Prompting [1.4582633500696451]
This paper introduces a Sentiment Analysis of Thinking (SAoT) framework.
The framework first analyzes the implicit aspects and opinions in the text using common sense and thinking chain capabilities.
The model is evaluated on the SemEval 2014 dataset, consisting of 1120 restaurant reviews and 638 laptop reviews.
arXiv Detail & Related papers (2024-08-22T06:55:29Z)
- Text Sentiment Analysis and Classification Based on Bidirectional Gated Recurrent Units (GRUs) Model [6.096738978232722]
This paper explores the importance of text sentiment analysis and classification in the field of natural language processing.
It proposes a new approach to sentiment analysis and classification based on the bidirectional gated recurrent units (GRUs) model.
arXiv Detail & Related papers (2024-04-26T02:40:03Z)
- Learning from Models and Data for Visual Grounding [55.21937116752679]
We introduce SynGround, a framework that combines data-driven learning and knowledge transfer from various large-scale pretrained models.
We finetune a pretrained vision-and-language model on this dataset by optimizing a mask-attention objective.
The resulting model improves the grounding capabilities of an off-the-shelf vision-and-language model.
arXiv Detail & Related papers (2024-03-20T17:59:43Z)
- A Comprehensive Evaluation and Analysis Study for Chinese Spelling Check [53.152011258252315]
We show that using phonetic and graphic information reasonably is effective for Chinese Spelling Check.
Models are sensitive to the error distribution of the test set, which reflects the shortcomings of models.
The commonly used benchmark, SIGHAN, cannot reliably evaluate models' performance.
arXiv Detail & Related papers (2023-07-25T17:02:38Z)
- Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
By further elaborating the robustness metric, a model is judged to be robust if its performance is consistently accurate on the overall cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z)
- End-to-End Zero-Shot HOI Detection via Vision and Language Knowledge Distillation [86.41437210485932]
We aim at advancing zero-shot HOI detection to detect both seen and unseen HOIs simultaneously.
We propose a novel end-to-end zero-shot HOI Detection framework via vision-language knowledge distillation.
Our method outperforms the previous SOTA by 8.92% on unseen mAP and 10.18% on overall mAP.
arXiv Detail & Related papers (2022-04-01T07:27:19Z)
- A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z)
- Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based Sentiment Analysis [71.40390724765903]
Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text.
Existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects.
We generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment.
arXiv Detail & Related papers (2020-09-16T22:38:18Z)
- FiSSA at SemEval-2020 Task 9: Fine-tuned For Feelings [2.362412515574206]
In this paper, we present our approach for sentiment classification on Spanish-English code-mixed social media data.
We explore both monolingual and multilingual models with the standard fine-tuning method.
Although two-step fine-tuning improves sentiment classification performance over the base model, the large multilingual XLM-RoBERTa model achieves best weighted F1-score.
arXiv Detail & Related papers (2020-07-24T14:48:27Z)
- Gestalt: a Stacking Ensemble for SQuAD2.0 [0.0]
We propose a deep-learning system that finds, or indicates the lack of, a correct answer to a question in a context paragraph.
Our goal is to learn an ensemble of heterogeneous SQuAD2.0 models that outperforms the best individual model in the ensemble.
arXiv Detail & Related papers (2020-04-02T08:09:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.