Sentiment Analysis Based on Deep Learning: A Comparative Study
- URL: http://arxiv.org/abs/2006.03541v1
- Date: Fri, 5 Jun 2020 16:28:10 GMT
- Title: Sentiment Analysis Based on Deep Learning: A Comparative Study
- Authors: Nhan Cach Dang, María N. Moreno-García and Fernando De la Prieta
- Abstract summary: The study of public opinion can provide us with valuable information.
The efficiency and accuracy of sentiment analysis are being hindered by the challenges encountered in natural language processing.
This paper reviews the latest studies that have employed deep learning to solve sentiment analysis problems.
- Score: 69.09570726777817
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The study of public opinion can provide us with valuable information. The
analysis of sentiment on social networks, such as Twitter or Facebook, has
become a powerful means of learning about the users' opinions and has a wide
range of applications. However, the efficiency and accuracy of sentiment
analysis are being hindered by the challenges encountered in natural language
processing (NLP). In recent years, it has been demonstrated that deep learning
models are a promising solution to the challenges of NLP. This paper reviews
the latest studies that have employed deep learning to solve sentiment analysis
problems, such as sentiment polarity. Models using term frequency-inverse
document frequency (TF-IDF) and word embedding have been applied to a series of
datasets. Finally, a comparative study has been conducted on the experimental
results obtained for the different models and input features.
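As a loose illustration of the TF-IDF input features the paper compares, a minimal stdlib-only sketch might look like the following (the toy corpus and tokenization are hypothetical, not taken from the paper's datasets):

```python
import math
from collections import Counter

def tf_idf(corpus):
    """Compute TF-IDF weights for each document in a tokenized corpus."""
    n_docs = len(corpus)
    # Document frequency: number of documents containing each term
    df = Counter(term for doc in corpus for term in set(doc))
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n_docs / df[term])
            for term, count in tf.items()
        })
    return weights

# Toy corpus of tokenized social-media posts (hypothetical data)
docs = [
    "great movie loved it".split(),
    "terrible movie hated it".split(),
    "great acting great plot".split(),
]
w = tf_idf(docs)
```

The resulting per-document weight dictionaries would then be vectorized and fed to a classifier; in practice a library such as scikit-learn's `TfidfVectorizer` does this with additional smoothing and normalization options.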
Related papers
- Dynamic Sentiment Analysis with Local Large Language Models using Majority Voting: A Study on Factors Affecting Restaurant Evaluation [0.0]
This study introduces a majority voting mechanism to a sentiment analysis model using local language models.
Through a series of three analyses of online restaurant reviews, we demonstrate that majority voting with multiple attempts produces more robust results than using a large model with a single attempt.
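The majority-voting mechanism described above can be sketched as a simple tally over repeated model outputs (illustrative only; `mock_model` is a hypothetical stand-in for whatever local LLM call the study uses):

```python
from collections import Counter
from itertools import cycle

def majority_vote(query_model, review, n_attempts=5):
    """Query the sentiment model n_attempts times; return the most common label."""
    votes = [query_model(review) for _ in range(n_attempts)]
    label, _ = Counter(votes).most_common(1)[0]
    return label

# Hypothetical stand-in for a stochastic local LLM: returns a pre-scripted
# sequence of labels to simulate run-to-run variation.
_scripted = cycle(["positive", "positive", "negative", "positive", "positive"])
def mock_model(review):
    return next(_scripted)

result = majority_vote(mock_model, "Great food, slow service.", n_attempts=5)
# Majority of the five scripted attempts is "positive"
```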
arXiv Detail & Related papers (2024-07-18T00:28:04Z)
- A Deep Convolutional Neural Network-based Model for Aspect and Polarity Classification in Hausa Movie Reviews [0.0]
This paper introduces a novel Deep Convolutional Neural Network (CNN)-based model tailored for aspect and polarity classification in Hausa movie reviews.
The proposed model combines CNNs with attention mechanisms for aspect-word prediction, leveraging contextual information and sentiment polarities.
With 91% accuracy on aspect term extraction and 92% on sentiment polarity classification, the model outperforms traditional machine learning models.
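As a loose, hypothetical sketch of the attention idea this paper combines with CNNs (not the authors' architecture), softmax attention over per-word relevance scores can be written with the stdlib alone:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(word_scores, word_values):
    """Weight each word's value by its attention weight and sum."""
    weights = softmax(word_scores)
    return sum(w * v for w, v in zip(weights, word_values))

# Hypothetical per-word relevance scores and sentiment contributions
scores = [0.1, 2.0, 0.3]   # e.g. "the", "great", "movie"
values = [0.0, 1.0, 0.2]
context = attend(scores, values)
```

The attention weights sum to 1, so the context value is a convex combination dominated by the highest-scoring word; in a real model the scores and values would come from learned CNN feature maps rather than hand-set numbers.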
arXiv Detail & Related papers (2024-05-29T23:45:42Z)
- Lessons from the Trenches on Reproducible Evaluation of Language Models [60.522749986793094]
We draw on three years of experience in evaluating large language models to provide guidance and lessons for researchers.
We present the Language Model Evaluation Harness (lm-eval), an open source library for independent, reproducible evaluation of language models.
arXiv Detail & Related papers (2024-05-23T16:50:49Z)
- Sensitivity, Performance, Robustness: Deconstructing the Effect of Sociodemographic Prompting [64.80538055623842]
Sociodemographic prompting is a technique that steers the output of prompt-based models towards the answers that humans with specific sociodemographic profiles would give.
We show that sociodemographic information affects model predictions and can be beneficial for improving zero-shot learning in subjective NLP tasks.
arXiv Detail & Related papers (2023-09-13T15:42:06Z)
- An Expert's Guide to Training Physics-informed Neural Networks [5.198985210238479]
Physics-informed neural networks (PINNs) have been popularized as a deep learning framework.
PINNs can seamlessly synthesize observational data and partial differential equation (PDE) constraints.
We present a series of best practices that can significantly improve the training efficiency and overall accuracy of PINNs.
arXiv Detail & Related papers (2023-08-16T16:19:25Z)
- A quantitative study of NLP approaches to question difficulty estimation [0.30458514384586394]
This work quantitatively analyzes several approaches proposed in previous research and compares their performance on datasets from different educational domains.
We find that Transformer-based models are the best performing across different educational domains, with DistilBERT performing almost as well as BERT.
As for the other models, hybrid ones often outperform those based on a single type of feature; models based on linguistic features perform well on reading comprehension questions, while frequency-based features (TF-IDF) and word embeddings (word2vec) perform better in domain knowledge assessment.
arXiv Detail & Related papers (2023-05-17T14:26:00Z)
- Investigating Fairness Disparities in Peer Review: A Language Model Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) from 2017 to date.
We postulate and study fairness disparities on multiple protected attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z)
- Aspect-Based Sentiment Analysis using Local Context Focus Mechanism with DeBERTa [23.00810941211685]
Aspect-Based Sentiment Analysis (ABSA) is a fine-grained task in the field of sentiment analysis.
The recent DeBERTa model (Decoding-enhanced BERT with disentangled attention) is applied to solve the Aspect-Based Sentiment Analysis problem.
arXiv Detail & Related papers (2022-07-06T03:50:31Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity- and overstability-causing samples with high accuracy.
arXiv Detail & Related papers (2021-09-24T03:49:38Z)
- Latent Opinions Transfer Network for Target-Oriented Opinion Words Extraction [63.70885228396077]
We propose a novel model to transfer opinion knowledge from resource-rich review sentiment classification datasets to the low-resource task of target-oriented opinion words extraction (TOWE).
Our model achieves better performance than other state-of-the-art methods and significantly outperforms the base model without opinion-knowledge transfer.
arXiv Detail & Related papers (2020-01-07T11:50:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.