Political Sentiment Analysis of Persian Tweets Using CNN-LSTM Model
- URL: http://arxiv.org/abs/2307.07740v2
- Date: Tue, 29 Aug 2023 16:55:11 GMT
- Title: Political Sentiment Analysis of Persian Tweets Using CNN-LSTM Model
- Authors: Mohammad Dehghani, Zahra Yazdanparast
- Abstract summary: We present several machine learning models and a deep learning model to analyze the sentiment of Persian political tweets.
Deep learning with ParsBERT embeddings performs better than machine learning.
- Score: 0.356008609689971
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sentiment analysis is the process of identifying and categorizing people's
emotions or opinions regarding various topics. The analysis of Twitter
sentiment has become an increasingly popular topic in recent years. In this
paper, we present several machine learning models and a deep learning model to
analyze the sentiment of Persian political tweets. Our analysis was conducted
using Bag of Words and ParsBERT for word representation. We applied Gaussian
Naive Bayes, Gradient Boosting, Logistic Regression, Decision Trees, Random
Forests, as well as a combination of CNN and LSTM to classify the polarities of
tweets. The results of this study indicate that deep learning with ParsBERT
embedding performs better than machine learning. The CNN-LSTM model had the
highest classification accuracy with 89 percent on the first dataset and 71
percent on the second dataset. Due to the complexity of Persian, it was a
difficult task to achieve this level of efficiency. The main objective of our
research was to reduce the training time while maintaining the model's
performance. As a result, several adjustments were made to the model
architecture and parameters. In addition to achieving the objective, the
performance was slightly improved as well.
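The paper does not reproduce its architecture details here, but a CNN-LSTM tweet classifier of the kind it describes can be sketched as below. All layer sizes, the vocabulary size, the sequence length, and the class count are illustrative assumptions, not the authors' published settings; a convolution extracts local n-gram features, an LSTM models their sequence, and a softmax head predicts the tweet's polarity.

```python
import numpy as np
from tensorflow.keras import layers, models

# Illustrative hyperparameters -- not the values used in the paper.
VOCAB_SIZE = 20000   # tweet vocabulary size
EMBED_DIM = 128      # embedding dimension (ParsBERT vectors would be 768)
MAX_LEN = 64         # tweets padded/truncated to this many tokens
NUM_CLASSES = 2      # tweet polarities (e.g. positive / negative)

def build_cnn_lstm():
    """CNN layer extracts local n-gram features; an LSTM then models
    their sequence before a softmax classifies the tweet's polarity."""
    model = models.Sequential([
        layers.Input(shape=(MAX_LEN,)),
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        layers.Conv1D(64, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.LSTM(64),
        layers.Dropout(0.5),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()

# A dummy batch of 4 tokenized tweets, just to confirm the shapes line up.
dummy = np.random.randint(0, VOCAB_SIZE, size=(4, MAX_LEN))
probs = model.predict(dummy, verbose=0)
print(probs.shape)  # (4, 2): one probability distribution per tweet
```

In a real pipeline, the integer token IDs fed to the `Embedding` layer would come from a Persian tokenizer, or the embedding layer would be replaced by precomputed ParsBERT representations, which is the variant the abstract reports as performing best.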
Related papers
- Convolutional Neural Networks for Sentiment Analysis on Weibo Data: A Natural Language Processing Approach [0.228438857884398]
This study addresses the complex task of sentiment analysis on a dataset of 119,988 original tweets from Weibo using a Convolutional Neural Network (CNN).
A CNN-based model was utilized, leveraging word embeddings for feature extraction, and trained to perform sentiment classification.
The model achieved a macro-average F1-score of approximately 0.73 on the test set, showing balanced performance across positive, neutral, and negative sentiments.
arXiv Detail & Related papers (2023-07-13T03:02:56Z)
- Unsupervised Sentiment Analysis of Plastic Surgery Social Media Posts [91.3755431537592]
The massive collection of user posts across social media platforms is primarily untapped for artificial intelligence (AI) use cases.
Natural language processing (NLP) is a subfield of AI that leverages bodies of documents, known as corpora, to train computers in human-like language understanding.
This study demonstrates that the applied results of unsupervised analysis allow a computer to predict either negative, positive, or neutral user sentiment towards plastic surgery.
arXiv Detail & Related papers (2023-07-05T20:16:20Z)
- Constructing Colloquial Dataset for Persian Sentiment Analysis of Social Microblogs [0.0]
This paper first constructs a user opinion dataset called ITRC-Opinion, built in a collaborative environment using an insourcing approach.
Our dataset contains 60,000 informal and colloquial Persian texts from social microblogs such as Twitter and Instagram.
Second, this study proposes a new architecture based on the convolutional neural network (CNN) model for more effective sentiment analysis of colloquial text in social microblog posts.
arXiv Detail & Related papers (2023-06-22T05:51:22Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Empirical evaluation of shallow and deep learning classifiers for Arabic sentiment analysis [1.1172382217477126]
This work presents a detailed comparison of the performance of deep learning models for sentiment analysis of Arabic reviews.
The datasets used in this study are multi-dialect Arabic hotel and book review datasets, which are some of the largest publicly available datasets for Arabic reviews.
Results showed deep learning outperforming shallow learning for binary and multi-label classification, in contrast with the results of similar work reported in the literature.
arXiv Detail & Related papers (2021-12-01T14:45:43Z)
- Multi-Class and Automated Tweet Categorization [0.0]
The study aims to detect a tweet's category from its text.
Tweets are classified into 12 specified categories using text mining, Natural Language Processing (NLP), and Machine Learning (ML) techniques.
The best ensemble model, Gradient Boosting, achieved an AUC score of 85%.
arXiv Detail & Related papers (2021-11-13T14:28:47Z)
- Improving Classifier Training Efficiency for Automatic Cyberbullying Detection with Feature Density [58.64907136562178]
We study the effectiveness of Feature Density (FD) using different linguistically-backed feature preprocessing methods.
We hypothesise that estimating dataset complexity allows for the reduction of the number of required experiments.
The difference in linguistic complexity of datasets allows us to additionally discuss the efficacy of linguistically-backed word preprocessing.
arXiv Detail & Related papers (2021-11-02T15:48:28Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity and overstability causing samples with high accuracies.
arXiv Detail & Related papers (2021-09-24T03:49:38Z)
- Sentiment analysis in tweets: an assessment study from classical to modern text representation models [59.107260266206445]
Short texts published on Twitter have earned significant attention as a rich source of information.
Their inherent characteristics, such as an informal and noisy linguistic style, remain challenging for many natural language processing (NLP) tasks.
This study presents an assessment of existing language models in distinguishing the sentiment expressed in tweets, using a rich collection of 22 datasets.
arXiv Detail & Related papers (2021-05-29T21:05:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.