Arabic aspect based sentiment analysis using BERT
- URL: http://arxiv.org/abs/2107.13290v1
- Date: Wed, 28 Jul 2021 11:34:00 GMT
- Title: Arabic aspect based sentiment analysis using BERT
- Authors: Mohammed M. Abdelgwad
- Abstract summary: This article explores the modeling capabilities of contextual embeddings from pre-trained language models, such as BERT.
We build a simple but effective BERT-based neural baseline to handle this task.
Experimental results show that our BERT architecture with a simple linear classification layer surpasses state-of-the-art works.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-based sentiment analysis (ABSA) is a textual analysis methodology that
determines the polarity of opinions on certain aspects related to specific
targets. The majority of research on ABSA is in English, with only a small amount
of work available in Arabic. Most previous Arabic research has relied on deep
learning models that depend primarily on context-independent word embeddings
(e.g., word2vec), where each word has a fixed representation independent of its
context. This article explores the modeling capabilities of contextual
embeddings from pre-trained language models, such as BERT, together with
sentence-pair input, on Arabic ABSA tasks. In particular, we build a
simple but effective BERT-based neural baseline to handle this task. According
to experimental results on the benchmark Arabic hotel reviews dataset, our BERT
architecture with a simple linear classification layer surpasses
state-of-the-art works.
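The paper's architecture, a BERT encoder followed by a single linear classification layer, can be sketched in plain Python. This is a minimal illustration only: the random vector below stands in for the [CLS] representation a pre-trained Arabic BERT checkpoint would produce for a (review, aspect) sentence pair, and the weights are untrained.

```python
import math
import random

random.seed(0)

HIDDEN = 768        # BERT-base hidden size
NUM_CLASSES = 3     # e.g. positive / negative / neutral

# Stand-in for the [CLS] vector a BERT encoder would produce for one
# (review, aspect) sentence pair; a real pipeline would take it from a
# pre-trained Arabic BERT checkpoint instead of random numbers.
cls_vec = [random.gauss(0, 1) for _ in range(HIDDEN)]

# The baseline head: one untrained linear layer followed by softmax.
weights = [[random.gauss(0, 0.02) for _ in range(HIDDEN)]
           for _ in range(NUM_CLASSES)]
bias = [0.0] * NUM_CLASSES

logits = [sum(w * x for w, x in zip(row, cls_vec)) + b
          for row, b in zip(weights, bias)]
m = max(logits)                          # subtract max for numerical stability
exps = [math.exp(l - m) for l in logits]
total = sum(exps)
probs = [e / total for e in exps]        # class probabilities, sum to 1

print(probs)
```

In training, the linear layer's weights and the underlying BERT parameters would be fine-tuned jointly on the labeled aspect-polarity data.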
Related papers
- ROAST: Review-level Opinion Aspect Sentiment Target Joint Detection for ABSA [50.90538760832107]
This research presents a novel task, Review-Level Opinion Aspect Sentiment Target (ROAST)
ROAST seeks to close the gap between sentence-level and text-level ABSA by identifying every ABSA constituent at the review level.
We extend the available datasets to enable ROAST, addressing the drawbacks noted in previous research.
arXiv Detail & Related papers (2024-05-30T17:29:15Z)
- Arabic Text Sentiment Analysis: Reinforcing Human-Performed Surveys with Wider Topic Analysis [49.1574468325115]
The in-depth study manually analyses 133 ASA papers published in the English language between 2002 and 2020.
The main findings show the different approaches used for ASA: machine learning, lexicon-based and hybrid approaches.
There is a need to develop ASA tools that can be used in industry, as well as in academia, for Arabic text SA.
arXiv Detail & Related papers (2024-03-04T10:37:48Z)
- Arabic Sentiment Analysis with Noisy Deep Explainable Model [48.22321420680046]
This paper proposes an explainable sentiment classification framework for the Arabic language.
The proposed framework can explain specific predictions by training a local surrogate explainable model.
We carried out experiments on public benchmark Arabic SA datasets.
arXiv Detail & Related papers (2023-09-24T19:26:53Z)
- Pre-trained Embeddings for Entity Resolution: An Experimental Analysis [Experiment, Analysis & Benchmark] [65.11858854040544]
We perform a thorough experimental analysis of 12 popular language models over 17 established benchmark datasets.
First, we assess their vectorization overhead for converting all input entities into dense embedding vectors.
Second, we investigate their blocking performance, performing a detailed scalability analysis and comparing them with the state-of-the-art deep learning-based blocking method.
Third, we conclude with their relative performance for both supervised and unsupervised matching.
arXiv Detail & Related papers (2023-04-24T08:53:54Z)
- Arabic aspect based sentiment analysis using bidirectional GRU based models [0.0]
Aspect-based sentiment analysis (ABSA) performs a fine-grained analysis that identifies the aspects of a given document or sentence.
We propose two models based on Gated Recurrent Units (GRU) neural networks for ABSA.
We evaluate our models using the benchmarked Arabic hotel reviews dataset.
arXiv Detail & Related papers (2021-01-23T02:54:30Z)
- Effect of Word Embedding Variable Parameters on Arabic Sentiment Analysis Performance [0.0]
Social media platforms such as Twitter and Facebook have led to a growing number of comments containing users' opinions.
This study will discuss three parameters (Window size, Dimension of vector and Negative Sample) for Arabic sentiment analysis.
Four binary classifiers (Logistic Regression, Decision Tree, Support Vector Machine and Naive Bayes) are used to detect sentiment.
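The three word2vec training parameters that study varies can be enumerated as a simple grid, with each setting then evaluated against each of the four classifiers. The candidate values below are illustrative placeholders, not the ones tested in the paper.

```python
from itertools import product

# Illustrative candidate values -- not the exact grid from the paper.
window_sizes = [3, 5, 10]        # context window around each target word
vector_dims = [100, 200, 300]    # dimensionality of each word vector
negative_samples = [5, 10, 15]   # noise words drawn per positive example

# Every combination of the three parameters to train and evaluate.
configs = [
    {"window": w, "vector_size": d, "negative": n}
    for w, d, n in product(window_sizes, vector_dims, negative_samples)
]

print(len(configs))  # 27 embedding configurations
```

Each configuration would be used to train embeddings, which are then fed to the four binary classifiers to compare sentiment-detection accuracy.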
arXiv Detail & Related papers (2021-01-08T08:31:00Z)
- Exploiting BERT to improve aspect-based sentiment analysis performance on Persian language [0.0]
This research shows the potential of using a pre-trained BERT model and of sentence-pair input on an ABSA task.
The results indicate that employing Pars-BERT pre-trained model along with natural language inference auxiliary sentence (NLI-M) could boost the ABSA task accuracy up to 91%.
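The NLI-M idea converts ABSA into sentence-pair classification by pairing each review sentence with an auxiliary pseudo-sentence generated per aspect. A minimal sketch, with illustrative English wording and aspect names (the actual templates and language differ per paper):

```python
def make_nli_m_pairs(review, aspects):
    """Pair a review with one auxiliary pseudo-sentence per aspect,
    turning aspect-polarity prediction into sentence-pair classification."""
    return [
        (review, f"what is the sentiment polarity of the {aspect} ?")
        for aspect in aspects
    ]

pairs = make_nli_m_pairs(
    "The room was clean but the staff were rude.",
    ["room", "staff"],
)
for premise, hypothesis in pairs:
    print(premise, "||", hypothesis)
```

Each pair is then encoded by BERT as two segments separated by [SEP], and the model predicts a polarity label per pair.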
arXiv Detail & Related papers (2020-12-02T16:47:20Z)
- Understanding Pre-trained BERT for Aspect-based Sentiment Analysis [71.40586258509394]
This paper analyzes the pre-trained hidden representations learned by BERT from reviews for tasks in aspect-based sentiment analysis (ABSA).
It is not clear how the general proxy task of (masked) language model trained on unlabeled corpus without annotations of aspects or opinions can provide important features for downstream tasks in ABSA.
arXiv Detail & Related papers (2020-10-31T02:21:43Z)
- A Comparative Study on Structural and Semantic Properties of Sentence Embeddings [77.34726150561087]
We propose a set of experiments using a widely-used large-scale data set for relation extraction.
We show that different embedding spaces have different degrees of strength for the structural and semantic properties.
These results provide useful information for developing embedding-based relation extraction methods.
arXiv Detail & Related papers (2020-09-23T15:45:32Z)
- AraBERT: Transformer-based Model for Arabic Language Understanding [0.0]
We pre-trained BERT specifically for the Arabic language in the pursuit of achieving the same success that BERT did for the English language.
The results showed that the newly developed AraBERT achieved state-of-the-art performance on most tested Arabic NLP tasks.
arXiv Detail & Related papers (2020-02-28T22:59:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.