Dialect Identification in Nuanced Arabic Tweets Using Farasa
Segmentation and AraBERT
- URL: http://arxiv.org/abs/2102.09749v2
- Date: Mon, 22 Feb 2021 06:51:48 GMT
- Title: Dialect Identification in Nuanced Arabic Tweets Using Farasa
Segmentation and AraBERT
- Authors: Anshul Wadhawan
- Abstract summary: This paper presents our approach to the EACL WANLP-2021 Shared Task 1: Nuanced Arabic Dialect Identification (NADI).
The task aims at developing a system that identifies the geographical location (country/province) from which an Arabic tweet, written in Modern Standard Arabic or a dialect, originates.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents our approach to the EACL WANLP-2021 Shared Task
1: Nuanced Arabic Dialect Identification (NADI). The task aims at developing a
system that identifies the geographical location (country/province) from which
an Arabic tweet, written in Modern Standard Arabic or a dialect, originates. We
solve the task in two parts. The first part involves pre-processing the
provided dataset by cleaning, adding to, and segmenting various parts of the
text. This is followed by experiments with different versions of two
Transformer-based models, AraBERT and AraELECTRA. Our final approach achieved
macro F1-scores of 0.216, 0.235, 0.054, and 0.043 in the four subtasks, and we
were ranked second in the MSA identification subtasks and fourth in the DA
identification subtasks.
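
As a rough, non-authoritative sketch of the two-stage pipeline the abstract describes (the exact cleaning rules, checkpoint ID, and label set are assumptions, not the authors' released code), Farasa segmentation followed by classification with an AraBERT checkpoint might look like this in Python:

```python
# Hedged sketch of the two-stage pipeline: Farasa segmentation, then
# classification with an AraBERT checkpoint. The cleaning rules, checkpoint
# ID, and 21-way label count are illustrative assumptions.
import torch
from farasa.segmenter import FarasaSegmenter
from transformers import AutoModelForSequenceClassification, AutoTokenizer

segmenter = FarasaSegmenter(interactive=True)

def preprocess(tweet: str) -> str:
    # Stage 1: drop URLs and user mentions (assumed cleaning rules),
    # then apply Farasa morphological segmentation.
    cleaned = " ".join(
        tok for tok in tweet.split() if not tok.startswith(("http", "@"))
    )
    return segmenter.segment(cleaned)

# Stage 2: AraBERT v2 expects Farasa-pre-segmented input; load it here
# with a classification head for country-level identification.
model_name = "aubmindlab/bert-base-arabertv2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=21
)

inputs = tokenizer(preprocess("تغريدة عربية للتجربة"), return_tensors="pt")
with torch.no_grad():
    country_id = model(**inputs).logits.argmax(dim=-1).item()
print(country_id)
```

Only the forward pass is shown; fine-tuning on the NADI training split would proceed from this point with a standard classification objective.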
Related papers
- NADI 2024: The Fifth Nuanced Arabic Dialect Identification Shared Task [28.40134178913119]
We describe the findings of the fifth Nuanced Arabic Dialect Identification Shared Task (NADI 2024).
NADI 2024 targeted both dialect identification, cast as a multi-label task, and identification of the Arabic level of dialectness.
The winning teams achieved an F1 score of 50.57 on Subtask 1, an RMSE of 0.1403 on Subtask 2, and a BLEU score of 20.44 on Subtask 3.
arXiv Detail & Related papers (2024-07-06T01:18:58Z) - SemEval-2024 Task 8: Multidomain, Multimodel and Multilingual Machine-Generated Text Detection [68.858931667807]
Subtask A is a binary classification task determining whether a text is written by a human or generated by a machine.
Subtask B is to detect the exact source of a text, discerning whether it is written by a human or generated by a specific LLM.
Subtask C aims to identify the changing point within a text, at which the authorship transitions from human to machine.
arXiv Detail & Related papers (2024-04-22T13:56:07Z) - ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic [51.922112625469836]
We present ArabicMMLU, the first multi-task language understanding benchmark for the Arabic language.
Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA) and is carefully constructed by collaborating with native speakers in the region.
Our evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models.
arXiv Detail & Related papers (2024-02-20T09:07:41Z) - Mavericks at NADI 2023 Shared Task: Unravelling Regional Nuances through
Dialect Identification using Transformer-based Approach [0.0]
We highlight our methodology for subtask 1 which deals with country-level dialect identification.
The task uses the Twitter dataset (TWT-2023) that encompasses 18 dialects for the multi-class classification problem.
We achieved an F1-score of 76.65 (11th rank on the leaderboard) on the test dataset.
arXiv Detail & Related papers (2023-11-30T17:37:56Z) - SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding
Tasks [88.4408774253634]
Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community.
There are not nearly as many SLU task benchmarks, and many of the existing ones use data that is not freely available to all researchers.
Recent work has begun to introduce such benchmarks for several tasks.
arXiv Detail & Related papers (2022-12-20T18:39:59Z) - Transformer-based Model for Word Level Language Identification in
Code-mixed Kannada-English Texts [55.41644538483948]
We propose the use of a Transformer-based model for word-level language identification in code-mixed Kannada-English texts.
The proposed model on the CoLI-Kenglish dataset achieves a weighted F1-score of 0.84 and a macro F1-score of 0.61.
arXiv Detail & Related papers (2022-11-26T02:39:19Z) - Bridging Cross-Lingual Gaps During Leveraging the Multilingual
Sequence-to-Sequence Pretraining for Text Generation [80.16548523140025]
We extend the vanilla pretrain-finetune pipeline with an extra code-switching restoration task to bridge the gap between the pretrain and finetune stages.
Our approach could narrow the cross-lingual sentence representation distance and improve low-frequency word translation with trivial computational cost.
arXiv Detail & Related papers (2022-04-16T16:08:38Z) - AraBERT and Farasa Segmentation Based Approach For Sarcasm and Sentiment
Detection in Arabic Tweets [0.0]
One of the subtasks aims at developing a system that identifies whether a given Arabic tweet is sarcastic in nature or not.
The other aims to identify the sentiment of the Arabic tweet.
Our final approach was ranked seventh and fourth in the Sarcasm and Sentiment Detection subtasks respectively.
arXiv Detail & Related papers (2021-03-02T12:33:50Z) - Arabic Dialect Identification Using BERT-Based Domain Adaptation [0.0]
Arabic is one of the most important and growing languages in the world.
With the rise of social media platforms such as Twitter, spoken Arabic dialects have come into wider use.
arXiv Detail & Related papers (2020-11-13T15:52:51Z) - Explicit Alignment Objectives for Multilingual Bidirectional Encoders [111.65322283420805]
We present a new method for learning multilingual encoders, AMBER (Aligned Multilingual Bi-directional EncodeR).
AMBER is trained on additional parallel data using two explicit alignment objectives that align the multilingual representations at different granularities.
Experimental results show that AMBER obtains gains of up to 1.1 average F1 score on sequence tagging and up to 27.3 average accuracy on retrieval over the XLMR-large model.
arXiv Detail & Related papers (2020-10-15T18:34:13Z) - Multi-Dialect Arabic BERT for Country-Level Dialect Identification [1.2928709656541642]
We present the experiments conducted and the models developed by our competing team, Mawdoo3 AI.
The dialect identification subtask provides 21,000 country-level labeled tweets covering all 21 Arab countries.
We publicly release the pre-trained language model component of our winning solution under the name of Multi-dialect-Arabic-BERT model.
arXiv Detail & Related papers (2020-07-10T21:11:46Z)
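
The Mawdoo3 AI entry above notes that the pre-trained component of the winning solution was publicly released as Multi-dialect-Arabic-BERT. A minimal loading sketch, assuming the weights are available on the Hugging Face Hub under the ID shown (an inference from the published model name, not confirmed by the summary), would be:

```python
# Hedged sketch: loading the released Multi-dialect-Arabic-BERT weights.
# The Hub ID below is an assumption inferred from the published model name;
# verify it before use.
from transformers import AutoModel, AutoTokenizer

model_id = "bashar-talafha/multi-dialect-bert-base-arabic"  # assumed Hub ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Encode a tweet and take the [CLS] vector as a dialect-aware feature.
inputs = tokenizer("تغريدة باللهجة المصرية", return_tensors="pt")
cls_embedding = model(**inputs).last_hidden_state[:, 0, :]
```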
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.