Automatic Error Type Annotation for Arabic
- URL: http://arxiv.org/abs/2109.08068v1
- Date: Thu, 16 Sep 2021 15:50:11 GMT
- Title: Automatic Error Type Annotation for Arabic
- Authors: Riadh Belkebir and Nizar Habash
- Abstract summary: We present ARETA, an automatic error type annotation system for Modern Standard Arabic.
We base our error taxonomy on the Arabic Learner Corpus (ALC) Error Tagset with some modifications.
ARETA achieves a performance of 85.8% (micro average F1 score) on a manually annotated blind test portion of ALC.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present ARETA, an automatic error type annotation system for Modern
Standard Arabic. We design ARETA to address Arabic's morphological richness and
orthographic ambiguity. We base our error taxonomy on the Arabic Learner Corpus
(ALC) Error Tagset with some modifications. ARETA achieves a performance of
85.8% (micro average F1 score) on a manually annotated blind test portion of
ALC. We also demonstrate ARETA's usability by applying it to a number of
submissions from the QALB 2014 shared task for Arabic grammatical error
correction. The resulting analyses give helpful insights into the strengths and
weaknesses of different submissions, which is more useful than the opaque M2
scoring metrics used in the shared task. ARETA employs a large Arabic
morphological analyzer, but is completely unsupervised otherwise. We make ARETA
publicly available.
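To make the task concrete, here is a minimal, hypothetical sketch of rule-based error type annotation in the spirit of ARETA. This is not the actual ARETA implementation (which uses a full morphological analyzer and the ALC tagset); the function names, the coarse tag set (`UC`, `PUNCT`, `ORTH`, `UNK`), and the orthographic normalization table are all illustrative assumptions.

```python
# Hypothetical sketch: classify one aligned (erroneous, corrected) token pair
# into a coarse error category. Not the actual ARETA system.
import string

# Latin punctuation plus Arabic comma, semicolon, and question mark.
PUNCT = set(string.punctuation) | {"\u060C", "\u061B", "\u061F"}

def normalize_orth(token: str) -> str:
    """Collapse common Arabic orthographic variants (Alef forms, Alef Maqsura, Ta Marbuta)."""
    table = str.maketrans({
        "\u0623": "\u0627",  # Alef with hamza above -> bare Alef
        "\u0625": "\u0627",  # Alef with hamza below -> bare Alef
        "\u0622": "\u0627",  # Alef madda -> bare Alef
        "\u0629": "\u0647",  # Ta Marbuta -> Ha
        "\u0649": "\u064A",  # Alef Maqsura -> Ya
    })
    return token.translate(table)

def annotate(src: str, tgt: str) -> str:
    """Return a coarse error tag for one aligned source/corrected token pair."""
    if src == tgt:
        return "UC"      # unchanged
    if all(c in PUNCT for c in src + tgt):
        return "PUNCT"   # punctuation edit
    if normalize_orth(src) == normalize_orth(tgt):
        return "ORTH"    # orthographic (spelling-variant) error
    return "UNK"         # would need morphological analysis to classify
```

A real system layers morphological analysis on top of such surface rules, which is exactly the gap that motivates ARETA's use of a large Arabic morphological analyzer.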
Related papers
- Tibyan Corpus: Balanced and Comprehensive Error Coverage Corpus Using ChatGPT for Arabic Grammatical Error Correction [0.32885740436059047]
This study aims to develop an Arabic corpus called "Tibyan" for grammatical error correction using ChatGPT.
ChatGPT is used as a data augmenter tool based on a pair of Arabic sentences containing grammatical errors matched with a sentence free of errors extracted from Arabic books.
Our corpus contained 49 of errors, spanning seven types, including orthography, syntax, semantics, punctuation, morphology, and split.
arXiv Detail & Related papers (2024-11-07T10:17:40Z)
- Strategies for Arabic Readability Modeling [9.976720880041688]
Automatic readability assessment is relevant to building NLP applications for education, content analysis, and accessibility.
We present a set of experimental results on Arabic readability assessment using a diverse range of approaches.
arXiv Detail & Related papers (2024-07-03T11:54:11Z)
- Arabic Text Sentiment Analysis: Reinforcing Human-Performed Surveys with Wider Topic Analysis [49.1574468325115]
This in-depth study manually analyses 133 Arabic sentiment analysis (ASA) papers published in English between 2002 and 2020.
The main findings show the different approaches used for ASA: machine learning, lexicon-based, and hybrid approaches.
There is a need to develop ASA tools that can be used in industry, as well as in academia, for Arabic text sentiment analysis.
arXiv Detail & Related papers (2024-03-04T10:37:48Z)
- ArabianGPT: Native Arabic GPT-based Large Language Model [2.8623940003518156]
This paper proposes ArabianGPT, a series of transformer-based models within the ArabianLLM suite designed explicitly for Arabic.
The AraNizer tokenizer, integral to these models, addresses the unique morphological aspects of Arabic script.
For sentiment analysis, the fine-tuned ArabianGPT-0.1B model achieved a remarkable accuracy of 95%, a substantial increase from the base model's 56%.
arXiv Detail & Related papers (2024-02-23T13:32:47Z)
- ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic [51.922112625469836]
We present ArabicMMLU, the first multi-task language understanding benchmark for the Arabic language.
Our data comprises 40 tasks and 14,575 multiple-choice questions in Modern Standard Arabic (MSA) and is carefully constructed by collaborating with native speakers in the region.
Our evaluations of 35 models reveal substantial room for improvement, particularly among the best open-source models.
arXiv Detail & Related papers (2024-02-20T09:07:41Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
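To illustrate the idea (not the actual Premise algorithm), one simple baseline for surfacing such patterns is to rank tokens by how over-represented they are in misclassified inputs versus correctly classified ones, e.g. with a smoothed log-odds ratio. The function below is a toy sketch under that assumption.

```python
# Toy illustration: rank tokens by a smoothed log-odds ratio of appearing in
# erroneous vs. correct predictions. Not the Premise method itself.
from collections import Counter
import math

def error_associated_tokens(correct_docs, error_docs, top_k=5):
    """Return the tokens most over-represented in erroneous predictions."""
    # Count document frequency of each token in the two groups.
    c_counts = Counter(t for doc in correct_docs for t in set(doc.split()))
    e_counts = Counter(t for doc in error_docs for t in set(doc.split()))
    n_c, n_e = len(correct_docs), len(error_docs)
    scores = {}
    for tok in set(c_counts) | set(e_counts):
        p_e = (e_counts[tok] + 1) / (n_e + 2)  # add-one smoothing
        p_c = (c_counts[tok] + 1) / (n_c + 2)
        # Difference of log-odds: positive means error-associated.
        scores[tok] = math.log(p_e / (1 - p_e)) - math.log(p_c / (1 - p_c))
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Single-token statistics like this miss the multi-token patterns that Premise targets, but they convey the underlying intuition: find surface features that systematically co-occur with errors.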
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- ALDi: Quantifying the Arabic Level of Dialectness of Text [17.37857915257019]
We argue that Arabic speakers perceive a spectrum of dialectness, which we operationalize at the sentence level as the Arabic Level of Dialectness (ALDi).
We provide a detailed analysis of AOC-ALDi and show that a model trained on it can effectively identify levels of dialectness on a range of other corpora.
arXiv Detail & Related papers (2023-10-20T18:07:39Z)
- AceGPT, Localizing Large Language Models in Arabic [73.39989503874634]
The paper proposes a comprehensive solution that includes pre-training with Arabic texts, Supervised Fine-Tuning (SFT) utilizing native Arabic instructions, and GPT-4 responses in Arabic.
The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities.
arXiv Detail & Related papers (2023-09-21T13:20:13Z)
- Offensive Language Detection in Under-resourced Algerian Dialectal Arabic Language [0.0]
We focus on Algerian dialectal Arabic, which is an under-resourced language.
Due to the scarcity of work on this language, we have built a new corpus comprising more than 8.7k texts manually annotated as normal, abusive, or offensive.
arXiv Detail & Related papers (2022-03-18T15:42:21Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity- and overstability-causing samples with high accuracy.
arXiv Detail & Related papers (2021-09-24T03:49:38Z)
- On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.