Partial Diacritization: A Context-Contrastive Inference Approach
- URL: http://arxiv.org/abs/2401.08919v2
- Date: Mon, 22 Jan 2024 19:07:07 GMT
- Title: Partial Diacritization: A Context-Contrastive Inference Approach
- Authors: Muhammad ElNokrashy, Badr AlKhamissi
- Abstract summary: Diacritization plays a pivotal role in improving readability and disambiguating the meaning of Arabic texts.
Partial Diacritization (PD) is the selection of a subset of characters to be marked to aid comprehension where needed.
We introduce Context-Contrastive Partial Diacritization (CCPD), a novel approach to PD which integrates seamlessly with existing Arabic diacritization systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diacritization plays a pivotal role in improving readability and
disambiguating the meaning of Arabic texts. Efforts have so far focused on
marking every eligible character (Full Diacritization). Comparatively
overlooked, Partial Diacritization (PD) is the selection of a subset of
characters to be marked to aid comprehension where needed. Research has
indicated that excessive diacritic marks can hinder skilled readers--reducing
reading speed and accuracy. We conduct a behavioral experiment and show that
partially marked text is often easier to read than fully marked text, and
sometimes easier than plain text. In this light, we introduce
Context-Contrastive Partial Diacritization (CCPD)--a novel approach to PD which
integrates seamlessly with existing Arabic diacritization systems. CCPD
processes each word twice, once with context and once without, and diacritizes
only the characters with disparities between the two inferences. Further, we
introduce novel indicators for measuring partial diacritization quality (SR,
PDER, HDER, ERE), essential for establishing this as a machine learning task.
Lastly, we introduce TD2, a Transformer-variant of an established model which
offers a markedly different performance profile on our proposed indicators
compared to all other known systems.
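The two-pass inference described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: `diacritize_fn` is a hypothetical stand-in for any existing diacritization system that returns one diacritic label per character of a word.

```python
# Sketch of Context-Contrastive Partial Diacritization (CCPD) inference.
# For each word, run the diacritizer twice -- once with the surrounding
# context and once without -- and keep a diacritic only on characters
# where the two passes disagree.

def ccpd(words, diacritize_fn):
    """Mark only characters whose predictions differ with vs. without context."""
    marked = []
    for i, word in enumerate(words):
        with_ctx = diacritize_fn(word, context=words[:i] + words[i + 1:])
        no_ctx = diacritize_fn(word, context=None)
        chars = []
        for ch, d_ctx, d_no in zip(word, with_ctx, no_ctx):
            # Keep the contextual diacritic only where the passes disagree.
            chars.append(ch + d_ctx if d_ctx != d_no else ch)
        marked.append("".join(chars))
    return marked
```

The design mirrors the abstract's description: agreement between the contextual and context-free passes is taken as evidence that the mark is unnecessary for comprehension, so only disputed characters get diacritized.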
Related papers
- Spotting AI's Touch: Identifying LLM-Paraphrased Spans in Text [61.22649031769564]
We propose a novel framework, paraphrased text span detection (PTD).
PTD aims to identify paraphrased text spans within a text.
We construct a dedicated dataset, PASTED, for paraphrased text span detection.
arXiv Detail & Related papers (2024-05-21T11:22:27Z)
- CoheSentia: A Novel Benchmark of Incremental versus Holistic Assessment of Coherence in Generated Texts [15.866519123942457]
We introduce CoheSentia, a novel benchmark of human-perceived coherence of automatically generated texts.
Our benchmark contains 500 automatically-generated and human-annotated paragraphs, each annotated under both methods.
Our analysis shows that the inter-annotator agreement in the incremental mode is higher than in the holistic alternative.
arXiv Detail & Related papers (2023-10-25T03:21:20Z)
- Generating Summaries with Controllable Readability Levels [67.34087272813821]
Several factors affect the readability level, such as the complexity of the text, its subject matter, and the reader's background knowledge.
Current text generation approaches lack refined control, resulting in texts that are not customized to readers' proficiency levels.
We develop three text generation techniques for controlling readability: instruction-based readability control, reinforcement learning to minimize the gap between requested and observed readability, and a decoding approach that uses look-ahead to estimate the readability of upcoming decoding steps.
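The look-ahead idea in the last of these techniques can be illustrated with a toy sketch. The candidate set and the average-word-length readability proxy below are illustrative assumptions, not the paper's method; a real system would score full look-ahead continuations with a learned or formula-based readability estimator.

```python
# Toy sketch of look-ahead readability control at decoding time: among
# candidate continuations, pick the one whose estimated readability of
# the resulting text is closest to a requested target level.

def readability(text):
    # Crude proxy: average word length (longer words ~ harder text).
    words = text.split()
    return sum(len(w) for w in words) / max(len(words), 1)

def pick_next(prefix, candidates, target):
    """Choose the candidate whose look-ahead readability is nearest the target."""
    return min(candidates,
               key=lambda c: abs(readability(prefix + " " + c) - target))
```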
arXiv Detail & Related papers (2023-10-16T17:46:26Z)
- Take the Hint: Improving Arabic Diacritization with Partially-Diacritized Text [4.863310073296471]
We propose 2SDiac, a multi-source model that can effectively support optional diacritics in input to inform all predictions.
We also introduce Guided Learning, a training scheme to leverage given diacritics in input with different levels of random masking.
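The random-masking idea behind Guided Learning might be sketched as follows. The (character, diacritic) pair representation and the `keep_prob` parameter are assumptions for illustration, not the paper's actual data format.

```python
import random

# Minimal sketch of Guided Learning's random masking: during training,
# some of the given input diacritics are hidden (set to None) so the
# model learns to exploit partial hints at varying levels of coverage.

def mask_diacritics(pairs, keep_prob, rng=random):
    """Keep each input diacritic with probability `keep_prob`; mask the rest."""
    return [(ch, d if rng.random() < keep_prob else None) for ch, d in pairs]
```

Varying `keep_prob` across training batches would expose the model to everything from fully diacritized input down to plain text, matching the "different levels of random masking" described above.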
arXiv Detail & Related papers (2023-06-06T10:18:17Z)
- TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision [61.186488081379]
We propose TextFormer, a query-based end-to-end text spotter with Transformer architecture.
TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling.
It allows for mutual training and optimization of classification, segmentation, and recognition branches, resulting in deeper feature sharing.
arXiv Detail & Related papers (2023-06-06T03:37:41Z)
- Improving Scene Text Recognition for Character-Level Long-Tailed Distribution [35.14058653707104]
We propose a novel Context-Aware and Free Experts Network (CAFE-Net) using two experts.
CAFE-Net improves STR performance on languages with a large number of characters.
arXiv Detail & Related papers (2023-03-31T06:11:33Z)
- End-to-End Page-Level Assessment of Handwritten Text Recognition [69.55992406968495]
HTR systems increasingly face the end-to-end page-level transcription of a document.
Standard metrics do not account for the reading-order (RO) inconsistencies that can appear at page level.
We propose a two-fold evaluation, where the transcription accuracy and the RO goodness are considered separately.
arXiv Detail & Related papers (2023-01-14T15:43:07Z)
- Towards Weakly-Supervised Text Spotting using a Multi-Task Transformer [21.479222207347238]
We introduce TextTranSpotter (TTS), a transformer-based approach for text spotting.
TTS is trained with both fully- and weakly-supervised settings.
When trained in a fully-supervised manner, TextTranSpotter achieves state-of-the-art results on multiple benchmarks.
arXiv Detail & Related papers (2022-02-11T08:50:09Z)
- Text is Text, No Matter What: Unifying Text Recognition using Knowledge Distillation [41.43280922432707]
We argue for their unification -- we aim for a single model that can compete favourably with two separate state-of-the-art STR and HTR models.
We first show that cross-utilisation of STR and HTR models triggers significant performance drops due to differences in their inherent challenges.
We then tackle their union by introducing a knowledge distillation (KD) based framework.
arXiv Detail & Related papers (2021-07-26T10:10:34Z)
- A Novel Attention-based Aggregation Function to Combine Vision and Language [55.7633883960205]
We propose a novel fully-attentive reduction method for vision and language.
Specifically, our approach computes a set of scores for each element of each modality employing a novel variant of cross-attention.
We test our approach on image-text matching and visual question answering, building fair comparisons with other reduction choices.
arXiv Detail & Related papers (2020-04-27T18:09:46Z)
- Text Perceptron: Towards End-to-End Arbitrary-Shaped Text Spotting [49.768327669098674]
We propose an end-to-end trainable text spotting approach named Text Perceptron.
It first employs an efficient segmentation-based text detector that learns the latent text reading order and boundary information.
Then a novel Shape Transform Module (abbr. STM) is designed to transform the detected feature regions into regular morphologies.
arXiv Detail & Related papers (2020-02-17T08:07:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.