Assessing the Efficacy of Grammar Error Correction: A Human Evaluation
Approach in the Japanese Context
- URL: http://arxiv.org/abs/2402.18101v2
- Date: Thu, 29 Feb 2024 10:53:40 GMT
- Title: Assessing the Efficacy of Grammar Error Correction: A Human Evaluation
Approach in the Japanese Context
- Authors: Qiao Wang and Zheng Yuan
- Abstract summary: We evaluate the performance of the state-of-the-art sequence tagging grammar error detection and correction model (SeqTagger).
With an automatic annotation toolkit, ERRANT, we first evaluated SeqTagger's performance on error correction with human expert correction as the benchmark.
Results indicated a precision of 63.66% and a recall of 20.19% for error correction in the full dataset.
- Score: 10.047123247001714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, we evaluated the performance of the state-of-the-art sequence
tagging grammar error detection and correction model (SeqTagger) using Japanese
university students' writing samples. With an automatic annotation toolkit,
ERRANT, we first evaluated SeqTagger's performance on error correction with
human expert correction as the benchmark. Then a human-annotated approach was
adopted to evaluate SeqTagger's performance in error detection using a subset
of the writing dataset. Results indicated a precision of 63.66% and a recall of
20.19% for error correction in the full dataset. For the subset, after manual
exclusion of irrelevant errors such as semantic and mechanical ones, the model
shows an adjusted precision of 97.98% and an adjusted recall of 42.98% for
error detection, indicating the model's high accuracy but also its
conservativeness. Thematic analysis on errors undetected by the model revealed
that determiners and articles, especially the latter, were predominant.
Specifically, in terms of context-independent errors, the model occasionally
overlooked basic ones and faced challenges with overly erroneous or complex
structures. Meanwhile, context-dependent errors, notably those related to tense
and noun number, as well as those possibly influenced by the students' first
language (L1), remained particularly challenging.
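The reported precision and recall are standard span-based GEC metrics: a system edit counts as a true positive only if a human annotator proposed the same edit over the same span. A minimal sketch of this computation (the edit tuples below are hypothetical; the actual evaluation uses the ERRANT toolkit):

```python
# Minimal sketch of span-based precision/recall as used in GEC evaluation.
# Each edit is (start, end, correction); an edit counts as a true positive
# only if the same span and correction appear in the human reference set.

def precision_recall(system_edits, reference_edits):
    """Compute precision and recall over sets of (start, end, correction) edits."""
    sys_set, ref_set = set(system_edits), set(reference_edits)
    tp = len(sys_set & ref_set)  # edits matching the human benchmark
    precision = tp / len(sys_set) if sys_set else 0.0
    recall = tp / len(ref_set) if ref_set else 0.0
    return precision, recall

# Hypothetical example: the model proposes 2 edits, humans annotated 3.
system = [(0, 1, "The"), (4, 5, "went")]
reference = [(0, 1, "The"), (4, 5, "went"), (7, 8, "an")]
p, r = precision_recall(system, reference)
print(f"precision={p:.2%} recall={r:.2%}")  # → precision=100.00% recall=66.67%
```

This high-precision, low-recall pattern mirrors the "conservativeness" the study observes: the model's proposed edits are usually right, but it misses many errors the human annotators flagged.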
Related papers
- A Coin Has Two Sides: A Novel Detector-Corrector Framework for Chinese Spelling Correction [79.52464132360618]
Chinese Spelling Correction (CSC) stands as a foundational Natural Language Processing (NLP) task.
We introduce a novel approach based on error detector-corrector framework.
Our detector is designed to yield two error detection results, each characterized by high precision and recall.
arXiv Detail & Related papers (2024-09-06T09:26:45Z)
- MISMATCH: Fine-grained Evaluation of Machine-generated Text with Mismatch Error Types [68.76742370525234]
We propose a new evaluation scheme to model human judgments in 7 NLP tasks, based on the fine-grained mismatches between a pair of texts.
Inspired by the recent efforts in several NLP tasks for fine-grained evaluation, we introduce a set of 13 mismatch error types.
We show that the mismatch errors between the sentence pairs on the held-out datasets from 7 NLP tasks align well with the human evaluation.
arXiv Detail & Related papers (2023-06-18T01:38:53Z)
- Towards Fine-Grained Information: Identifying the Type and Location of Translation Errors [80.22825549235556]
Existing approaches cannot simultaneously consider error position and type.
We build an FG-TED model to predict addition and omission errors.
Experiments show that our model can identify both error type and position concurrently, and gives state-of-the-art results.
arXiv Detail & Related papers (2023-02-17T16:20:33Z)
- Contrastive Error Attribution for Finetuned Language Models [35.80256755393739]
Noisy and misannotated data is a core cause of hallucinations and unfaithful outputs in Natural Language Generation (NLG) tasks.
We introduce a framework to identify and remove low-quality training instances that lead to undesirable outputs.
We show that existing approaches for error tracing, such as gradient-based influence measures, do not perform reliably for detecting faithfulness errors.
arXiv Detail & Related papers (2022-12-21T02:28:07Z)
- uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers [23.343006562849126]
We propose a framework named uChecker to conduct unsupervised spelling error detection and correction.
Masked pretrained language models such as BERT are introduced as the backbone model.
Benefiting from various flexible masking operations, we propose a confusionset-guided masking strategy to fine-train the masked language model.
arXiv Detail & Related papers (2022-09-15T05:57:12Z)
- Improving Pre-trained Language Models with Syntactic Dependency Prediction Task for Chinese Semantic Error Recognition [52.55136323341319]
Existing Chinese text error detection mainly focuses on spelling and simple grammatical errors.
Chinese semantic errors are understudied and so complex that even humans cannot easily recognize them.
arXiv Detail & Related papers (2022-04-15T13:55:32Z)
- Detecting Errors and Estimating Accuracy on Unlabeled Data with Self-training Ensembles [38.23896575179384]
We propose a principled and practically effective framework that simultaneously addresses the two tasks.
On iWildCam, one instantiation reduces the estimation error for unsupervised accuracy estimation by at least 70% and improves the F1 score for error detection by at least 4.7%.
arXiv Detail & Related papers (2021-06-29T21:32:51Z)
- Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction [49.25830718574892]
We present a new framework named Tail-to-Tail (TtT) non-autoregressive sequence prediction.
Since most tokens are correct, they can be conveyed directly from source to target, while error positions are estimated and corrected.
Experimental results on standard datasets, especially on the variable-length datasets, demonstrate the effectiveness of TtT in terms of sentence-level Accuracy, Precision, Recall, and F1-Measure.
arXiv Detail & Related papers (2021-06-03T05:56:57Z)
- On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z)
- Correcting the Autocorrect: Context-Aware Typographical Error Correction via Training Data Augmentation [38.10429793534442]
We first draw on a small set of annotated data to compute spelling error statistics.
These are then invoked to introduce errors into substantially larger corpora.
We use it to create a set of English language error detection and correction datasets.
arXiv Detail & Related papers (2020-05-03T18:08:17Z)
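Among the related papers above, uChecker's confusionset-guided masking lends itself to a short illustration: rather than always substituting [MASK], a character is sometimes replaced by a similar-sounding or similar-looking character from its confusion set, exposing the model to plausible spelling errors during training. A toy sketch (the confusion sets and function below are hypothetical, not uChecker's actual implementation):

```python
import random

# Sketch of a confusionset-guided masking strategy: with some probability a
# character is replaced by a member of its confusion set (a phonetically or
# visually similar character) instead of the generic [MASK] token, so the
# model learns to recognize realistic spelling errors.
# The confusion sets here are toy examples.
CONFUSION = {"他": ["她", "它"], "在": ["再"]}

def confusionset_mask(chars, mask_prob=0.15, confuse_prob=0.5, seed=0):
    rng = random.Random(seed)  # seeded for reproducibility
    out = []
    for ch in chars:
        if rng.random() < mask_prob:
            if ch in CONFUSION and rng.random() < confuse_prob:
                out.append(rng.choice(CONFUSION[ch]))  # plausible confusion
            else:
                out.append("[MASK]")                   # standard BERT masking
        else:
            out.append(ch)
    return out
```

For example, `confusionset_mask(list("他在家"), mask_prob=1.0, confuse_prob=1.0)` replaces 他 and 在 with confusable characters but falls back to [MASK] for 家, which has no confusion set in this toy table.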
This list is automatically generated from the titles and abstracts of the papers in this site.