Interpretability for Language Learners Using Example-Based Grammatical
Error Correction
- URL: http://arxiv.org/abs/2203.07085v1
- Date: Mon, 14 Mar 2022 13:15:00 GMT
- Title: Interpretability for Language Learners Using Example-Based Grammatical
Error Correction
- Authors: Masahiro Kaneko, Sho Takase, Ayana Niwa, Naoaki Okazaki
- Abstract summary: We introduce an Example-Based GEC (EB-GEC) that presents examples to language learners as a basis for a correction result.
Experiments demonstrate that the examples presented by EB-GEC help language learners decide to accept or refuse suggestions from the GEC output.
- Score: 27.850970793739933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grammatical Error Correction (GEC) should not focus only on high accuracy of
corrections but also on interpretability for language learning. However,
existing neural-based GEC models mainly aim at improving accuracy, and their
interpretability has not been explored. A promising approach for improving
interpretability is an example-based method, which uses similar retrieved
examples to generate corrections. In addition, examples are beneficial in
language learning, helping learners understand the basis of grammatically
incorrect/correct texts and improve their confidence in writing. Therefore, we
hypothesize that incorporating an example-based method into GEC can improve
interpretability as well as support language learners. In this study, we
introduce an Example-Based GEC (EB-GEC) that presents examples to language
learners as a basis for a correction result. The examples consist of pairs of
correct and incorrect sentences similar to a given input and its predicted
correction. Experiments demonstrate that the examples presented by EB-GEC help
language learners decide to accept or refuse suggestions from the GEC output.
Furthermore, the experiments also show that retrieved examples improve the
accuracy of corrections.
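The abstract describes retrieving pairs of incorrect/correct sentences similar to the input as the basis for a correction. A minimal sketch of such a retrieval step is shown below; the datastore, the token-overlap cosine similarity, and all sentences are illustrative assumptions, not the paper's actual implementation (which generates corrections with a neural model and retrieves examples for interpretability).

```python
# Minimal sketch of example retrieval for an example-based GEC system.
# The datastore contents and the similarity measure are illustrative only.

from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two token lists, via token-count vectors."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical datastore of (incorrect, correct) sentence pairs.
datastore = [
    ("He go to school every day.", "He goes to school every day."),
    ("She have two cats.", "She has two cats."),
    ("I am agree with you.", "I agree with you."),
]

def retrieve_examples(source, k=2):
    """Return the k stored pairs whose incorrect side is most similar to the input."""
    scored = [(cosine(source.split(), inc.split()), (inc, cor))
              for inc, cor in datastore]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [pair for _, pair in scored[:k]]

examples = retrieve_examples("He go to work every day.")
```

A learner shown the retrieved pair can compare it with the system's suggested correction and decide whether to accept it.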
Related papers
- XCB: an effective contextual biasing approach to bias cross-lingual phrases in speech recognition [9.03519622415822]
This study introduces a Cross-lingual Contextual Biasing (XCB) module.
We augment a pre-trained ASR model for the dominant language by integrating an auxiliary language biasing module and a language-specific loss.
Experimental results conducted on our in-house code-switching dataset have validated the efficacy of our approach.
arXiv Detail & Related papers (2024-08-20T04:00:19Z)
- EXCGEC: A Benchmark of Edit-wise Explainable Chinese Grammatical Error Correction [21.869368698234247]
This paper introduces the task of EXplainable GEC (EXGEC), which focuses on the integral role of both correction and explanation tasks.
We propose EXCGEC, a tailored benchmark for Chinese EXGEC consisting of 8,216 explanation-augmented samples.
arXiv Detail & Related papers (2024-07-01T03:06:41Z)
- Grammatical Error Correction via Mixed-Grained Weighted Training [68.94921674855621]
Grammatical Error Correction (GEC) aims to automatically correct grammatical errors in natural texts.
MainGEC designs token-level and sentence-level training weights based on inherent discrepancies in accuracy and potential diversity of data annotation.
arXiv Detail & Related papers (2023-11-23T08:34:37Z)
- Controlled Generation with Prompt Insertion for Natural Language Explanations in Grammatical Error Correction [50.66922361766939]
It is crucial to ensure that users understand the reasons for corrections.
Existing studies present tokens, examples, and hints as the basis for a correction, but they do not directly explain the reasons behind it.
Generating explanations for GEC corrections involves aligning input and output tokens, identifying correction points, and presenting corresponding explanations consistently.
This study introduces a method called controlled generation with Prompt Insertion (PI) so that LLMs can explain the reasons for corrections in natural language.
arXiv Detail & Related papers (2023-09-20T16:14:10Z)
- Chinese Spelling Correction as Rephrasing Language Model [63.65217759957206]
We study Chinese Spelling Correction (CSC), which aims to detect and correct potential spelling errors in a given sentence.
Current state-of-the-art methods regard CSC as a sequence tagging task and fine-tune BERT-based models on sentence pairs.
We propose Rephrasing Language Model (ReLM), where the model is trained to rephrase the entire sentence by infilling additional slots, instead of character-to-character tagging.
arXiv Detail & Related papers (2023-08-17T06:04:28Z)
- Enhancing Grammatical Error Correction Systems with Explanations [45.69642286275681]
Grammatical error correction systems improve written communication by detecting and correcting language mistakes.
We introduce EXPECT, a dataset annotated with evidence words and grammatical error types.
Human evaluation verifies our explainable GEC system's explanations can assist second-language learners in determining whether to accept a correction suggestion.
arXiv Detail & Related papers (2023-05-25T03:00:49Z)
- Improving Few-Shot Performance of Language Models via Nearest Neighbor Calibration [12.334422701057674]
We propose a novel nearest-neighbor calibration framework for in-context learning.
It is inspired by the phenomenon that the in-context learning paradigm produces incorrect labels when inferring on training instances.
Experiments on various few-shot text classification tasks demonstrate that our method significantly improves in-context learning.
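The summary above describes correcting an in-context learner's predictions with the labels of nearby training instances. A toy sketch of one plausible interpolation scheme follows; the feature vectors, the interpolation weight, and the specific formula are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch of nearest-neighbor calibration for a classifier's
# label probabilities. All data and the weight `lam` are assumptions.

from collections import Counter
import math

def knn_label_distribution(query, train, k=3):
    """Distribution over labels among the k training points nearest to query."""
    nearest = sorted(train, key=lambda ex: math.dist(query, ex[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return {lab: n / k for lab, n in votes.items()}

def calibrate(lm_probs, knn_probs, lam=0.5):
    """Interpolate the model's label probabilities with the kNN distribution."""
    labels = set(lm_probs) | set(knn_probs)
    return {lab: lam * lm_probs.get(lab, 0.0) + (1 - lam) * knn_probs.get(lab, 0.0)
            for lab in labels}

# Hypothetical 2-D representations of labeled training instances.
train = [((0.0, 0.0), "neg"), ((0.1, 0.2), "neg"),
         ((1.0, 1.0), "pos"), ((0.9, 1.1), "pos")]

knn = knn_label_distribution((0.95, 1.0), train, k=3)
final = calibrate({"pos": 0.4, "neg": 0.6}, knn)
```

Here the neighbors' majority label can outvote a miscalibrated model preference after interpolation.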
arXiv Detail & Related papers (2022-12-05T12:49:41Z)
- A Syntax-Guided Grammatical Error Correction Model with Dependency Tree Correction [83.14159143179269]
Grammatical Error Correction (GEC) is a task of detecting and correcting grammatical errors in sentences.
We propose a syntax-guided GEC model (SG-GEC) which adopts the graph attention mechanism to utilize the syntactic knowledge of dependency trees.
We evaluate our model on public benchmarks of GEC task and it achieves competitive results.
arXiv Detail & Related papers (2021-11-05T07:07:48Z)
- Neural Quality Estimation with Multiple Hypotheses for Grammatical Error Correction [98.31440090585376]
Grammatical Error Correction (GEC) aims to correct writing errors and help language learners improve their writing skills.
Existing GEC models tend to produce spurious corrections or fail to detect many errors.
This paper presents the Neural Verification Network (VERNet) for GEC quality estimation with multiple hypotheses.
arXiv Detail & Related papers (2021-05-10T15:04:25Z)
- On the Robustness of Language Encoders against Grammatical Errors [66.05648604987479]
We collect real grammatical errors from non-native speakers and conduct adversarial attacks to simulate these errors on clean text data.
Results confirm that the performance of all tested models is affected but the degree of impact varies.
arXiv Detail & Related papers (2020-05-12T11:01:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.