GrammarGPT: Exploring Open-Source LLMs for Native Chinese Grammatical
Error Correction with Supervised Fine-Tuning
- URL: http://arxiv.org/abs/2307.13923v2
- Date: Thu, 17 Aug 2023 19:58:42 GMT
- Title: GrammarGPT: Exploring Open-Source LLMs for Native Chinese Grammatical
Error Correction with Supervised Fine-Tuning
- Authors: Yaxin Fan, Feng Jiang, Peifeng Li, and Haizhou Li
- Abstract summary: We introduce GrammarGPT, an open-source Large Language Model, to explore its potential for native Chinese grammatical error correction.
For grammatical errors with clues, we propose a heuristic method to guide ChatGPT to generate ungrammatical sentences by providing those clues.
For grammatical errors without clues, we collect ungrammatical sentences from publicly available websites and manually correct them.
- Score: 46.75740002185691
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Grammatical error correction aims to correct ungrammatical sentences
automatically. Recently, some work has demonstrated the excellent capabilities
of closed-source Large Language Models (LLMs, e.g., ChatGPT) in grammatical
error correction. However, the potential of open-source LLMs remains
unexplored. In this paper, we introduced GrammarGPT, an open-source LLM, to
preliminarily explore its potential for native Chinese grammatical error
correction. The core recipe of GrammarGPT is to leverage a hybrid dataset of
ChatGPT-generated and human-annotated data. For grammatical errors with clues, we
proposed a heuristic method to guide ChatGPT to generate ungrammatical
sentences by providing those clues. For grammatical errors without clues, we
collected ungrammatical sentences from publicly available websites and manually
corrected them. In addition, we employed an error-invariant augmentation method
to enhance the ability of the model to correct native Chinese grammatical
errors. We ultimately constructed about 1k parallel sentence pairs and used them
to fine-tune open-source LLMs (e.g., Phoenix, released by The Chinese
University of Hong Kong, Shenzhen) with instruction tuning. The experimental
results show that GrammarGPT significantly outperforms the existing SOTA system.
Although the model has 20x more parameters than the SOTA baseline, it requires
1200x less data for instruction tuning,
illustrating the potential of open-source LLMs on native CGEC. Our GrammarGPT
ranks 3rd on NLPCC2023 SharedTask1, demonstrating our approach's
effectiveness. The code and data are available at
https://github.com/FreedomIntelligence/GrammarGPT.
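
The data-construction recipe above (clue-guided generation of ungrammatical sentences with ChatGPT, error-invariant augmentation, and instruction tuning on the resulting parallel pairs) can be illustrated with a short sketch. The Python below is a hypothetical illustration under stated assumptions: the prompt wording, the entity-substitution table, and the instruction-record schema are placeholders of ours, not the released GrammarGPT code.

```python
# Hypothetical sketch of the hybrid data construction described in the abstract.
# Prompt wording, entity list, and record schema are assumptions, not the
# GrammarGPT codebase.
import json

# (a) Clue-guided generation: ask ChatGPT to insert a grammatical error
# built around a given clue word (e.g. the redundant "大约...左右" pattern).
def build_clue_prompt(grammatical_sentence: str, clue: str) -> str:
    return (
        "Rewrite the following Chinese sentence so that it contains a "
        f"grammatical error involving the clue word '{clue}'. "
        "Return only the rewritten sentence.\n"
        f"Sentence: {grammatical_sentence}"
    )

# (b) Error-invariant augmentation: replace a named entity with a similar one
# in BOTH the ungrammatical source and the corrected target, so the error
# pattern itself stays untouched while the surface content varies.
ENTITY_SUBSTITUTES = {"北京": "上海", "小明": "小红"}  # illustrative only

def error_invariant_augment(src: str, tgt: str) -> tuple[str, str]:
    for old, new in ENTITY_SUBSTITUTES.items():
        if old in src and old in tgt:
            src, tgt = src.replace(old, new), tgt.replace(old, new)
    return src, tgt

# (c) Package a parallel pair as an instruction-tuning record.
def to_instruction_record(src: str, tgt: str) -> dict:
    return {
        "instruction": "请纠正下面句子中的语法错误。",  # "Correct the grammatical errors in this sentence."
        "input": src,
        "output": tgt,
    }

if __name__ == "__main__":
    print(build_clue_prompt("小明三点左右到达北京。", "左右"))
    pair = error_invariant_augment("小明大约三点左右到达北京。", "小明三点左右到达北京。")
    print(json.dumps(to_instruction_record(*pair), ensure_ascii=False, indent=2))
```

Under the paper's setup, roughly 1k such records would then be used to instruction-tune an open-source LLM such as Phoenix.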
Related papers
- Can LLMs Really Learn to Translate a Low-Resource Language from One Grammar Book? [6.905647501099997]
Extremely low-resource (XLR) languages lack substantial corpora for training NLP models.
Machine Translation from One Book suggests that prompting long-context LLMs with one grammar book enables English-Kalamang translation.
We investigate whether the book's grammatical explanations or its parallel examples are most effective for learning XLR translation.
arXiv Detail & Related papers (2024-09-27T21:27:32Z) - How Ready Are Generative Pre-trained Large Language Models for Explaining Bengali Grammatical Errors? [0.4857223913212445]
Grammatical error correction (GEC) tools, powered by advanced generative artificial intelligence (AI), competently correct linguistic inaccuracies in user input.
However, they often fall short in providing essential natural language explanations.
In low-resource languages such as Bengali, grammatical error explanation (GEE) systems should not only correct sentences but also provide explanations for the errors.
arXiv Detail & Related papers (2024-05-27T15:56:45Z) - Prompting open-source and commercial language models for grammatical
error correction of English learner text [19.192210777082053]
Large language models (LLMs) can be prompted to produce text that is fluent and grammatical.
We evaluate how well LLMs can perform at grammatical error correction (GEC) by measuring their performance on established benchmark datasets.
We find that several open-source models outperform commercial ones on minimal edit benchmarks, and that in some settings zero-shot prompting is just as competitive as few-shot prompting.
arXiv Detail & Related papers (2024-01-15T14:19:47Z) - Native Language Identification with Large Language Models [60.80452362519818]
We show that GPT models are proficient at NLI classification, with GPT-4 setting a new performance record of 91.7% on the benchmark TOEFL11 test set in a zero-shot setting.
We also show that unlike previous fully-supervised settings, LLMs can perform NLI without being limited to a set of known classes.
arXiv Detail & Related papers (2023-12-13T00:52:15Z) - GEE! Grammar Error Explanation with Large Language Models [64.16199533560017]
We propose the task of grammar error explanation, where a system needs to provide one-sentence explanations for each grammatical error in a pair of erroneous and corrected sentences.
We analyze the capability of GPT-4 in grammar error explanation, and find that it only produces explanations for 60.2% of the errors using one-shot prompting.
We develop a two-step pipeline that leverages fine-tuned and prompted large language models to perform structured atomic token edit extraction.
arXiv Detail & Related papers (2023-11-16T02:45:47Z) - TIM: Teaching Large Language Models to Translate with Comparison [78.66926087162672]
We propose a novel framework that uses comparative examples to teach LLMs to translate.
Our approach involves presenting the model with examples of correct and incorrect translations and using a preference loss to guide the model's learning.
Our findings offer a new perspective on fine-tuning LLMs for translation tasks and provide a promising solution for generating high-quality translations.
arXiv Detail & Related papers (2023-07-10T08:15:40Z) - A Syntax-Guided Grammatical Error Correction Model with Dependency Tree
Correction [83.14159143179269]
Grammatical Error Correction (GEC) is a task of detecting and correcting grammatical errors in sentences.
We propose a syntax-guided GEC model (SG-GEC) which adopts the graph attention mechanism to utilize the syntactic knowledge of dependency trees.
We evaluate our model on public GEC benchmarks, and it achieves competitive results.
arXiv Detail & Related papers (2021-11-05T07:07:48Z) - LM-Critic: Language Models for Unsupervised Grammatical Error Correction [128.9174409251852]
We show how to leverage a pretrained language model (LM) in defining an LM-Critic, which judges a sentence to be grammatical if the LM assigns it a higher probability than its local perturbations.
We apply this LM-Critic and BIFI (Break-It-Fix-It), along with a large set of unlabeled sentences, to bootstrap realistic ungrammatical/grammatical pairs for training a corrector.
arXiv Detail & Related papers (2021-09-14T17:06:43Z)
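
For the LM-Critic entry above, the local-optimum criterion can be sketched as follows. This is a minimal, hedged illustration rather than the authors' implementation: GPT-2 stands in as the scoring LM and the perturbation neighborhood is reduced to single-word deletions, whereas the paper uses richer edit-based perturbations and pairs the critic with BIFI to mine training pairs.

```python
# Minimal sketch of LM-Critic's local-optimum criterion (not the authors' code).
# GPT-2 as scoring LM and word-deletion perturbations are assumptions for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def log_prob(sentence: str) -> float:
    """Approximate total log-probability of the sentence under the LM."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood per token
    return -loss.item() * ids.size(1)

def perturbations(sentence: str) -> list[str]:
    """Toy neighborhood: drop one word at a time."""
    words = sentence.split()
    return [" ".join(words[:i] + words[i + 1:]) for i in range(len(words)) if len(words) > 1]

def lm_critic_is_grammatical(sentence: str) -> bool:
    """Local-optimum criterion: grammatical iff no neighbor scores higher."""
    score = log_prob(sentence)
    return all(log_prob(p) <= score for p in perturbations(sentence))

if __name__ == "__main__":
    print(lm_critic_is_grammatical("She goes to school every day ."))
```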