A BERT-based Unsupervised Grammatical Error Correction Framework
- URL: http://arxiv.org/abs/2303.17367v1
- Date: Thu, 30 Mar 2023 13:29:49 GMT
- Title: A BERT-based Unsupervised Grammatical Error Correction Framework
- Authors: Nankai Lin, Hongbin Zhang, Menglan Shen, Yu Wang, Shengyi Jiang, Aimin Yang
- Abstract summary: Grammatical error correction (GEC) is a challenging task in natural language processing.
For low-resource languages, current unsupervised GEC approaches based on language model scoring perform well.
This study proposes a BERT-based unsupervised GEC framework in which GEC is viewed as a multi-class classification task.
- Score: 9.431453382607845
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grammatical error correction (GEC) is a challenging task in natural
language processing. While progress continues to be made for widely studied
languages such as English and Chinese, relatively little work has been done for
low-resource languages due to the lack of large annotated corpora. For
low-resource languages, current unsupervised GEC approaches based on language
model scoring perform well, but pre-trained language models remain
underexplored in this context. This study proposes a BERT-based unsupervised
GEC framework in which GEC is viewed as a multi-class classification task. The
framework contains three modules: a data flow construction module, a sentence
perplexity scoring module, and an error detecting and correcting module. We
propose a novel pseudo-perplexity scoring method to evaluate a sentence's
probable correctness, and we construct a Tagalog corpus for Tagalog GEC
research. Our framework obtains competitive performance on the constructed
Tagalog corpus and on an open-source Indonesian corpus, demonstrating that it
is complementary to baseline methods for low-resource GEC tasks.
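The abstract does not include code, but its sentence perplexity scoring module builds on masked-LM pseudo-perplexity. As a rough illustration only (the paper proposes its own novel scoring variant, which this does not reproduce), here is a minimal sketch of standard BERT pseudo-perplexity using Hugging Face transformers; the multilingual checkpoint name is an assumption, not necessarily the paper's model.

```python
# A minimal sketch of masked-LM pseudo-perplexity, assuming a multilingual BERT
# checkpoint; the paper's novel scoring method may differ from this baseline.
import math
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")
model.eval()

def pseudo_perplexity(sentence: str) -> float:
    """Mask each token in turn and average its negative log-likelihood."""
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    nll, count = 0.0, 0
    for i in range(1, input_ids.size(0) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked.unsqueeze(0)).logits
        log_probs = torch.log_softmax(logits[0, i], dim=-1)
        nll -= log_probs[input_ids[i]].item()
        count += 1
    return math.exp(nll / max(count, 1))  # lower = more plausible sentence
```

In an unsupervised setup, candidate corrections can then be ranked by this score, keeping the one with the lowest pseudo-perplexity.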
Related papers
- A Simple Yet Effective Corpus Construction Framework for Indonesian Grammatical Error Correction [7.378963590826542]
We present a framework for constructing GEC corpora in low-resource languages.
Specifically, we focus on Indonesian as our research language.
We construct an evaluation corpus for Indonesian GEC using the proposed framework.
arXiv Detail & Related papers (2024-10-28T08:44:56Z)
- Chain-of-Translation Prompting (CoTR): A Novel Prompting Technique for Low Resource Languages [0.4499833362998489]
Chain of Translation Prompting (CoTR) is a novel strategy designed to enhance the performance of language models in low-resource languages.
CoTR restructures prompts to first translate the input context from a low-resource language into a higher-resource language, such as English.
We demonstrate the effectiveness of this method through a case study on the low-resource Indic language Marathi.
arXiv Detail & Related papers (2024-09-06T17:15:17Z)
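A rough sketch of what CoTR-style prompt restructuring looks like; the paper's actual templates are not reproduced here, and `generate` is a hypothetical stand-in for any LLM call.

```python
# A sketch of CoTR-style prompt restructuring: translate first, then solve.
# `generate` is a hypothetical stand-in for any LLM API, not code from the paper.
def cotr_prompt(text: str, source_language: str, task_instruction: str) -> str:
    return (
        f"Step 1: Translate the following {source_language} text into English.\n"
        f"Text: {text}\n"
        f"Step 2: Using the English translation, {task_instruction}"
    )

def generate(prompt: str) -> str:  # hypothetical LLM client
    raise NotImplementedError("wire up a model or API of your choice")

# Example, mirroring the paper's Marathi case study:
# generate(cotr_prompt(marathi_sentence, "Marathi", "classify the sentiment."))
```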
- Contextual Spelling Correction with Language Model for Low-resource Setting [0.0]
A small-scale word-based transformer LM is trained to provide the SC model with contextual understanding.
The probability of an error occurring (the error model) is extracted from the corpus.
The LM and the error model are combined into the SC model through the well-known noisy channel framework.
arXiv Detail & Related papers (2024-04-28T05:29:35Z)
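The noisy channel decision rule named above picks the candidate w maximizing P_LM(w | context) * P_err(observed | w). A minimal sketch, assuming placeholder scoring callables rather than the paper's trained LM and extracted error model:

```python
# A sketch of the noisy channel decision rule for spelling correction. The
# candidate list and the two scoring callables are placeholders, not the
# paper's components.
import math
from typing import Callable

def noisy_channel_correct(
    observed: str,
    context: list[str],
    candidates: list[str],
    lm_logprob: Callable[[str, list[str]], float],  # log P_LM(word | context)
    error_logprob: Callable[[str, str], float],     # log P_err(observed | word)
) -> str:
    """Return argmax_w  log P_LM(w | context) + log P_err(observed | w)."""
    best, best_score = observed, -math.inf
    for w in candidates:
        score = lm_logprob(w, context) + error_logprob(observed, w)
        if score > best_score:
            best, best_score = w, score
    return best
```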
- Cross-Lingual NER for Financial Transaction Data in Low-Resource Languages [70.25418443146435]
We propose an efficient modeling framework for cross-lingual named entity recognition in semi-structured text data.
We employ two independent datasets of SMSs in English and Arabic, each carrying semi-structured banking transaction information.
With access to only 30 labeled samples, our model can generalize the recognition of merchants, amounts, and other fields from English to Arabic.
arXiv Detail & Related papers (2023-07-16T00:45:42Z)
- CROP: Zero-shot Cross-lingual Named Entity Recognition with Multilingual Labeled Sequence Translation [113.99145386490639]
Cross-lingual NER can transfer knowledge between languages via aligned cross-lingual representations or machine translation results.
We propose a Cross-lingual Entity Projection framework (CROP) to enable zero-shot cross-lingual NER.
We adopt a multilingual labeled sequence translation model to project the tagged sequence back to the target language and label the target raw sentence.
arXiv Detail & Related papers (2022-10-13T13:32:36Z)
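A minimal sketch of the labeled-sequence-translation idea behind CROP, assuming an illustrative XML-style marker format and a placeholder `translate` stub; neither is CROP's actual model or format.

```python
# A sketch of label projection via labeled sequence translation: wrap source
# entities in markers, translate the marked sentence, then read the markers
# back off the translation. Marker format and `translate` are assumptions.
import re

def mark(tokens: list[str], spans: list[tuple[int, int, str]]) -> str:
    """spans: non-overlapping (start, end, label) token spans."""
    out = list(tokens)
    for start, end, label in sorted(spans, reverse=True):  # right-to-left keeps indices valid
        out[start:end] = [f"<{label}>"] + out[start:end] + [f"</{label}>"]
    return " ".join(out)

def unmark(marked: str) -> list[tuple[str, str]]:
    """Recover (entity text, label) pairs from a marked translation."""
    return [(text.strip(), label)
            for label, text in re.findall(r"<(\w+)>(.*?)</\1>", marked)]

def translate(marked: str) -> str:  # placeholder for a marker-preserving MT model
    raise NotImplementedError

# e.g. mark(["John", "works", "at", "Google"], [(0, 1, "PER"), (3, 4, "ORG")])
# -> "<PER> John </PER> works at <ORG> Google </ORG>"
```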
- Improving Pre-trained Language Models with Syntactic Dependency Prediction Task for Chinese Semantic Error Recognition [52.55136323341319]
Existing Chinese text error detection mainly focuses on spelling and simple grammatical errors.
Chinese semantic errors are understudied and so complex that humans cannot easily recognize them.
arXiv Detail & Related papers (2022-04-15T13:55:32Z)
- A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model [100.67378875773495]
We propose a generic and language-independent strategy for multilingual Grammatical Error Correction.
Our approach creates diverse parallel GEC data without any language-specific operations.
It achieves state-of-the-art results on the NLPCC 2018 Task 2 dataset (Chinese) and obtains competitive performance on Falko-Merlin (German) and RULEC-GEC (Russian).
arXiv Detail & Related papers (2022-01-26T02:10:32Z)
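The snippet above does not spell out how the parallel data is created; one common language-independent recipe is random token-level corruption of clean sentences. A sketch under that assumption (this generic corruption scheme is not necessarily the paper's procedure):

```python
# A sketch of language-independent synthetic GEC data: corrupt clean sentences
# with random token-level edits to form (noisy, clean) training pairs. The
# corruption scheme here is a generic assumption, not the paper's exact method.
import random

def corrupt(tokens: list[str], p: float = 0.15) -> list[str]:
    noisy = []
    for tok in tokens:
        r = random.random()
        if r < p / 3:
            continue                             # deletion
        elif r < 2 * p / 3:
            noisy.extend([tok, tok])             # insertion (duplication)
        elif r < p:
            noisy.append(random.choice(tokens))  # substitution
        else:
            noisy.append(tok)                    # keep unchanged
    return noisy

# One synthetic training pair: (source, target) = (corrupt(clean), clean)
```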
- LM-Critic: Language Models for Unsupervised Grammatical Error Correction [128.9174409251852]
We show how to leverage a pretrained language model (LM) to define an LM-Critic, which judges whether a sentence is grammatical.
We apply this LM-Critic and BIFI along with a large set of unlabeled sentences to bootstrap realistic ungrammatical / grammatical pairs for training a corrector.
arXiv Detail & Related papers (2021-09-14T17:06:43Z)
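A minimal sketch of the LM-Critic criterion: a sentence is judged grammatical iff it scores at least as well as every sentence in a local neighborhood of edited variants. The `sentence_perplexity` scorer and `perturb` neighborhood generator are placeholders.

```python
# A sketch of the LM-Critic local-optimum criterion. Both callables are
# placeholders: an LM perplexity scorer and an edit-based neighborhood generator.
from typing import Callable

def lm_critic(
    sentence: str,
    sentence_perplexity: Callable[[str], float],
    perturb: Callable[[str, int], list[str]],
    k: int = 100,
) -> bool:
    base = sentence_perplexity(sentence)
    # Grammatical iff no nearby variant scores strictly better than the input.
    return all(base <= sentence_perplexity(s) for s in perturb(sentence, k))
```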
- Continual Mixed-Language Pre-Training for Extremely Low-Resource Neural Machine Translation [53.22775597051498]
We present a continual pre-training framework on mBART to effectively adapt it to unseen languages.
Results show that our method can consistently improve fine-tuning performance over the mBART baseline.
Our approach also boosts the performance on translation pairs where both languages are seen in the original mBART's pre-training.
arXiv Detail & Related papers (2021-05-09T14:49:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.