Contextual Similarity is More Valuable than Character Similarity:
Curriculum Learning for Chinese Spell Checking
- URL: http://arxiv.org/abs/2207.09217v1
- Date: Sun, 17 Jul 2022 03:12:27 GMT
- Title: Contextual Similarity is More Valuable than Character Similarity:
Curriculum Learning for Chinese Spell Checking
- Authors: Ding Zhang, Yinghui Li, Qingyu Zhou, Shirong Ma, Yangning Li, Yunbo
Cao, Hai-Tao Zheng
- Abstract summary: The Chinese Spell Checking (CSC) task aims to detect and correct Chinese spelling errors.
To make better use of contextual similarity, we propose a simple yet effective curriculum learning framework for the CSC task.
With the help of our model-agnostic framework, existing CSC models are trained from easy to difficult, just as humans learn Chinese characters.
- Score: 26.93594761258908
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The Chinese Spell Checking (CSC) task aims to detect and correct Chinese
spelling errors. In recent years, related research has focused on introducing
character similarity from the confusion set to enhance CSC models, ignoring the
context of characters, which contains richer information. To make better use of
contextual similarity, we propose a simple yet effective curriculum learning
framework for the CSC task. With the help of our model-agnostic framework,
existing CSC models are trained from easy to difficult, just as humans learn
Chinese characters, and achieve further performance improvements. Extensive
experiments and detailed analyses on the widely used SIGHAN datasets show that
our method outperforms previous state-of-the-art methods.
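To make the easy-to-difficult idea concrete, the minimal sketch below orders CSC training pairs by a difficulty score and feeds them to an otherwise unchanged model in stages. The scoring heuristic (fraction of corrupted characters plus a length term) and the staged schedule are assumptions for illustration; the paper derives difficulty from contextual similarity rather than this heuristic.

```python
# Minimal curriculum-learning sketch for CSC training (illustrative only).
# The difficulty heuristic and the staged schedule are assumptions, not the
# paper's actual criteria, which are based on contextual similarity.

from typing import List, Tuple

def difficulty(src: str, tgt: str) -> float:
    """Hypothetical difficulty score: share of corrupted characters,
    lightly weighted by sentence length."""
    errors = sum(1 for a, b in zip(src, tgt) if a != b)
    return errors / max(len(src), 1) + 0.01 * len(src)

def curriculum_batches(pairs: List[Tuple[str, str]],
                       num_stages: int = 3) -> List[List[Tuple[str, str]]]:
    """Split training pairs into easy-to-difficult stages."""
    ranked = sorted(pairs, key=lambda p: difficulty(*p))
    stage_size = (len(ranked) + num_stages - 1) // num_stages
    return [ranked[i:i + stage_size] for i in range(0, len(ranked), stage_size)]

# Usage: train on progressively harder stages with any existing CSC model.
# for stage, subset in enumerate(curriculum_batches(train_pairs)):
#     model.fit(subset)   # `model` is any off-the-shelf CSC model
```

Because the framework only reorders training data, it can wrap around any existing CSC model, which is what makes it model-agnostic.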
Related papers
- DISC: Plug-and-Play Decoding Intervention with Similarity of Characters for Chinese Spelling Check [37.44133266050293]
We propose a lightweight, plug-and-play DISC (i.e., decoding intervention with similarity of characters) module for Chinese spelling check (CSC) models.
DISC measures phonetic and glyph similarities between characters and incorporates this similarity information only during the inference phase.
Experiments on three CSC benchmarks demonstrate that our proposed method significantly improves model performance, approaching and even surpassing the current state-of-the-art models.
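A hedged sketch of the general idea of a decoding-time intervention: adjust a CSC model's output distribution at inference with precomputed character-similarity scores. The mixing weight `alpha` and the `similarity` lookup table are assumptions, not DISC's actual formulation.

```python
# Illustrative decoding-time intervention (not the official DISC module):
# boost candidate characters that are phonetically/visually similar to the
# observed character, without touching model training.

import torch

def intervene_logits(logits: torch.Tensor,
                     observed_ids: torch.Tensor,
                     similarity: torch.Tensor,
                     alpha: float = 1.0) -> torch.Tensor:
    """
    logits:       (seq_len, vocab_size) raw scores from any CSC model
    observed_ids: (seq_len,) ids of the input characters
    similarity:   (vocab_size, vocab_size) precomputed phonetic/glyph similarity
    alpha:        assumed mixing weight
    """
    # For each position, add the similarity between the observed character
    # and every candidate character to the model's logits.
    bonus = similarity[observed_ids]          # (seq_len, vocab_size)
    return logits + alpha * bonus

# corrected_ids = intervene_logits(model_logits, input_ids, sim_matrix).argmax(-1)
```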
arXiv Detail & Related papers (2024-12-17T12:44:06Z)
- EdaCSC: Two Easy Data Augmentation Methods for Chinese Spelling Correction [0.0]
Chinese Spelling Correction (CSC) aims to detect and correct spelling errors in Chinese sentences caused by phonetic or visual similarities.
We propose two data augmentation methods to address these limitations.
Firstly, we augment the dataset by either splitting long sentences into shorter ones or reducing typos in sentences with multiple typos.
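A rough sketch of the two augmentation ideas just mentioned: splitting long aligned pairs at punctuation, and thinning out typos in pairs that contain several of them. The length threshold and the typo-reduction rule are assumptions, not the exact EdaCSC procedure.

```python
# Illustrative data augmentation for CSC (not the exact EdaCSC procedure).
import re
import random
from typing import List, Tuple

def split_long(src: str, tgt: str, max_len: int = 30) -> List[Tuple[str, str]]:
    """Split an aligned (source, target) pair at Chinese punctuation
    when the sentence exceeds an assumed length threshold."""
    if len(src) <= max_len or len(src) != len(tgt):
        return [(src, tgt)]
    pieces, start = [], 0
    for m in re.finditer(r"[，。；！？]", src):
        end = m.end()
        pieces.append((src[start:end], tgt[start:end]))
        start = end
    if start < len(src):
        pieces.append((src[start:], tgt[start:]))
    return pieces

def reduce_typos(src: str, tgt: str, keep: int = 1) -> Tuple[str, str]:
    """If a pair has several typos, randomly revert all but `keep` of them
    to the correct character, yielding an easier training pair."""
    typo_pos = [i for i, (a, b) in enumerate(zip(src, tgt)) if a != b]
    if len(typo_pos) <= keep:
        return src, tgt
    revert = random.sample(typo_pos, len(typo_pos) - keep)
    chars = list(src)
    for i in revert:
        chars[i] = tgt[i]
    return "".join(chars), tgt
```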
arXiv Detail & Related papers (2024-09-08T14:29:10Z)
- C-LLM: Learn to Check Chinese Spelling Errors Character by Character [61.53865964535705]
We propose C-LLM, a Large Language Model-based Chinese Spell Checking method that learns to check errors character by character.
C-LLM achieves an average improvement of 10% over existing methods.
arXiv Detail & Related papers (2024-06-24T11:16:31Z)
- Chinese Text Recognition with A Pre-Trained CLIP-Like Model Through Image-IDS Aligning [61.34060587461462]
We propose a two-stage framework for Chinese Text Recognition (CTR).
We pre-train a CLIP-like model through aligning printed character images and Ideographic Description Sequences (IDS).
This pre-training stage simulates humans recognizing Chinese characters and obtains the canonical representation of each character.
The learned representations are employed to supervise the CTR model, such that traditional single-character recognition can be improved to text-line recognition.
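The pre-training described above is CLIP-like, so a contrastive alignment loss between character-image embeddings and IDS-sequence embeddings is a reasonable mental model. The sketch below assumes placeholder encoders produce the two embedding sets; the temperature and symmetric cross-entropy form are assumptions, not the paper's exact objective.

```python
# Sketch of CLIP-style contrastive alignment between printed character images
# and their Ideographic Description Sequences (IDS); the encoders producing
# the embeddings are placeholders, not the paper's architecture.

import torch
import torch.nn.functional as F

def clip_alignment_loss(image_emb: torch.Tensor,
                        ids_emb: torch.Tensor,
                        temperature: float = 0.07) -> torch.Tensor:
    """image_emb, ids_emb: (batch, dim) embeddings of the same characters,
    produced by an image encoder and an IDS sequence encoder respectively."""
    image_emb = F.normalize(image_emb, dim=-1)
    ids_emb = F.normalize(ids_emb, dim=-1)
    logits = image_emb @ ids_emb.t() / temperature   # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy: each image should match its own IDS and vice versa.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```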
arXiv Detail & Related papers (2023-09-03T05:33:16Z)
- Chinese Spelling Correction as Rephrasing Language Model [63.65217759957206]
We study Chinese Spelling Correction (CSC), which aims to detect and correct potential spelling errors in a given sentence.
Current state-of-the-art methods regard CSC as a sequence tagging task and fine-tune BERT-based models on sentence pairs.
We propose Rephrasing Language Model (ReLM), where the model is trained to rephrase the entire sentence by infilling additional slots, instead of character-to-character tagging.
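A hedged sketch of the rephrasing idea: instead of tagging each character, append one mask slot per input character and let a masked language model rewrite the whole sentence into those slots. The model choice (`bert-base-chinese`) and prompt layout are assumptions; ReLM fine-tunes the model so the slots reproduce the corrected sentence, which a vanilla MLM will only approximate.

```python
# Illustrative rephrasing-as-infilling setup (not the official ReLM code).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tok = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertForMaskedLM.from_pretrained("bert-base-chinese")

def rephrase(sentence: str) -> str:
    # Segment A: the (possibly misspelled) sentence.
    # Segment B: one [MASK] slot per character, to be filled with the rewrite.
    slots = " ".join([tok.mask_token] * len(sentence))
    inputs = tok(sentence, slots, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                     # (1, seq_len, vocab)
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    pred_ids = logits[0, mask_pos].argmax(-1)
    return "".join(tok.convert_ids_to_tokens(pred_ids.tolist()))
```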
arXiv Detail & Related papers (2023-08-17T06:04:28Z)
- CSCD-NS: a Chinese Spelling Check Dataset for Native Speakers [62.61866477815883]
We present CSCD-NS, the first Chinese spelling check dataset designed for native speakers.
Compared with existing datasets, CSCD-NS is ten times larger in scale and exhibits a distinct error distribution.
We propose a novel method that simulates the input process through an input method.
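One way to picture input-method simulation is to replace characters with same-pinyin candidates, mimicking the substitutions a pinyin input method can introduce. The sketch below is only illustrative; the tiny homophone table is a stand-in for a real pinyin-to-character dictionary and is not the paper's generation procedure.

```python
# Rough sketch of generating pseudo errors by simulating pinyin input
# (not the paper's actual input-method simulation).
import random

HOMOPHONES = {            # assumed toy mapping: pinyin -> candidate characters
    "zai": ["在", "再"],
    "de":  ["的", "得", "地"],
}
PINYIN = {"在": "zai", "再": "zai", "的": "de", "得": "de", "地": "de"}

def simulate_input_errors(sentence: str, p: float = 0.1) -> str:
    """Replace some characters with same-pinyin alternatives."""
    out = []
    for ch in sentence:
        candidates = [c for c in HOMOPHONES.get(PINYIN.get(ch, ""), []) if c != ch]
        if candidates and random.random() < p:
            out.append(random.choice(candidates))
        else:
            out.append(ch)
    return "".join(out)
```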
arXiv Detail & Related papers (2022-11-16T09:25:42Z)
- Error-Robust Retrieval for Chinese Spelling Check [43.56073620728942]
Chinese Spelling Check (CSC) aims to detect and correct error tokens in Chinese contexts.
Previous methods may not fully leverage the existing datasets.
We introduce our plug-and-play retrieval method with error-robust information for Chinese Spelling Check.
arXiv Detail & Related papers (2022-11-15T01:55:34Z)
- Improving Chinese Spelling Check by Character Pronunciation Prediction: The Effects of Adaptivity and Granularity [76.20568599642799]
Chinese spelling check (CSC) is a fundamental NLP task that detects and corrects spelling errors in Chinese texts.
In this paper, we consider introducing an auxiliary task of Chinese pronunciation prediction (CPP) to improve CSC.
We propose SCOPE, which builds two parallel decoders on top of a shared encoder: one for the primary CSC task and the other for a fine-grained auxiliary CPP task.
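A minimal sketch of the shared-encoder, two-decoder layout described above: one head predicts corrected characters (CSC) and the other predicts each character's pronunciation (CPP). Dimensions, the pronunciation inventory, and the loss weighting are assumptions, not SCOPE's exact architecture.

```python
# Sketch of a shared-encoder / two-decoder multi-task model in the spirit of
# SCOPE (illustrative; not the paper's exact architecture).
import torch
import torch.nn as nn

class SharedEncoderCSC(nn.Module):
    def __init__(self, vocab_size: int, num_prons: int, dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.csc_head = nn.Linear(dim, vocab_size)   # primary: corrected characters
        self.cpp_head = nn.Linear(dim, num_prons)    # auxiliary: pronunciations

    def forward(self, input_ids: torch.Tensor):
        hidden = self.encoder(self.embed(input_ids))
        return self.csc_head(hidden), self.cpp_head(hidden)

# Training combines both objectives, e.g.
#   loss = csc_loss + lambda_aux * cpp_loss   # lambda_aux is an assumed weight
```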
arXiv Detail & Related papers (2022-10-20T03:42:35Z)
- Learning from the Dictionary: Heterogeneous Knowledge Guided Fine-tuning for Chinese Spell Checking [32.16787396943434]
Chinese Spell Checking (CSC) aims to detect and correct Chinese spelling errors.
Recent research starts from the pretrained knowledge of language models and incorporates multimodal information into CSC models to improve performance.
We propose the LEAD framework, which guides the CSC model to learn heterogeneous knowledge from the dictionary in terms of phonetics, vision, and meaning.
arXiv Detail & Related papers (2022-10-19T06:31:34Z)
- Exploration and Exploitation: Two Ways to Improve Chinese Spelling Correction Models [51.744357472072416]
We propose a method that continually identifies the weak spots of a model to generate more valuable training instances.
Experimental results show that such an adversarial training method combined with the pretraining strategy can improve both the generalization and robustness of multiple CSC models.
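The weak-spot mining loop might look roughly like the sketch below: evaluate the current model, collect failures, generate new instances around them, and continue training. `model`, `corrupt_like`, and `train_step` are hypothetical placeholders, and the loop is only a schematic reading of the abstract.

```python
# Illustrative weak-spot mining loop (not the paper's exact procedure).
# `model`, `corrupt_like`, and `train_step` are hypothetical placeholders.

def mine_and_train(model, train_pairs, dev_pairs, rounds: int = 3):
    for _ in range(rounds):
        # 1. Find the model's current weak spots on held-out data.
        weak = [(src, tgt) for src, tgt in dev_pairs
                if model.correct(src) != tgt]
        # 2. Generate more training instances that resemble those failures,
        #    e.g. by re-corrupting the gold sentence in a similar way.
        new_pairs = [corrupt_like(src, tgt) for src, tgt in weak]
        # 3. Continue training on the original data plus the mined instances.
        train_step(model, train_pairs + new_pairs)
    return model
```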
arXiv Detail & Related papers (2021-05-31T09:17:33Z)