Domain-shift Conditioning using Adaptable Filtering via Hierarchical
Embeddings for Robust Chinese Spell Check
- URL: http://arxiv.org/abs/2008.12281v3
- Date: Sat, 22 May 2021 04:22:39 GMT
- Title: Domain-shift Conditioning using Adaptable Filtering via Hierarchical
Embeddings for Robust Chinese Spell Check
- Authors: Minh Nguyen, Gia H. Ngo, Nancy F. Chen
- Abstract summary: Spell check is a useful application which processes noisy human-generated text.
For Chinese spell check, filtering using confusion sets narrows the search space and makes finding corrections easier.
We propose a scalable adaptable filter that exploits hierarchical character embeddings to obviate the need to handcraft confusion sets.
- Score: 29.041134293160255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spell check is a useful application which processes noisy human-generated
text. Spell check for Chinese poses unresolved problems due to the large number
of characters, the sparse distribution of errors, and the dearth of resources
with sufficient coverage of heterogeneous and shifting error domains. For
Chinese spell check, filtering using confusion sets narrows the search space
and makes finding corrections easier. However, most, if not all, confusion sets
used to date are fixed and thus do not include new, shifting error domains. We
propose a scalable adaptable filter that exploits hierarchical character
embeddings to (1) obviate the need to handcraft confusion sets, and (2) resolve
sparsity problems related to infrequent errors. Our approach compares favorably
with competitive baselines and obtains SOTA results on the 2014 and 2015
Chinese Spelling Check Bake-off datasets.
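The confusion-set filtering idea the abstract builds on can be sketched in a few lines. The toy confusion set and scoring function below are illustrative stand-ins only; the paper's contribution is replacing the handcrafted set with an adaptable filter learned from hierarchical character embeddings.

```python
# Toy sketch of confusion-set filtering for Chinese spell check.
# The confusion set and scorer are hypothetical placeholders, not
# the paper's learned filter.

# Handcrafted confusion set: character -> visually/phonetically
# similar characters that are plausible corrections.
CONFUSION = {
    "帐": ["账", "张"],
    "在": ["再"],
}

def score(sentence: str) -> float:
    """Placeholder fluency score (a real system would use a language
    model); here we simply prefer the gold string."""
    return 1.0 if sentence == "记账" else 0.0

def correct(sentence: str) -> str:
    """At each position, consider only candidates from the confusion
    set, which narrows the search space versus trying all ~10k
    Chinese characters."""
    best = sentence
    for i, ch in enumerate(sentence):
        for cand in CONFUSION.get(ch, []):
            candidate = sentence[:i] + cand + sentence[i + 1:]
            if score(candidate) > score(best):
                best = candidate
    return best

print(correct("记帐"))  # with this toy scorer: 记账
```

Because a fixed dictionary like `CONFUSION` cannot cover new, shifting error domains, the paper derives candidates from embeddings instead.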
Related papers
- A Coin Has Two Sides: A Novel Detector-Corrector Framework for Chinese Spelling Correction [79.52464132360618]
Chinese Spelling Correction (CSC) stands as a foundational Natural Language Processing (NLP) task.
We introduce a novel approach based on error detector-corrector framework.
Our detector is designed to yield two error detection results, each characterized by high precision and recall.
arXiv Detail & Related papers (2024-09-06T09:26:45Z)
- C-LLM: Learn to Check Chinese Spelling Errors Character by Character [61.53865964535705]
We propose C-LLM, a Large Language Model-based Chinese Spell Checking method that learns to check errors Character by Character.
C-LLM achieves an average improvement of 10% over existing methods.
arXiv Detail & Related papers (2024-06-24T11:16:31Z)
- Understanding and Mitigating Classification Errors Through Interpretable Token Patterns [58.91023283103762]
Characterizing errors in easily interpretable terms gives insight into whether a classifier is prone to making systematic errors.
We propose to discover those patterns of tokens that distinguish correct and erroneous predictions.
We show that our method, Premise, performs well in practice.
arXiv Detail & Related papers (2023-11-18T00:24:26Z)
- Chinese Spelling Correction as Rephrasing Language Model [63.65217759957206]
We study Chinese Spelling Correction (CSC), which aims to detect and correct the potential spelling errors in a given sentence.
Current state-of-the-art methods regard CSC as a sequence tagging task and fine-tune BERT-based models on sentence pairs.
We propose Rephrasing Language Model (ReLM), where the model is trained to rephrase the entire sentence by infilling additional slots, instead of character-to-character tagging.
arXiv Detail & Related papers (2023-08-17T06:04:28Z)
- Error-Robust Retrieval for Chinese Spelling Check [43.56073620728942]
Chinese Spelling Check (CSC) aims to detect and correct error tokens in Chinese contexts.
Previous methods may not fully leverage the existing datasets.
We introduce our plug-and-play retrieval method with error-robust information for Chinese Spelling Check.
arXiv Detail & Related papers (2022-11-15T01:55:34Z)
- Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction [49.25830718574892]
We present a new framework named Tail-to-Tail (TtT) non-autoregressive sequence prediction.
Most tokens are correct and can be conveyed directly from source to target, while the error positions can be estimated and corrected.
Experimental results on standard datasets, especially on the variable-length datasets, demonstrate the effectiveness of TtT in terms of sentence-level Accuracy, Precision, Recall, and F1-Measure.
arXiv Detail & Related papers (2021-06-03T05:56:57Z)
- An Alignment-Agnostic Model for Chinese Text Error Correction [17.429266115653007]
This paper investigates how to correct Chinese text errors involving mistaken, missing, and redundant characters.
Most existing models can correct mistaken-character errors, but they cannot deal with missing or redundant characters.
We propose a novel detect-correct framework which is alignment-agnostic, meaning that it can handle both text aligned and non-aligned occasions.
arXiv Detail & Related papers (2021-04-15T01:17:34Z)
- Decoding Time Lexical Domain Adaptation for Neural Machine Translation [7.628949147902029]
Machine translation systems are vulnerable to domain mismatch, especially when the task is low-resource.
We present two simple methods for improving translation quality in this particular setting.
arXiv Detail & Related papers (2021-01-02T11:06:15Z)
- Tokenization Repair in the Presence of Spelling Errors [0.2964978357715083]
Spelling errors may be present in the input, but correcting them is not part of the tokenization repair problem.
We identify three key ingredients of high-quality tokenization repair.
arXiv Detail & Related papers (2020-10-15T16:55:45Z)
- Improving the Efficiency of Grammatical Error Correction with Erroneous Span Detection and Correction [106.63733511672721]
We propose a novel language-independent approach to improve the efficiency for Grammatical Error Correction (GEC) by dividing the task into two subtasks: Erroneous Span Detection (ESD) and Erroneous Span Correction (ESC).
ESD identifies grammatically incorrect text spans with an efficient sequence tagging model. ESC leverages a seq2seq model to take the sentence with annotated erroneous spans as input and only outputs the corrected text for these spans.
Experiments show our approach performs comparably to conventional seq2seq approaches in both English and Chinese GEC benchmarks with less than 50% time cost for inference.
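The two-stage split described above can be sketched as follows. Both components here are toy stand-ins (a small error lexicon and a lookup table) for the paper's sequence-tagging and seq2seq models; the names and examples are hypothetical.

```python
# Minimal sketch of the ESD/ESC idea: a cheap detector marks
# erroneous spans, and a corrector rewrites only those spans
# instead of regenerating the whole sentence.

def detect_spans(tokens):
    """Toy 'tagger': flag tokens found in a small error lexicon.
    Returns (start, end) index pairs of erroneous spans."""
    errors = {"teh", "recieve"}
    return [(i, i + 1) for i, t in enumerate(tokens) if t in errors]

def correct_span(tokens):
    """Toy 'seq2seq corrector': dictionary lookup per flagged span."""
    fixes = {"teh": "the", "recieve": "receive"}
    return [fixes.get(t, t) for t in tokens]

def gec(sentence: str) -> str:
    """Detect spans, then correct only the flagged spans."""
    tokens = sentence.split()
    for start, end in detect_spans(tokens):
        tokens[start:end] = correct_span(tokens[start:end])
    return " ".join(tokens)

print(gec("I will recieve teh parcel"))
# -> "I will receive the parcel"
```

The efficiency gain in the paper comes from the same structure: most of the sentence bypasses the expensive corrector entirely.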
arXiv Detail & Related papers (2020-10-07T08:29:11Z)
- Spelling Error Correction with Soft-Masked BERT [11.122964733563117]
A state-of-the-art method for the task selects a character from a list of candidates for correction at each position of the sentence on the basis of BERT.
The accuracy of the method can be sub-optimal because BERT does not have sufficient capability to detect whether there is an error at each position.
We propose a novel neural architecture to address the aforementioned issue, which consists of a network for error detection and a network for error correction based on BERT.
arXiv Detail & Related papers (2020-05-15T09:02:38Z)
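The detect-then-correct split in the Soft-Masked BERT summary can be sketched like this. The detector and corrector are toy stand-ins (a lexicon with fixed probabilities and a lookup table), not the paper's neural networks; the names are hypothetical.

```python
# Sketch of detection + correction: one component scores each
# position's error probability, and the corrector is only applied
# where that probability exceeds a threshold.

def detector(tokens):
    """Toy detector: high error probability for known misspellings,
    low probability everywhere else."""
    known_errors = {"grammer": 0.95, "speling": 0.9}
    return [known_errors.get(t, 0.05) for t in tokens]

def corrector(token):
    """Toy corrector: best candidate for a flagged token."""
    return {"grammer": "grammar", "speling": "spelling"}.get(token, token)

def soft_correct(sentence: str, threshold: float = 0.5) -> str:
    """Correct only the positions the detector flags."""
    tokens = sentence.split()
    probs = detector(tokens)
    return " ".join(
        corrector(t) if p > threshold else t
        for t, p in zip(tokens, probs)
    )

print(soft_correct("check your speling and grammer"))
# -> "check your spelling and grammar"
```

The paper's point is that BERT alone under-detects errors, so it couples the corrector to an explicit detection network rather than a hard threshold like the one above.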
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.