Recent Trends in the Use of Deep Learning Models for Grammar Error Handling
- URL: http://arxiv.org/abs/2009.02358v1
- Date: Fri, 4 Sep 2020 18:50:13 GMT
- Title: Recent Trends in the Use of Deep Learning Models for Grammar Error Handling
- Authors: Mina Naghshnejad, Tarun Joshi, and Vijayan N. Nair
- Abstract summary: Grammar error handling (GEH) is an important topic in natural language processing (NLP).
Recent advances in computation systems have promoted the use of deep learning (DL) models for NLP problems such as GEH.
- Score: 6.88204255655161
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grammar error handling (GEH) is an important topic in natural language
processing (NLP). GEH includes both grammar error detection and grammar error
correction. Recent advances in computation systems have promoted the use of
deep learning (DL) models for NLP problems such as GEH. In this survey we focus
on two main DL approaches for GEH: neural machine translation models and editor
models. We describe the three main stages of the pipeline for these models:
data preparation, training, and inference. Additionally, we discuss different
techniques to improve the performance of these models at each stage of the
pipeline. We compare the performance of different models and conclude with
proposed future directions.
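To make the neural-machine-translation framing concrete, below is a minimal inference sketch that treats GEC as monolingual translation from ungrammatical to grammatical text. The checkpoint name is a placeholder rather than a model from the survey; editor models would instead predict token-level edit operations over the source sentence.

```python
# Minimal sketch of GEC framed as monolingual machine translation.
# Assumption: "your-org/seq2seq-gec" stands in for any encoder-decoder
# checkpoint fine-tuned on (ungrammatical, corrected) sentence pairs.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL_NAME = "your-org/seq2seq-gec"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def correct(sentence: str) -> str:
    """Decode a corrected sentence from an ungrammatical input."""
    inputs = tokenizer(sentence, return_tensors="pt")
    # Beam search is a common inference-stage choice for GEC-as-NMT.
    outputs = model.generate(**inputs, num_beams=5, max_new_tokens=64)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(correct("She go to school every days ."))
```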
Related papers
- Predictor-Corrector Enhanced Transformers with Exponential Moving Average Coefficient Learning [73.73967342609603]
We introduce a predictor-corrector learning framework to minimize truncation errors.
We also propose an exponential moving average-based coefficient learning method to strengthen our higher-order predictor.
Our model surpasses a robust 3.8B DeepNet by an average of 2.9 SacreBLEU, using only 1/3 of the parameters.
arXiv Detail & Related papers (2024-11-05T12:26:25Z)
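For background on the entry above: the exponential moving average is the standard update sketched below. How the paper couples such smoothed coefficients to its higher-order predictor is not described in the summary, so this is generic context, not the paper's method.

```python
def ema_update(running: float, observed: float, decay: float = 0.99) -> float:
    """Standard exponential moving average: blend a new estimate into a running one."""
    return decay * running + (1.0 - decay) * observed

# Smooth a stream of per-step coefficient estimates (values illustrative).
coef = 1.0
for estimate in [0.8, 0.9, 1.1, 1.0]:
    coef = ema_update(coef, estimate)
print(coef)
```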
- Combining Denoising Autoencoders with Contrastive Learning to fine-tune Transformer Models [0.0]
This work proposes a three-phase technique for adapting a base model to a classification task.
We adapt the model to the target data distribution by performing further training with a Denoising Autoencoder (DAE).
In addition, we introduce a new data augmentation approach for Supervised Contrastive Learning to address unbalanced datasets.
arXiv Detail & Related papers (2024-05-23T11:08:35Z)
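A minimal sketch of the denoising-autoencoder idea in the entry above: corrupt the input and train the model to reconstruct the clean text. The 15% token-masking noise scheme is an assumption, not the paper's recipe.

```python
import random

def corrupt(tokens: list[str], drop_prob: float = 0.15,
            mask_token: str = "[MASK]") -> list[str]:
    """Inject masking noise; the DAE objective is to reconstruct the clean input."""
    return [mask_token if random.random() < drop_prob else t for t in tokens]

print(corrupt("the model adapts to the target data distribution".split()))
```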
- Does Correction Remain A Problem For Large Language Models? [63.24433996856764]
This paper investigates the role of correction in the context of large language models by conducting two experiments.
The first experiment focuses on correction as a standalone task, employing few-shot learning techniques with GPT-like models for error correction.
The second experiment explores the notion of correction as a preparatory task for other NLP tasks, examining whether large language models can tolerate and perform adequately on texts containing certain levels of noise or errors.
arXiv Detail & Related papers (2023-08-03T14:09:31Z)
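The first experiment's few-shot setup can be illustrated with a prompt like the one below; the demonstrations and format are illustrative, not taken from the paper.

```python
# Few-shot error-correction prompt for a GPT-like model (illustrative).
FEW_SHOT_PROMPT = """Correct the grammar of each sentence.

Input: He go to work by bus.
Output: He goes to work by bus.

Input: They was happy with the result.
Output: They were happy with the result.

Input: {sentence}
Output:"""

def build_prompt(sentence: str) -> str:
    """Fill the few-shot template with the sentence to be corrected."""
    return FEW_SHOT_PROMPT.format(sentence=sentence)

print(build_prompt("I has two cat."))
```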
- Should We Attend More or Less? Modulating Attention for Fairness [11.91250446389124]
We study the role of attention, a widely used technique in current state-of-the-art NLP models, in the propagation of social biases.
We propose a novel method for modulating attention weights to improve model fairness after training.
Our results show an increase in fairness and minimal performance loss on different text classification and generation tasks.
arXiv Detail & Related papers (2023-05-22T14:54:21Z)
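The summary above does not spell out the modulation rule; one simple post-hoc way to modulate attention weights is the temperature-style re-weighting below, which should be read as an assumption rather than the paper's method.

```python
import torch

def modulate_attention(weights: torch.Tensor, alpha: float) -> torch.Tensor:
    """Flatten (alpha < 1) or sharpen (alpha > 1) an attention distribution."""
    scaled = weights.pow(alpha)
    return scaled / scaled.sum(dim=-1, keepdim=True)

attn = torch.tensor([[0.7, 0.2, 0.1]])
print(modulate_attention(attn, 0.5))  # flatter distribution over tokens
```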
- An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks [112.1942546460814]
We report the first exploration of the prompt tuning paradigm for speech processing tasks based on the Generative Spoken Language Model (GSLM).
Experimental results show that the prompt tuning technique achieves competitive performance on speech classification tasks with fewer trainable parameters than fine-tuning specialized downstream models.
arXiv Detail & Related papers (2022-03-31T03:26:55Z)
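Prompt tuning in the generic sense used above: a small set of trainable prompt vectors is prepended to the frozen model's input embeddings, so only prompt_len x dim parameters are updated. This sketch assumes a model that accepts embeddings directly; GSLM-specific details are omitted.

```python
import torch
import torch.nn as nn

class PromptTuner(nn.Module):
    """Prepend trainable prompt vectors to a frozen model's input embeddings."""

    def __init__(self, frozen_model: nn.Module, prompt_len: int, dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, dim) * 0.02)
        self.model = frozen_model
        for p in self.model.parameters():  # only the prompt stays trainable
            p.requires_grad = False

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        batch = input_embeds.size(0)
        prompts = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return self.model(torch.cat([prompts, input_embeds], dim=1))
```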
- Type-Driven Multi-Turn Corrections for Grammatical Error Correction [46.34114495164071]
Grammatical Error Correction (GEC) aims to automatically detect and correct grammatical errors.
Previous studies mainly focus on data augmentation approaches to combat exposure bias.
We propose a Type-Driven Multi-Turn Corrections approach for GEC.
arXiv Detail & Related papers (2022-03-17T07:30:05Z)
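The multi-turn idea above can be reduced to the loop below: re-run a corrector on its own output until it stops changing. The paper's type-driven ordering of error categories is not reproduced here.

```python
def multi_turn_correct(sentence: str, correct_fn, max_turns: int = 3) -> str:
    """Apply a single-turn corrector repeatedly until the output is stable."""
    for _ in range(max_turns):
        corrected = correct_fn(sentence)
        if corrected == sentence:  # converged; no further edits proposed
            break
        sentence = corrected
    return sentence
```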
- Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey [67.82942975834924]
Large, pre-trained language models such as BERT have drastically changed the Natural Language Processing (NLP) field.
We present a survey of recent work that uses these large language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches.
arXiv Detail & Related papers (2021-11-01T20:08:05Z)
- Layer-wise Analysis of a Self-supervised Speech Representation Model [26.727775920272205]
Self-supervised learning approaches have been successful for pre-training speech representation models.
However, little has been studied about the type or extent of information encoded in the pre-trained representations themselves.
arXiv Detail & Related papers (2021-07-10T02:13:25Z)
- Unsupervised Paraphrasing with Pretrained Language Models [85.03373221588707]
We propose a training pipeline that enables pre-trained language models to generate high-quality paraphrases in an unsupervised setting.
Our recipe consists of task-adaptation, self-supervision, and a novel decoding algorithm named Dynamic Blocking.
We show with automatic and human evaluations that our approach achieves state-of-the-art performance on both the Quora Question Pair and the ParaNMT datasets.
arXiv Detail & Related papers (2020-10-24T11:55:28Z)
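A much-simplified reading of the Dynamic Blocking decoding algorithm named above, offered as an assumption: when the decoder emits a token that also occurs in the source, the token that follows it in the source is probabilistically blocked at the next step, steering generation away from verbatim copying.

```python
import random

def blocked_tokens(source: list[str], generated: list[str],
                   block_prob: float = 0.5) -> set[str]:
    """Tokens to suppress at the next decoding step (simplified sketch)."""
    blocked: set[str] = set()
    if not generated:
        return blocked
    last = generated[-1]
    for i, tok in enumerate(source[:-1]):
        if tok == last and random.random() < block_prob:
            blocked.add(source[i + 1])  # block the source's next token
    return blocked

print(blocked_tokens("the cat sat on the mat".split(), ["the"]))
```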
- Data Augmentation for Spoken Language Understanding via Pretrained Language Models [113.56329266325902]
Training of spoken language understanding (SLU) models often faces the problem of data scarcity.
We put forward a data augmentation method using pretrained language models to boost the variability and accuracy of generated utterances.
arXiv Detail & Related papers (2020-04-29T04:07:12Z)
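A minimal sketch of LM-based utterance augmentation of the kind the entry above describes: sample continuations of a seed utterance from a pretrained language model. The GPT-2 checkpoint and prompt format are stand-ins, not the paper's setup.

```python
# Sample new utterances from a pretrained LM for data augmentation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

seed = "book a flight from boston to seattle\n"
inputs = tok(seed, return_tensors="pt")
samples = lm.generate(**inputs, do_sample=True, top_p=0.9,
                      max_new_tokens=20, num_return_sequences=3,
                      pad_token_id=tok.eos_token_id)
for s in samples:
    print(tok.decode(s, skip_special_tokens=True))
```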
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.