Using GPT-2 to Create Synthetic Data to Improve the Prediction
Performance of NLP Machine Learning Classification Models
- URL: http://arxiv.org/abs/2104.10658v1
- Date: Fri, 2 Apr 2021 20:20:42 GMT
- Title: Using GPT-2 to Create Synthetic Data to Improve the Prediction
Performance of NLP Machine Learning Classification Models
- Authors: Dewayne Whitfield
- Abstract summary: It is becoming common practice to utilize synthetic data to boost the performance of Machine Learning Models.
I used a Yelp pizza restaurant reviews dataset and transfer learning to fine-tune a pre-trained GPT-2 Transformer Model to generate synthetic pizza reviews data.
I then combined this synthetic data with the original genuine data to create a new joint dataset.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Classification Models use input data to predict the likelihood that
subsequent input data will fall into predetermined categories. To perform
effective classifications, these models require large datasets for training. It
is becoming common practice to utilize synthetic data to boost the performance
of Machine Learning Models. Shell, for example, is reported to be using
synthetic data to build models that detect rarely occurring problems, such as
deteriorating oil lines. It is common
practice for Machine Learning Practitioners to generate synthetic data by
rotating, flipping, and cropping images to increase the volume of image data to
train Convolutional Neural Networks. The purpose of this paper is to explore
creating and utilizing synthetic NLP data to improve the performance of Natural
Language Processing Machine Learning Classification Models. In this paper I
used a Yelp pizza restaurant reviews dataset and transfer learning to fine-tune
a pre-trained GPT-2 Transformer Model to generate synthetic pizza reviews data.
I then combined this synthetic data with the original genuine data to create a
new joint dataset. The model trained on the combined dataset significantly
outperformed the model trained on the original data in both accuracy and precision.
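
The paper itself does not include code, but the pipeline it describes (fine-tune GPT-2 on Yelp pizza reviews, sample synthetic reviews, and train a classifier on the combined data) maps naturally onto the Hugging Face transformers API. The sketch below is one plausible rendering of that pipeline; the file name, hyperparameters, prompt, and the TF-IDF/logistic-regression classifier are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the described pipeline, assuming the Hugging Face
# transformers and scikit-learn libraries. File name, hyperparameters,
# prompt, and downstream classifier are assumptions, not paper details.
from transformers import (GPT2LMHeadModel, GPT2TokenizerFast, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# 1) Fine-tune GPT-2 on the genuine reviews (assumed: one review per line).
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="yelp_pizza_reviews.txt",  # assumed file
                            block_size=128)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-pizza", num_train_epochs=3,
                           per_device_train_batch_size=8),
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                                  mlm=False),
    train_dataset=train_dataset)
trainer.train()

# 2) Sample synthetic reviews from the fine-tuned model.
model.eval()
inputs = tokenizer("The pizza", return_tensors="pt")  # assumed prompt
outputs = model.generate(**inputs, do_sample=True, top_k=50, top_p=0.95,
                         max_length=100, num_return_sequences=20,
                         pad_token_id=tokenizer.eos_token_id)
synthetic = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

# 3) Combine synthetic and genuine reviews and train a classifier. TF-IDF +
# logistic regression stands in here; the paper's exact classifier may differ.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

genuine = ["Great pizza, fast service.", "Cold and soggy crust."]  # placeholder
genuine_labels = [1, 0]                                            # placeholder
synthetic_labels = [1] * len(synthetic)  # assumption: labels come from how the
                                         # generation prompt was conditioned
vec = TfidfVectorizer()
features = vec.fit_transform(genuine + synthetic)
clf = LogisticRegression(max_iter=1000).fit(features,
                                            genuine_labels + synthetic_labels)
```

The comparison reported in the abstract would then be between this classifier and an otherwise identical one trained on the genuine reviews alone.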
Related papers
- Little Giants: Synthesizing High-Quality Embedding Data at Scale [71.352883755806]
We introduce SPEED, a framework that aligns open-source small models to efficiently generate large-scale embedding data.
SPEED uses less than 1/10 of the GPT API calls yet outperforms the state-of-the-art embedding model E5_mistral when both are trained solely on their synthetic data.
arXiv Detail & Related papers (2024-10-24T10:47:30Z) - Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
arXiv Detail & Related papers (2024-10-22T06:43:28Z) - Machine Unlearning using a Multi-GAN based Model [0.0]
This article presents a new machine unlearning approach that utilizes multiple Generative Adversarial Network (GAN) based models.
The proposed method comprises two phases: i) data reorganization, in which synthetic data generated by the GAN is introduced with inverted class labels for the forget dataset, and ii) fine-tuning of the pre-trained model.
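
A hedged sketch of how phase ii might look in code: fine-tune the pre-trained classifier on generator output that has been assigned labels other than the forget class. The generator, classifier, and label-inversion rule below are toy stand-ins inferred from the summary, not the paper's actual models.

```python
# Hedged sketch of phase ii (inferred from the summary): fine-tune on synthetic
# samples whose labels are inverted away from the forget class.
import torch
import torch.nn as nn

num_classes, latent_dim, forget_class = 10, 64, 3

generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 784))           # assumed: GAN trained
                                                         # on forget-class data
classifier = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                           nn.Linear(128, num_classes))  # assumed: pre-trained

opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    z = torch.randn(32, latent_dim)
    fake = generator(z).detach()        # synthetic stand-ins for forget data
    # "Inverted" labels: any class except the forget class, so gradients push
    # the classifier away from its learned association with that class.
    labels = torch.randint(0, num_classes - 1, (32,))
    labels[labels >= forget_class] += 1
    opt.zero_grad()
    loss = loss_fn(classifier(fake), labels)
    loss.backward()
    opt.step()
```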
arXiv Detail & Related papers (2024-07-26T02:28:32Z) - Learning Defect Prediction from Unrealistic Data [57.53586547895278]
Pretrained models of code have become popular choices for code understanding and generation tasks.
Such models tend to be large and require commensurate volumes of training data.
It has become popular to train models with far larger but less realistic datasets, such as functions with artificially injected bugs.
Models trained on such data tend to only perform well on similar data, while underperforming on real world programs.
arXiv Detail & Related papers (2023-11-02T01:51:43Z) - Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large
Language Models by Extrapolating Errors from Small Models [69.76066070227452]
*Data Synthesis* is a promising way to train a small model with very little labeled data.
We propose *Synthesis Step by Step* (**S3**), a data synthesis framework that shrinks this distribution gap.
Our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data.
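Combining the summary with the title, one plausible control flow is: synthesize seed data with the large model, train the small model, collect its errors, and condition the next round of synthesis on those errors. The skeleton below guesses at that loop; synthesize() and train() are hypothetical stubs, not the S3 API.

```python
# Skeleton of an error-driven synthesis loop, inferred from the title and
# summary only. synthesize() and train() are hypothetical stubs.
def synthesize(llm, instruction, error_examples=None):
    """Ask the LLM for labeled (x, y) examples, optionally conditioned on
    examples the small model previously got wrong."""
    raise NotImplementedError  # hypothetical LLM call

def train(small_model, dataset):
    """Fit the small model on the synthetic dataset and return it."""
    raise NotImplementedError  # hypothetical training routine

def s3_loop(llm, small_model, instruction, validation_set, rounds=3):
    dataset = synthesize(llm, instruction)               # seed synthetic data
    for _ in range(rounds):
        small_model = train(small_model, dataset)
        # Find validation examples the small model misclassifies...
        errors = [(x, y) for x, y in validation_set
                  if small_model.predict(x) != y]
        if not errors:
            break
        # ...and extrapolate: request more data resembling the errors,
        # shrinking the gap between the synthetic and real distributions.
        dataset += synthesize(llm, instruction, error_examples=errors)
    return small_model
```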
arXiv Detail & Related papers (2023-10-20T17:14:25Z) - Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
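One hedged reading of the summary: train several independently seeded generative models, fit a downstream model on each one's synthetic data, and average the predictions, so that downstream uncertainty reflects uncertainty over the generative process. The sketch below uses a per-class Gaussian mixture as a toy stand-in for the paper's deep generative models.

```python
# Toy deep-generative-ensemble sketch, under the assumption that DGE trains K
# independently seeded generators and aggregates downstream predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

K = 5
ensemble = []
for seed in range(K):
    # One generative model per member; a per-class Gaussian mixture stands in
    # for the deep generative model used in the paper.
    synth_X, synth_y = [], []
    for c in np.unique(y):
        gm = GaussianMixture(n_components=3, random_state=seed).fit(X[y == c])
        samples, _ = gm.sample(200)
        synth_X.append(samples)
        synth_y.append(np.full(200, c))
    clf = LogisticRegression(max_iter=1000).fit(np.vstack(synth_X),
                                                np.concatenate(synth_y))
    ensemble.append(clf)

def ensemble_predict_proba(x):
    # Averaging over members approximates integrating over uncertainty in the
    # generative process, rather than trusting a single synthetic dataset.
    return np.mean([m.predict_proba(x) for m in ensemble], axis=0)
```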
arXiv Detail & Related papers (2023-05-16T07:30:29Z) - A New Benchmark: On the Utility of Synthetic Data with Blender for Bare
Supervised Learning and Downstream Domain Adaptation [42.2398858786125]
Deep learning in computer vision has achieved great success with the price of large-scale labeled training data.
The uncontrollable data collection process produces non-IID training and test data, where undesired duplication may exist.
To circumvent them, an alternative is to generate synthetic data via 3D rendering with domain randomization.
arXiv Detail & Related papers (2023-03-16T09:03:52Z) - Is synthetic data from generative models ready for image recognition? [69.42645602062024]
We study whether and how synthetic images generated from state-of-the-art text-to-image generation models can be used for image recognition tasks.
We showcase the power and shortcomings of synthetic data from existing generative models, and propose strategies for better applying synthetic data to recognition tasks.
arXiv Detail & Related papers (2022-10-14T06:54:24Z) - FedSynth: Gradient Compression via Synthetic Data in Federated Learning [14.87215762562876]
We propose a new scheme for upstream communication where instead of transmitting the model update, each client learns and transmits a light-weight synthetic dataset.
We find our method is comparable to or better than random-masking baselines on all three common federated learning benchmark datasets.
arXiv Detail & Related papers (2022-04-04T06:47:20Z) - TUTOR: Training Neural Networks Using Decision Rules as Model Priors [4.0880509203447595]
Deep neural networks (DNNs) generally need large amounts of data and computational resources for training.
We propose the TUTOR framework to synthesize accurate DNN models with limited available data and reduced memory/computational requirements.
We show that, in comparison to fully connected DNNs, TUTOR on average reduces the need for data by 5.9x, improves accuracy by 3.4%, and reduces the number of parameters (FLOPs) by 4.7x (4.3x).
arXiv Detail & Related papers (2020-10-12T03:25:47Z)