Generating Faithful Synthetic Data with Large Language Models: A Case
Study in Computational Social Science
- URL: http://arxiv.org/abs/2305.15041v1
- Date: Wed, 24 May 2023 11:27:59 GMT
- Title: Generating Faithful Synthetic Data with Large Language Models: A Case
Study in Computational Social Science
- Authors: Veniamin Veselovsky, Manoel Horta Ribeiro, Akhil Arora, Martin
Josifoski, Ashton Anderson, Robert West
- Abstract summary: We tackle a pervasive problem in synthetic data generation: its generative distribution often differs from the distribution of real-world data researchers care about.
We study three strategies to increase the faithfulness of synthetic data: grounding, filtering, and taxonomy-based generation.
We conclude this paper with some recommendations on how to generate high(er)-fidelity synthetic data for specific tasks.
- Score: 13.854807858791652
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have democratized synthetic data generation,
which in turn has the potential to simplify and broaden a wide gamut of NLP
tasks. Here, we tackle a pervasive problem in synthetic data generation: its
generative distribution often differs from the distribution of real-world data
researchers care about (in other words, it is unfaithful). In a case study on
sarcasm detection, we study three strategies to increase the faithfulness of
synthetic data: grounding, filtering, and taxonomy-based generation. We
evaluate these strategies using the performance of classifiers trained with
generated synthetic data on real-world data. While all three strategies improve
the performance of classifiers, we find that grounding works best for the task
at hand. As synthetic data generation plays an ever-increasing role in NLP
research, we expect this work to be a stepping stone in improving its utility.
We conclude this paper with some recommendations on how to generate
high(er)-fidelity synthetic data for specific tasks.
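To make the three strategies concrete, below is a minimal Python sketch of what grounding, taxonomy-based generation, and filtering can look like for sarcasm data, together with the evaluation protocol the abstract describes (train a classifier on synthetic data, test on real data). This is an illustrative reconstruction, not the authors' released code: the model name, prompts, taxonomy categories, and the classifier-confidence filtering rule are assumptions.
```python
# Illustrative sketch only, not the paper's released code. Assumes an
# OpenAI-compatible chat client (openai >= 1.0) and scikit-learn; the model
# name, prompts, taxonomy, and filtering threshold are placeholder assumptions.
from openai import OpenAI
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

client = OpenAI()       # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4o-mini"   # placeholder model name


def complete(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return resp.choices[0].message.content.strip()


def generate_plain() -> str:
    # Unconstrained generation: often drifts away from the real-world distribution.
    return complete("Write a sarcastic social-media comment.")


def generate_grounded(real_example: str) -> str:
    # Grounding: condition on a real (unlabeled) example so the output mimics
    # its topic, register, and length.
    return complete(
        "Here is a real social-media comment:\n"
        f"{real_example}\n\n"
        "Write a new, sarcastic comment in a similar style and on a similar topic."
    )


def generate_from_taxonomy(category: str) -> str:
    # Taxonomy-based generation: iterate over a predefined taxonomy (e.g. of
    # sarcasm devices or topics) to cover the space more evenly.
    return complete(f"Write a sarcastic social-media comment that relies on {category}.")


def filter_synthetic(candidates: list[str], clf, threshold: float = 0.7) -> list[str]:
    # Filtering (one plausible variant): keep only candidates that a classifier
    # trained on a small real sample scores as plausibly sarcastic.
    return [c for c in candidates if clf.predict_proba([c])[0, 1] >= threshold]


def evaluate_on_real(synthetic_texts, synthetic_labels, real_texts, real_labels) -> float:
    # The paper's yardstick: train on synthetic data, test on real data.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(synthetic_texts, synthetic_labels)
    return clf.score(real_texts, real_labels)
```
Comparing evaluate_on_real scores across the three generation modes mirrors the comparison in the abstract, where grounding performed best for sarcasm detection.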
Related papers
- Exploring the Landscape for Generative Sequence Models for Specialized Data Synthesis [0.0]
This paper introduces a novel approach that leverages three generative models of varying complexity to synthesize Malicious Network Traffic.
Our approach transforms numerical data into text, re-framing data generation as a language modeling task.
Our method surpasses state-of-the-art generative models in producing high-fidelity synthetic data.
arXiv Detail & Related papers (2024-11-04T09:51:10Z)
- Generating Realistic Tabular Data with Large Language Models [49.03536886067729]
Large language models (LLMs) have been used for diverse tasks, but they do not capture the correct correlation between the features and the target variable.
We propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
Our experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks.
arXiv Detail & Related papers (2024-10-29T04:14:32Z)
- Little Giants: Synthesizing High-Quality Embedding Data at Scale [71.352883755806]
We introduce SPEED, a framework that aligns open-source small models to efficiently generate large-scale embedding data.
SPEED uses less than 1/10 of the GPT API calls yet outperforms the state-of-the-art embedding model E5_mistral when both are trained solely on their synthetic data.
arXiv Detail & Related papers (2024-10-24T10:47:30Z)
- Synthetic Oversampling: Theory and A Practical Approach Using LLMs to Address Data Imbalance [16.047084318753377]
Imbalanced data and spurious correlations are common challenges in machine learning and data science.
Oversampling, which artificially increases the number of instances in the underrepresented classes, has been widely adopted to tackle these challenges.
We introduce OPAL, a systematic oversampling approach that leverages the capabilities of large language models to generate high-quality synthetic data for minority groups.
arXiv Detail & Related papers (2024-06-05T21:24:26Z)
- When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI [18.641925577551557]
Generative artificial intelligence (AI) technologies and large models are producing realistic outputs across various domains, such as images, text, speech, and music.
To minimize training expenses, many algorithm developers use data created by the models themselves as a cost-effective training solution.
Not all synthetic data effectively improve model performance, necessitating a strategic balance in the use of real versus synthetic data to optimize outcomes.
arXiv Detail & Related papers (2024-05-15T13:50:23Z)
- Best Practices and Lessons Learned on Synthetic Data [83.63271573197026]
The success of AI models relies on the availability of large, diverse, and high-quality datasets.
Synthetic data has emerged as a promising solution by generating artificial data that mimics real-world patterns.
arXiv Detail & Related papers (2024-04-11T06:34:17Z)
- Reimagining Synthetic Tabular Data Generation through Data-Centric AI: A Comprehensive Benchmark [56.8042116967334]
Synthetic data serves as an alternative for training machine learning models.
However, ensuring that synthetic data mirrors the complex nuances of real-world data is a challenging task.
This paper explores the potential of integrating data-centric AI techniques to guide the synthetic data generation process.
arXiv Detail & Related papers (2023-10-25T20:32:02Z)
- Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models [69.76066070227452]
*Data Synthesis* is a promising way to train a small model with very little labeled data.
We propose *Synthesis Step by Step* (**S3**), a data synthesis framework that shrinks this distribution gap.
Our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data.
arXiv Detail & Related papers (2023-10-20T17:14:25Z)
- Synthetic Demographic Data Generation for Card Fraud Detection Using GANs [4.651915393462367]
We build a deep-learning Generative Adversarial Network (GAN), called DGGAN, for demographic data generation.
Our model generates samples during training, which we found important for overcoming class imbalance issues.
arXiv Detail & Related papers (2023-06-29T17:08:57Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)