SPARQ: Synthetic Problem Generation for Reasoning via Quality-Diversity Algorithms
- URL: http://arxiv.org/abs/2506.06499v2
- Date: Tue, 17 Jun 2025 12:55:30 GMT
- Title: SPARQ: Synthetic Problem Generation for Reasoning via Quality-Diversity Algorithms
- Authors: Alex Havrilla, Edward Hughes, Mikayel Samvelyan, Jacob Abernethy,
- Abstract summary: We present SPARQ: Synthetic Problem Generation for Reasoning via Quality-Diversity Algorithms. We generate high-quality and diverse synthetic math problem and solution pairs using only a single model. We show that filtering the generated data by difficulty and then fine-tuning the same model improves relative model performance by up to 24%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language model (LLM) driven synthetic data generation has emerged as a powerful method for improving model reasoning capabilities. However, most methods either distill large state-of-the-art models into small students or use natural ground-truth problem statements to guarantee problem statement quality. This limits the scalability of these approaches to more complex and diverse problem domains. To address this, we present SPARQ: Synthetic Problem Generation for Reasoning via Quality-Diversity Algorithms, a novel approach for generating high-quality and diverse synthetic math problem and solution pairs using only a single model by measuring a problem's solve-rate: a proxy for problem difficulty. Starting from a seed dataset of 7.5K samples, we generate over 20 million new problem-solution pairs. We show that filtering the generated data by difficulty and then fine-tuning the same model on the resulting data improves relative model performance by up to 24%. Additionally, we conduct ablations studying the impact of synthetic data quantity, quality, and diversity on model generalization. We find that higher quality, as measured by problem difficulty, facilitates better in-distribution performance. Further, while generating diverse synthetic data does not as strongly benefit in-distribution performance, filtering for more diverse data facilitates more robust OOD generalization. We also confirm the existence of model and data scaling laws for synthetically generated problems, which positively benefit downstream model generalization.
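The solve-rate proxy described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `estimate_solve_rate`, `toy_solver`, and the filter thresholds are all assumptions chosen for the example.

```python
def estimate_solve_rate(problem, solver, n_attempts=8):
    """Fraction of sampled solutions from the generating model itself
    that match the reference answer: a cheap proxy for difficulty."""
    correct = sum(solver(problem, i) == problem["answer"]
                  for i in range(n_attempts))
    return correct / n_attempts

def filter_by_difficulty(problems, solver, lo=0.1, hi=0.7):
    """Keep problems that are neither trivial (solve-rate above `hi`)
    nor likely ill-posed (solve-rate below `lo`)."""
    return [p for p in problems
            if lo <= estimate_solve_rate(p, solver) <= hi]

def toy_solver(problem, attempt_idx):
    # Deterministic stand-in for sampling one model solution: succeeds
    # on the first `n_solvable` attempts, fails on the rest.
    return problem["answer"] if attempt_idx < problem["n_solvable"] else "?"

pool = [
    {"answer": "42", "n_solvable": 8},  # trivial: solve-rate 1.0
    {"answer": "42", "n_solvable": 3},  # medium:  solve-rate 0.375
    {"answer": "42", "n_solvable": 0},  # unsolvable: solve-rate 0.0
]
kept = filter_by_difficulty(pool, toy_solver)
print(len(kept))  # -> 1: only the medium-difficulty problem survives
```

In a real pipeline the stand-in solver would be replaced by repeated sampling from the same LLM that generated the problems, so difficulty filtering requires no second, stronger model.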
Related papers
- Learning to Pose Problems: Reasoning-Driven and Solver-Adaptive Data Synthesis for Large Reasoning Models [54.29243291958429]
We develop a problem generator that reasons explicitly to plan problem directions before synthesis. We treat the solver's feedback on synthetic problems as a reward signal, enabling the generator to calibrate difficulty. Our method achieves an average improvement of 2.5% and generalizes to both language and vision-language models.
arXiv Detail & Related papers (2025-11-13T03:08:51Z)
- QueST: Incentivizing LLMs to Generate Difficult Problems [77.75835742350644]
Large Language Models have achieved strong performance on reasoning tasks, solving competition-level coding and math problems. Existing competitive coding datasets contain only thousands to tens of thousands of problems. We propose QueST, a novel framework which combines difficulty-aware graph sampling and difficulty-aware rejection fine-tuning.
arXiv Detail & Related papers (2025-10-20T16:29:53Z)
- An adaptive sampling algorithm for data-generation to build a data-manifold for physical problem surrogate modeling [0.7499722271664147]
We present an Adaptive Sampling Algorithm for Data Generation (ASADG) involving a physical model. We demonstrate the efficiency of the data sampling algorithm in comparison with the Latin hypercube sampling (LHS) method for generating more representative input data.
arXiv Detail & Related papers (2025-05-13T12:17:10Z)
- OptMATH: A Scalable Bidirectional Data Synthesis Framework for Optimization Modeling [9.617742955894247]
A lack of high-quality optimization modeling datasets hampers large language models. We propose a scalable framework for synthesizing a high-quality dataset, named OptMATH. We demonstrate that models of various sizes trained on OptMATH achieve superior results on multiple modeling benchmarks.
arXiv Detail & Related papers (2025-02-16T12:38:37Z)
- BARE: Leveraging Base Language Models for Few-Shot Synthetic Data Generation [71.46236155101032]
Current data generation methods rely on seed sets containing tens of thousands of examples to prompt instruction-tuned models. We show that when working with only a few seed examples, instruction-tuned models produce insufficient diversity for downstream tasks. We propose Base-Refine, a novel two-stage method that combines the diversity of base models with the quality assurance of instruction-tuned models.
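A two-stage base-then-refine pipeline of the kind this entry describes can be sketched as follows. The `base_generate` and `instruct_refine` functions are hypothetical stand-ins for the two model calls; the real method prompts a base model for diverse raw drafts and an instruction-tuned model to clean each one.

```python
def base_generate(seeds, n_drafts):
    # Stand-in for sampling raw, diverse drafts from a base (non-instruct) LM,
    # cycling through the few available seed examples as prompts.
    return [f"draft {i}: variation on {seeds[i % len(seeds)]}"
            for i in range(n_drafts)]

def instruct_refine(draft):
    # Stand-in for an instruction-tuned LM improving one draft's quality
    # while preserving its (diverse) content.
    return draft.replace("draft", "refined")

seeds = ["seed problem A", "seed problem B"]
synthetic = [instruct_refine(d) for d in base_generate(seeds, n_drafts=4)]
print(synthetic[0])  # -> "refined 0: variation on seed problem A"
```

The design point is the division of labor: diversity comes from the first (base-model) stage, while per-example quality is enforced by the second (instruction-tuned) stage.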
arXiv Detail & Related papers (2025-02-03T00:12:40Z)
- Evaluating Language Models as Synthetic Data Generators [74.80905172696366]
AgoraBench is a benchmark that provides standardized settings and metrics to evaluate LMs' data generation abilities. Through synthesizing 1.26 million training instances using 6 LMs and training 99 student models, we uncover key insights about LMs' data generation capabilities.
arXiv Detail & Related papers (2024-12-04T19:20:32Z)
- Surveying the Effects of Quality, Diversity, and Complexity in Synthetic Data From Large Language Models [12.85318938363753]
We evaluate algorithms via the makeup of synthetic data generated by each algorithm in terms of data quality, diversity, and complexity. We examine the effect of various components in the synthetic data pipeline on each data characteristic. We argue that balancing these trade-offs is essential to the development of future self-improvement algorithms.
arXiv Detail & Related papers (2024-12-04T02:47:45Z)
- Unleashing LLM Reasoning Capability via Scalable Question Synthesis from Scratch [54.12139707822201]
We propose ScaleQuest, a novel, scalable, and cost-effective data synthesis method. By generating diverse questions from scratch, we produce a dataset of 1 million problem-solution pairs. Our experiments demonstrate that models trained on our data outperform existing open-source datasets.
arXiv Detail & Related papers (2024-10-24T12:42:04Z)
- Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models [89.88010750772413]
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs).
Our work delves into these specific flaws associated with question-answer (Q-A) pairs, a prevalent type of synthetic data, and presents a method based on unlearning techniques to mitigate these flaws.
Our work has yielded key insights into the effective use of synthetic data, aiming to promote more robust and efficient LLM training.
arXiv Detail & Related papers (2024-06-18T08:38:59Z)
- Synthetic Oversampling: Theory and A Practical Approach Using LLMs to Address Data Imbalance [16.047084318753377]
Imbalanced classification and spurious correlation are common challenges in data science and machine learning. Recent advances have proposed leveraging the flexibility and generative capabilities of large language models to generate synthetic samples. This article develops novel theoretical foundations to systematically study the roles of synthetic samples in addressing imbalanced classification and spurious correlation.
arXiv Detail & Related papers (2024-06-05T21:24:26Z)
- Let's Synthesize Step by Step: Iterative Dataset Synthesis with Large Language Models by Extrapolating Errors from Small Models [69.76066070227452]
*Data Synthesis* is a promising way to train a small model with very little labeled data.
We propose *Synthesis Step by Step* (**S3**), a data synthesis framework that shrinks this distribution gap.
Our approach improves the performance of a small model by reducing the gap between the synthetic dataset and the real data.
arXiv Detail & Related papers (2023-10-20T17:14:25Z)
- Synthetic data, real errors: how (not) to publish and use synthetic data [86.65594304109567]
We show how the generative process affects the downstream ML task.
We introduce Deep Generative Ensemble (DGE) to approximate the posterior distribution over the generative process model parameters.
arXiv Detail & Related papers (2023-05-16T07:30:29Z)
- Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation [41.9785159975426]
State-of-the-art question answering models remain susceptible to a variety of adversarial attacks and are still far from obtaining human-level language understanding.
One proposed way forward is dynamic adversarial data collection, in which a human annotator attempts to create examples for which a model-in-the-loop fails.
In this work, we investigate several answer selection, question generation, and filtering methods that form a synthetic adversarial data generation pipeline.
Models trained on both synthetic and human-generated data outperform models not trained on synthetic adversarial data, and obtain state-of-the-art results on the Adversarial
arXiv Detail & Related papers (2021-04-18T02:00:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.