Team, Then Trim: An Assembly-Line LLM Framework for High-Quality Tabular Data Generation
- URL: http://arxiv.org/abs/2602.04785v1
- Date: Wed, 04 Feb 2026 17:34:41 GMT
- Title: Team, Then Trim: An Assembly-Line LLM Framework for High-Quality Tabular Data Generation
- Authors: Congjing Zhang, Ryan Feng Lin, Ruoxuan Bao, Shuai Huang
- Abstract summary: This paper introduces Team-then-Trim (T$^2$), a framework that synthesizes high-quality data through a collaborative team of LLMs. In T$^2$, specialized LLMs, guided by domain knowledge, are tasked with generating different data components sequentially. Empirical results on both simulated and real-world datasets demonstrate that T$^2$ outperforms state-of-the-art methods in producing high-quality data.
- Score: 4.818677616222802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While tabular data is fundamental to many real-world machine learning (ML) applications, acquiring high-quality tabular data is usually labor-intensive and expensive. Limited by the scarcity of observations, tabular datasets often exhibit critical deficiencies, such as class imbalance, selection bias, and low fidelity. To address these challenges, building on recent advances in Large Language Models (LLMs), this paper introduces Team-then-Trim (T$^2$), a framework that synthesizes high-quality tabular data through a collaborative team of LLMs, followed by a rigorous three-stage plug-in data quality control (QC) pipeline. In T$^2$, tabular data generation is conceptualized as a manufacturing process: specialized LLMs, guided by domain knowledge, are tasked with generating different data components sequentially, and the resulting products, i.e., the synthetic data, are systematically evaluated across multiple dimensions of QC. Empirical results on both simulated and real-world datasets demonstrate that T$^2$ outperforms state-of-the-art methods in producing high-quality tabular data, highlighting its potential to support downstream models when direct data collection is practically infeasible.
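The abstract describes the assembly line only at a high level. As a rough, hypothetical illustration of the idea, the snippet below chains per-component LLM "stations" and gates the output through plug-in QC checks; the `call_llm` client, prompts, and checks are placeholders, not the paper's actual implementation.

```python
# Minimal sketch of an assembly-line generation + QC pipeline in the spirit of
# T^2. All names (call_llm, COMPONENT_PROMPTS, qc checks) are hypothetical;
# the paper's actual prompts, LLM roles, and QC criteria are not specified here.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (any chat-completion client)."""
    raise NotImplementedError

# Each "station" on the line generates one data component, conditioned on the
# partial record produced by the stations before it.
COMPONENT_PROMPTS = {
    "demographics": "Generate plausible demographic fields as JSON: {context}",
    "outcome": "Given this partial record, generate the outcome field: {context}",
}

def generate_record() -> dict:
    record: dict = {}
    for component, template in COMPONENT_PROMPTS.items():
        response = call_llm(template.format(context=record))
        record[component] = response  # in practice: parse and validate JSON
    return record

def passes_qc(record: dict, checks) -> bool:
    """Multi-stage plug-in QC reduced to a list of boolean checks."""
    return all(check(record) for check in checks)

def synthesize(n: int, checks) -> list[dict]:
    data = []
    while len(data) < n:
        rec = generate_record()
        if passes_qc(rec, checks):  # trim rejects after the team generates
            data.append(rec)
    return data
```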
Related papers
- Follow-Your-Instruction: A Comprehensive MLLM Agent for World Data Synthesis [44.66179436245703]
Follow-Your-Instruction is a framework for automatically synthesizing high-quality 2D, 3D, and 4D data. It constructs 3D layouts and leverages Vision-Language Models (VLMs) for semantic refinement. We evaluate the quality of the generated data through comprehensive experiments on 2D, 3D, and 4D generative tasks.
arXiv Detail & Related papers (2025-08-07T17:12:54Z)
- FASTGEN: Fast and Cost-Effective Synthetic Tabular Data Generation with LLMs [3.703188184729035]
Synthetic data generation is an invaluable solution in scenarios where real-world data collection and usage are limited by cost and scarcity. Existing approaches that directly use large language models to generate each record individually impose prohibitive time and cost burdens. We propose a fast, cost-effective method for realistic tabular data synthesis that leverages LLMs to infer and encode each field's distribution into a reusable sampling script.
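FASTGEN's cost saving comes from querying the LLM once per field to characterize its distribution, then sampling rows locally. A minimal sketch of that reusable-sampling idea, with invented distribution specs (the paper's actual script format is not shown in the abstract):

```python
import random

# Sketch of FASTGEN's reusable-sampling-script idea: the LLM is asked once to
# describe each field's distribution; afterwards, rows are drawn locally at
# negligible cost. The field names and specs below are illustrative assumptions.
FIELD_SPECS = {
    "age": {"type": "gaussian", "mean": 41.0, "std": 12.0},
    "employment": {"type": "categorical",
                   "values": ["employed", "unemployed", "retired"],
                   "weights": [0.62, 0.08, 0.30]},
}

def sample_field(spec: dict):
    if spec["type"] == "gaussian":
        return random.gauss(spec["mean"], spec["std"])
    if spec["type"] == "categorical":
        return random.choices(spec["values"], weights=spec["weights"], k=1)[0]
    raise ValueError(f"unknown field type: {spec['type']}")

def sample_rows(n: int) -> list[dict]:
    return [{name: sample_field(spec) for name, spec in FIELD_SPECS.items()}
            for _ in range(n)]

print(sample_rows(3))  # further rows cost no additional LLM calls
```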
arXiv Detail & Related papers (2025-07-21T17:51:46Z)
- LLM-TabLogic: Preserving Inter-Column Logical Relationships in Synthetic Tabular Data via Prompt-Guided Latent Diffusion [49.898152180805454]
Synthetic datasets must maintain domain-specific logical consistency. Existing generative models often overlook these inter-column relationships. This study presents the first method to effectively preserve inter-column relationships without requiring domain knowledge.
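The inter-column relationships at issue are logical dependencies between fields. The paper's method is prompt-guided latent diffusion, which is not reproduced here; the toy check below only illustrates the kind of constraint the synthetic rows must satisfy, with invented fields and rules.

```python
# Illustration of the inter-column logical consistency LLM-TabLogic aims to
# preserve. This is NOT the paper's method; it only shows the sort of rule a
# synthetic row must not violate. Fields and rules are invented for the example.
RULES = [
    ("discharge after admission", lambda r: r["discharge_day"] >= r["admit_day"]),
    ("minors cannot hold a license", lambda r: r["age"] >= 16 or not r["has_license"]),
]

def violated_rules(row: dict) -> list[str]:
    return [name for name, pred in RULES if not pred(row)]

row = {"admit_day": 3, "discharge_day": 1, "age": 14, "has_license": True}
print(violated_rules(row))
# ['discharge after admission', 'minors cannot hold a license']
```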
arXiv Detail & Related papers (2025-03-04T00:47:52Z)
- Evaluating Language Models as Synthetic Data Generators [99.16334775127875]
AgoraBench is a benchmark that provides standardized settings and metrics to evaluate LMs' data generation abilities. Through synthesizing 1.26 million training instances using 6 LMs and training 99 student models, we uncover key insights about LMs' data generation capabilities.
arXiv Detail & Related papers (2024-12-04T19:20:32Z)
- Star-Agents: Automatic Data Optimization with LLM Agents for Instruction Tuning [71.2981957820888]
We propose a novel Star-Agents framework, which automates the enhancement of data quality across datasets.
The framework initially generates diverse instruction data with multiple LLM agents through a bespoke sampling method.
The generated data undergo a rigorous evaluation using a dual-model method that assesses both difficulty and quality.
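As a hedged sketch of such a dual-model pass (the judge models, prompts, and thresholds below are assumptions, not those of the paper), each sample could be scored separately for quality and difficulty and kept only if it clears both bars:

```python
# Sketch of a dual-model evaluation pass in the spirit of Star-Agents: one
# judge rates quality, another rates difficulty, and only samples clearing
# both thresholds are kept. Models, prompts, and thresholds are hypothetical.

def score_with_llm(model: str, prompt: str, sample: str) -> float:
    """Placeholder returning a score in [0, 1] from an LLM judge."""
    raise NotImplementedError

def filter_instructions(samples: list[str],
                        quality_min: float = 0.7,
                        difficulty_min: float = 0.4) -> list[str]:
    kept = []
    for s in samples:
        quality = score_with_llm("judge-A", "Rate the quality 0-1:", s)
        difficulty = score_with_llm("judge-B", "Rate the difficulty 0-1:", s)
        if quality >= quality_min and difficulty >= difficulty_min:
            kept.append(s)
    return kept
```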
arXiv Detail & Related papers (2024-11-21T02:30:53Z)
- Generating Realistic Tabular Data with Large Language Models [49.03536886067729]
Large language models (LLMs) have been used for diverse tasks, but they do not capture the correct correlation between the features and the target variable.
We propose an LLM-based method with three important improvements to correctly capture the ground-truth feature-class correlation in the real data.
Our experiments show that our method significantly outperforms 10 SOTA baselines on 20 datasets in downstream tasks.
arXiv Detail & Related papers (2024-10-29T04:14:32Z)
- Fine-Tuning Language Models on Multiple Datasets for Citation Intention Classification [17.03832781104098]
Citation Intention Classification (CIC) tools classify citations by their intention.
Prior research has shown that pretrained language models (PLMs) can achieve state-of-the-art performance on CIC benchmarks.
We propose a multi-task learning framework that jointly fine-tunes PLMs on a dataset of primary interest together with multiple auxiliary CIC datasets.
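One simple way to realize such joint fine-tuning is to mix each training batch from the primary dataset and a pooled set of auxiliary datasets. The sketch below shows that batching idea; the mixing ratio and sampling scheme are illustrative assumptions, not the paper's exact recipe.

```python
import random

# Sketch of joint fine-tuning on a primary dataset plus auxiliary CIC datasets
# by drawing mixed batches. The 50/50 primary/auxiliary split is an assumption.

def mixed_batches(primary, auxiliaries, batch_size=16, primary_frac=0.5):
    """Yield batches drawing primary_frac of examples from the primary dataset
    and the remainder from the pooled auxiliary datasets."""
    aux_pool = [ex for ds in auxiliaries for ex in ds]
    while True:
        n_primary = int(batch_size * primary_frac)
        batch = random.sample(primary, n_primary)
        batch += random.sample(aux_pool, batch_size - n_primary)
        random.shuffle(batch)  # interleave so gradients mix both sources
        yield batch
```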
arXiv Detail & Related papers (2024-10-17T08:45:02Z)
- Data Advisor: Dynamic Data Curation for Safety Alignment of Large Language Models [79.65071553905021]
We propose Data Advisor, a method for generating data that takes into account the characteristics of the desired dataset.
Data Advisor monitors the status of the generated data, identifies weaknesses in the current dataset, and advises the next iteration of data generation.
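A minimal sketch of that generate-monitor-advise loop, assuming a generic `call_llm` client and invented prompts (the paper's actual prompts and weakness taxonomy are not given here):

```python
# Sketch of a Data Advisor-style loop: after each generation round, an advisor
# step inspects the pool, summarizes weaknesses, and the summary steers the
# next round's generation prompt. All prompts here are hypothetical.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call."""
    raise NotImplementedError

def curate(n_rounds: int, per_round: int) -> list[str]:
    pool: list[str] = []
    advice = "none yet"
    for _ in range(n_rounds):
        gen_prompt = (f"Generate {per_round} safety-alignment examples. "
                      f"Address these weaknesses: {advice}")
        pool.extend(call_llm(gen_prompt).splitlines())
        # advisor step: identify gaps in what has been generated so far
        advice = call_llm("Summarize weaknesses of this dataset:\n"
                          + "\n".join(pool[-per_round:]))
    return pool
```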
arXiv Detail & Related papers (2024-10-07T17:59:58Z)
- Curated LLM: Synergy of LLMs and Data Curation for tabular augmentation in low-data regimes [57.62036621319563]
We introduce CLLM, which leverages the prior knowledge of Large Language Models (LLMs) for data augmentation in the low-data regime.
We demonstrate the superior performance of CLLM in the low-data regime compared to conventional generators.
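The curation half of CLLM filters LLM-generated rows before they reach the downstream model. The paper curates via learning dynamics; the sketch below substitutes a simpler proxy-model confidence filter to illustrate the generate-then-curate pattern, and its names and threshold are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simplified stand-in for CLLM-style curation: keep a synthetic row only if a
# small proxy model trained on the real data assigns its label enough
# confidence. Expects numpy arrays and integer class labels 0..k-1.
def curate_synthetic(X_real, y_real, X_syn, y_syn, min_conf=0.6):
    y_syn = np.asarray(y_syn)
    proxy = LogisticRegression(max_iter=1000).fit(X_real, y_real)
    proba = proxy.predict_proba(X_syn)          # shape (n_syn, n_classes)
    conf = proba[np.arange(len(y_syn)), y_syn]  # confidence in the given label
    keep = conf >= min_conf
    return X_syn[keep], y_syn[keep]
```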
arXiv Detail & Related papers (2023-12-19T12:34:46Z)