Towards Universal Debiasing for Language Models-based Tabular Data Generation
- URL: http://arxiv.org/abs/2509.16475v1
- Date: Sat, 20 Sep 2025 00:06:53 GMT
- Title: Towards Universal Debiasing for Language Models-based Tabular Data Generation
- Authors: Tianchun Li, Tianci Liu, Xingchen Wang, Rongzhe Wei, Pan Li, Lu Su, Jing Gao
- Abstract summary: We introduce a universal debiasing framework that minimizes group-level dependencies by simultaneously reducing the mutual information between advantaged and protected attributes. Our framework effectively balances fairness and utility, offering a scalable and practical solution for debiasing in high-stakes applications.
- Score: 16.31419748401203
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have achieved promising results in tabular data generation. However, inherent historical biases in tabular datasets often cause LLMs to exacerbate fairness issues, particularly when multiple advantaged and protected features are involved. In this work, we introduce a universal debiasing framework that minimizes group-level dependencies by simultaneously reducing the mutual information between advantaged and protected attributes. By leveraging the autoregressive structure and analytic sampling distributions of LLM-based tabular data generators, our approach efficiently computes mutual information, reducing the need for cumbersome numerical estimations. Building on this foundation, we propose two complementary methods: a direct preference optimization (DPO)-based strategy, namely UDF-DPO, that integrates seamlessly with existing models, and a targeted debiasing technique, namely UDF-MIX, that achieves debiasing without tuning the parameters of LLMs. Extensive experiments demonstrate that our framework effectively balances fairness and utility, offering a scalable and practical solution for debiasing in high-stakes applications.
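The abstract notes that the analytic sampling distributions of an autoregressive generator allow mutual information to be computed exactly rather than estimated numerically. A minimal sketch of that idea, using hypothetical probability tables in place of an LLM's per-column conditionals (the attribute names and values below are illustrative assumptions, not from the paper):

```python
import math

# Toy analytic distributions standing in for the generator's conditionals:
# S is a protected attribute, A an advantaged attribute (hypothetical values).
p_s = {"g0": 0.6, "g1": 0.4}          # p(S)
p_a_given_s = {                        # p(A | S)
    "g0": {"hi": 0.7, "lo": 0.3},
    "g1": {"hi": 0.4, "lo": 0.6},
}

def mutual_information(p_s, p_a_given_s):
    """Exact I(S; A) in nats from analytic distributions."""
    # Marginal p(a) = sum_s p(s) * p(a | s)
    p_a = {}
    for s, ps in p_s.items():
        for a, pa in p_a_given_s[s].items():
            p_a[a] = p_a.get(a, 0.0) + ps * pa
    # I(S; A) = sum_{s,a} p(s) p(a|s) log[ p(a|s) / p(a) ]
    mi = 0.0
    for s, ps in p_s.items():
        for a, pa in p_a_given_s[s].items():
            if pa > 0.0:
                mi += ps * pa * math.log(pa / p_a[a])
    return mi

mi = mutual_information(p_s, p_a_given_s)
print(f"I(S; A) = {mi:.4f} nats")
```

Driving this quantity toward zero is the debiasing objective; because every term comes from the generator's own closed-form conditionals, no Monte Carlo MI estimator is needed.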
Related papers
- Nonparametric LLM Evaluation from Preference Data [86.96268870461472]
We propose a nonparametric statistical framework, DMLEval, for comparing and ranking large language models (LLMs) from preference data. Our framework provides practitioners with powerful, state-of-the-art methods for comparing or ranking LLMs.
arXiv Detail & Related papers (2026-01-29T15:00:07Z) - What Language Models Know But Don't Say: Non-Generative Prior Extraction for Generalization [5.663538370244175]
We propose LoID, a deterministic method for extracting informative prior distributions for Bayesian logistic regression. Rather than relying on generated text, we probe the model's confidence in opposing semantic directions through carefully constructed sentences. We evaluate LoID on ten real-world datasets under synthetic out-of-distribution (OOD) settings.
arXiv Detail & Related papers (2026-01-24T22:05:01Z) - SPaRFT: Self-Paced Reinforcement Fine-Tuning for Large Language Models [51.74498855100541]
Large language models (LLMs) have shown strong reasoning capabilities when fine-tuned with reinforcement learning (RL). We propose SPaRFT, a self-paced learning framework that enables efficient learning based on the capability of the model being trained.
arXiv Detail & Related papers (2025-08-07T03:50:48Z) - In-Context Bias Propagation in LLM-Based Tabular Data Generation [2.182762698614784]
We show that even mild in-context biases lead to global statistical distortions. We introduce an adversarial scenario where a malicious contributor can inject bias into the synthetic dataset. Our findings demonstrate a new vulnerability associated with LLM-based data generation pipelines.
arXiv Detail & Related papers (2025-06-11T11:39:29Z) - A Note on Statistically Accurate Tabular Data Generation Using Large Language Models [0.0]
This work introduces a probability-driven prompting approach that leverages large language models to estimate conditional distributions. Results highlight the potential of prompting probability distributions to enhance the statistical fidelity of data generated by large language models.
arXiv Detail & Related papers (2025-05-05T14:05:15Z) - Information Gain-Guided Causal Intervention for Autonomous Debiasing Large Language Models [40.853803921563596]
Current large language models (LLMs) may still capture dataset biases and utilize them during inference. We propose an information gain-guided causal intervention debiasing framework. ICD can effectively debias LLMs to improve their generalizability across different tasks.
arXiv Detail & Related papers (2025-04-17T12:39:25Z) - LLM-TabLogic: Preserving Inter-Column Logical Relationships in Synthetic Tabular Data via Prompt-Guided Latent Diffusion [49.898152180805454]
Synthetic datasets must maintain domain-specific logical consistency. Existing generative models often overlook these inter-column relationships. This study presents the first method to effectively preserve inter-column relationships without requiring domain knowledge.
arXiv Detail & Related papers (2025-03-04T00:47:52Z) - Diversity as a Reward: Fine-Tuning LLMs on a Mixture of Domain-Undetermined Data [54.3895971080712]
Fine-tuning large language models (LLMs) using diverse datasets is crucial for enhancing their overall performance across various domains. We propose a new method that gives the LLM a dual identity: an output model to cognitively probe and select data based on diversity reward, as well as an input model to be tuned with the selected data.
arXiv Detail & Related papers (2025-02-05T17:21:01Z) - Rethinking Relation Extraction: Beyond Shortcuts to Generalization with a Debiased Benchmark [53.876493664396506]
Benchmarks are crucial for evaluating machine learning algorithm performance, facilitating comparison and identifying superior solutions. This paper addresses the issue of entity bias in relation extraction tasks, where models tend to rely on entity mentions rather than context. We propose a debiased relation extraction benchmark, DREB, that breaks the pseudo-correlation between entity mentions and relation types through entity replacement. To establish a new baseline on DREB, we introduce MixDebias, a debiasing method combining data-level and model training-level techniques.
arXiv Detail & Related papers (2025-01-02T17:01:06Z) - Improving LLM Group Fairness on Tabular Data via In-Context Learning [23.53624663038328]
Large language models (LLMs) can fail to generate predictions that satisfy group fairness, that is, produce equitable outcomes across groups. In this work, we investigate four empirical approaches to improve group fairness. We show the effectiveness of these methods in enhancing demographic parity while maintaining high overall performance.
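The demographic parity referenced in this summary is a standard group-fairness metric: the rate of positive predictions should not differ across groups. A minimal sketch of the parity-gap computation, with made-up predictions and group labels for illustration:

```python
def demographic_parity_gap(preds, groups):
    """Max minus min positive-prediction rate across groups (0 is parity)."""
    counts = {}  # group -> (total, positives)
    for y, g in zip(preds, groups):
        n, k = counts.get(g, (0, 0))
        counts[g] = (n + 1, k + int(y))
    rates = {g: k / n for g, (n, k) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical binary predictions over two groups.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # group a: 3/4, group b: 1/4 -> gap 0.5
```

A gap of 0 means the classifier's positive rate is identical across groups; fairness interventions such as those in the paper aim to shrink this gap without sacrificing accuracy.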
arXiv Detail & Related papers (2024-12-05T22:23:30Z) - P-TA: Using Proximal Policy Optimization to Enhance Tabular Data Augmentation via Large Language Models [15.969452637480167]
We propose using proximal policy optimization (PPO) to apply Generative Adversarial Networks (GANs). PPO leads to an approximately 4% improvement in the accuracy of models trained on synthetically generated data over state-of-the-art datasets.
arXiv Detail & Related papers (2024-06-17T10:22:00Z) - Spread Preference Annotation: Direct Preference Judgment for Efficient LLM Alignment [72.99676237703099]
We propose a new framework that boosts the alignment of large language models with human preferences. Our key idea is leveraging the human prior knowledge within the small (seed) data. We introduce a noise-aware preference learning algorithm to mitigate the risk of low quality within generated preference data.
arXiv Detail & Related papers (2024-06-06T18:01:02Z) - Self-Augmented Preference Optimization: Off-Policy Paradigms for Language Model Alignment [104.18002641195442]
We introduce Self-Augmented Preference Optimization (SAPO), an effective and scalable training paradigm that does not require existing paired data.
Building on the self-play concept, which autonomously generates negative responses, we further incorporate an off-policy learning pipeline to enhance data exploration and exploitation.
arXiv Detail & Related papers (2024-05-31T14:21:04Z) - ChatGPT Based Data Augmentation for Improved Parameter-Efficient Debiasing of LLMs [65.9625653425636]
Large language models (LLMs) exhibit harmful social biases.
This work introduces a novel approach utilizing ChatGPT to generate synthetic training data.
arXiv Detail & Related papers (2024-02-19T01:28:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.