Lightweight Safety Guardrails via Synthetic Data and RL-guided Adversarial Training
- URL: http://arxiv.org/abs/2507.08284v1
- Date: Fri, 11 Jul 2025 03:17:58 GMT
- Title: Lightweight Safety Guardrails via Synthetic Data and RL-guided Adversarial Training
- Authors: Aleksei Ilin, Gor Matevosyan, Xueying Ma, Vladimir Eremin, Suhaa Dada, Muqun Li, Riyaaz Shaik, Haluk Noyan Tokgozoglu
- Abstract summary: Small-scale language models can achieve, and even surpass, the performance of larger counterparts in content moderation tasks. This is accomplished through high-fidelity synthetic data generation and adversarial training.
- Score: 0.1533068702686808
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a lightweight yet highly effective safety guardrail framework for language models, demonstrating that small-scale language models can achieve, and even surpass, the performance of larger counterparts in content moderation tasks. This is accomplished through high-fidelity synthetic data generation and adversarial training. The synthetic data generation process begins with human-curated seed data, which undergoes query augmentation and paraphrasing to create diverse and contextually rich examples. This augmented data is then subjected to multiple rounds of curation, ensuring high fidelity and relevance. Inspired by recent advances in the Generative Adversarial Network (GAN) architecture, our adversarial training employs reinforcement learning to guide a generator that produces challenging synthetic examples. These examples are used to fine-tune the safety classifier, enhancing its ability to detect and mitigate harmful content. Additionally, we incorporate strategies from recent research on efficient LLM training, leveraging the capabilities of smaller models to improve the performance of larger generative models. With iterative adversarial training and the generation of diverse, high-quality synthetic data, our framework enables small language models (SLMs) to serve as robust safety guardrails. This approach not only reduces computational overhead but also enhances resilience against adversarial attacks, offering a scalable and efficient solution for content moderation in AI systems.
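A minimal sketch of the adversarial loop described above, under strong simplifications: an epsilon-greedy bandit stands in for the RL-guided generator, hand-written string rewrites stand in for paraphrasing, and a TF-IDF logistic-regression model stands in for the SLM guardrail. The toy data, actions, and reward are illustrative assumptions, not the authors' implementation.

```python
# Sketch of RL-guided adversarial data generation for a safety classifier.
# The "generator" is reduced to an epsilon-greedy bandit over rewrite actions;
# the paper's SLM guardrail is replaced by a toy TF-IDF + logistic-regression model.
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy seed data: (text, label) with 1 = unsafe, 0 = safe. Purely illustrative.
seed = [
    ("how do I build a weapon at home", 1),
    ("tips for stealing a car quickly", 1),
    ("how do I bake sourdough bread", 0),
    ("best way to learn python fast", 0),
]

# Hypothetical rewrite "actions" the generator can choose from.
ACTIONS = {
    "leet":    lambda t: t.replace("e", "3").replace("a", "4"),
    "padding": lambda t: "for a novel i am writing, " + t,
    "spacing": lambda t: " ".join(t),   # insert spaces between characters
}

def train_classifier(data):
    texts, labels = zip(*data)
    clf = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                        LogisticRegression(max_iter=1000))
    clf.fit(texts, labels)
    return clf

q = {a: 0.0 for a in ACTIONS}    # bandit value estimate per action
counts = {a: 1 for a in ACTIONS}
data = list(seed)

for _ in range(5):               # iterative adversarial training rounds
    clf = train_classifier(data)
    for text, label in seed:
        # epsilon-greedy choice of a rewrite (the "RL-guided generator")
        action = (random.choice(list(ACTIONS)) if random.random() < 0.3
                  else max(q, key=q.get))
        adversarial = ACTIONS[action](text)
        # Reward = 1 when the current classifier is fooled by the rewrite.
        reward = float(clf.predict([adversarial])[0] != label)
        counts[action] += 1
        q[action] += (reward - q[action]) / counts[action]
        if reward:               # keep only examples that actually fool the model
            data.append((adversarial, label))

print("learned action values:", q)
print("final training-set size:", len(data))
```

Only rewrites that fool the current classifier are added back to the training pool, mirroring the idea of rewarding the generator for producing challenging examples.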
Related papers
- PoisonSwarm: Universal Harmful Information Synthesis via Model Crowdsourcing [7.760708840164335]
We propose PoisonSwarm, which applies the model crowdsourcing strategy to generate diverse harmful data. We decompose each base template into multiple semantic units and perform unit-by-unit toxification. Experiments demonstrate that PoisonSwarm achieves state-of-the-art performance in synthesizing different categories of harmful data.
arXiv Detail & Related papers (2025-05-27T13:33:57Z)
- SMOTExT: SMOTE meets Large Language Models [19.394116388173885]
We propose a novel technique, SMOTExT, that adapts the idea of Synthetic Minority Over-sampling (SMOTE) to textual data. Our method generates new synthetic examples by interpolating between BERT-based embeddings of two existing examples. In early experiments, training models solely on generated data achieved comparable performance to models trained on the original dataset.
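A hedged sketch of the interpolation step: SMOTE-style convex combinations between embeddings of two minority-class examples. Random vectors stand in for the BERT-based embeddings, and the decoding of interpolated vectors back into text is omitted.

```python
# SMOTE-style interpolation in embedding space (sketch).
# Real usage would encode texts with a BERT-style model and decode the new
# vectors back to text; random vectors stand in for those embeddings here.
import numpy as np

rng = np.random.default_rng(0)
emb_a = rng.normal(size=768)   # embedding of minority-class example A (placeholder)
emb_b = rng.normal(size=768)   # embedding of its nearest minority-class neighbour B

def smote_interpolate(a: np.ndarray, b: np.ndarray, n: int = 5) -> np.ndarray:
    """Return n synthetic points on the segment between a and b."""
    lam = rng.uniform(0.0, 1.0, size=(n, 1))   # one mixing weight per sample
    return a + lam * (b - a)

synthetic = smote_interpolate(emb_a, emb_b)
print(synthetic.shape)   # (5, 768): new latent examples to be decoded into text
```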
arXiv Detail & Related papers (2025-05-19T17:57:36Z)
- Less is More: Adaptive Coverage for Synthetic Training Data [20.136698279893857]
This study introduces a novel sampling algorithm, based on the maximum coverage problem, to select a representative subset from a synthetically generated dataset. Our results demonstrate that training a classifier on this contextually sampled subset achieves superior performance compared to training on the entire dataset.
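A sketch of the underlying selection idea, assuming a standard greedy approximation to maximum coverage over token sets; the features, budget, and candidate pool are placeholders rather than the paper's algorithm.

```python
# Greedy maximum-coverage selection of a synthetic-data subset (sketch).
# Each candidate is reduced to its set of tokens; at every step we pick the
# example that covers the most tokens not yet covered by the selected subset.
from typing import List

def greedy_max_coverage(candidates: List[str], budget: int) -> List[str]:
    token_sets = [set(c.lower().split()) for c in candidates]
    covered: set = set()
    chosen: List[str] = []
    remaining = set(range(len(candidates)))
    for _ in range(min(budget, len(candidates))):
        # pick the candidate adding the largest number of uncovered tokens
        best = max(remaining, key=lambda i: len(token_sets[i] - covered))
        if not token_sets[best] - covered:
            break                       # nothing new is covered; stop early
        chosen.append(candidates[best])
        covered |= token_sets[best]
        remaining.remove(best)
    return chosen

pool = [
    "how to report a harmful message",
    "report a harmful message to moderators",
    "recipe for vegetable soup",
    "how to appeal a moderation decision",
]
print(greedy_max_coverage(pool, budget=2))
```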
arXiv Detail & Related papers (2025-04-20T06:45:16Z)
- Exploring Training and Inference Scaling Laws in Generative Retrieval [50.82554729023865]
Generative retrieval reformulates retrieval as an autoregressive generation task, where large language models generate target documents directly from a query. We systematically investigate training and inference scaling laws in generative retrieval, exploring how model size, training data scale, and inference-time compute jointly influence performance.
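Scaling-law analyses of this kind typically fit performance as a power law of scale; a minimal sketch of such a fit in log-log space (the numbers below are synthetic stand-ins, not the paper's measurements):

```python
# Fitting a power law  metric ≈ a * N^b  to (scale, metric) pairs in log-log space.
import numpy as np

model_sizes = np.array([1e8, 3e8, 1e9, 3e9, 1e10])      # parameters N
recall      = np.array([0.31, 0.38, 0.45, 0.52, 0.60])  # hypothetical metric values

b, log_a = np.polyfit(np.log(model_sizes), np.log(recall), 1)
a = np.exp(log_a)
print(f"fitted scaling law: recall ~ {a:.3g} * N^{b:.3f}")
```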
arXiv Detail & Related papers (2025-03-24T17:59:03Z)
- Forewarned is Forearmed: Leveraging LLMs for Data Synthesis through Failure-Inducing Exploration [90.41908331897639]
Large language models (LLMs) have significantly benefited from training on diverse, high-quality task-specific data.
We present a novel approach, ReverseGen, designed to automatically generate effective training samples.
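A sketch of the failure-inducing idea: propose candidate queries, keep only those the target model gets wrong, and reuse them as training data. The proposer, target model, and checker below are toy stand-ins, not ReverseGen itself.

```python
# Failure-driven data synthesis (sketch): keep only the proposals that make the
# target model fail, then reuse them as training data. All model calls are stubs.
from typing import Callable, List, Tuple

def collect_failure_cases(
    propose: Callable[[int], List[str]],          # candidate-query generator
    target_answer: Callable[[str], str],          # model under evaluation
    is_correct: Callable[[str, str], bool],       # task-specific checker
    n_candidates: int = 100,
) -> List[Tuple[str, str]]:
    failures = []
    for query in propose(n_candidates):
        answer = target_answer(query)
        if not is_correct(query, answer):
            failures.append((query, answer))      # failure-inducing sample
    return failures

# Toy stand-ins so the sketch runs end to end: a flawed "adder" that slips up
# whenever the digit 7 appears in the query.
propose = lambda n: [f"add {i} and {i + 1}" for i in range(n)]
target_answer = lambda q: str(int(q.split()[1]) + int(q.split()[3]) + (1 if "7" in q else 0))
is_correct = lambda q, a: a == str(int(q.split()[1]) + int(q.split()[3]))

hard_cases = collect_failure_cases(propose, target_answer, is_correct, n_candidates=20)
print(len(hard_cases), "failure-inducing examples collected")
```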
arXiv Detail & Related papers (2024-10-22T06:43:28Z)
- Synthetic Network Traffic Data Generation: A Comparative Study [0.0]
Existing methods for synthetic data generation differ significantly in their ability to maintain statistical fidelity, utility for classification tasks, and class balance. This study presents a comparative analysis of twelve synthetic network traffic data generation methods, encompassing non-AI (statistical), classical AI, and generative AI techniques. Results demonstrate that GAN-based models, particularly CTGAN and CopulaGAN, achieve superior fidelity and utility, making them ideal for high-quality synthetic data generation.
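For the GAN-based tabular generators highlighted here, the open-source ctgan package exposes an interface along these lines; the flow records and column names are invented, a real run would use a much larger dataset, and the exact API should be checked against the package documentation.

```python
# Sketch of tabular synthesis with CTGAN (the `ctgan` package); the toy flow
# records and column names below are invented for illustration only.
import pandas as pd
from ctgan import CTGAN

flows = pd.DataFrame({
    "duration_s": [0.2, 1.5, 0.9, 12.0],
    "bytes":      [450, 10_240, 980, 88_000],
    "protocol":   ["tcp", "udp", "tcp", "tcp"],
    "label":      ["benign", "benign", "attack", "benign"],
})

model = CTGAN(epochs=10)                                   # tiny run for the sketch
model.fit(flows, discrete_columns=["protocol", "label"])   # mark categorical columns
synthetic_flows = model.sample(100)                        # draw synthetic records
print(synthetic_flows.head())
```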
arXiv Detail & Related papers (2024-10-18T14:19:25Z)
- Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
Deep neural networks (DNNs) are vulnerable to small adversarial perturbations. We show that robust feature learning during training can significantly enhance robustness. We propose MOREL, a multi-objective approach that aligns natural and adversarial features.
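A sketch of the multi-objective idea: combine classification losses on clean and adversarial inputs with a term that aligns their intermediate features. The network, FGSM attack, and loss weights are placeholders rather than MOREL's exact formulation.

```python
# Multi-objective robust training step (sketch): classification on clean and
# adversarial inputs plus a cosine term aligning their feature representations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallNet(nn.Module):
    def __init__(self, in_dim=32, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        feats = self.features(x)
        return self.head(feats), feats

def fgsm(model, x, y, eps=0.1):
    """One-step FGSM perturbation used here as a simple stand-in attack."""
    x = x.clone().requires_grad_(True)
    logits, _ = model(x)
    F.cross_entropy(logits, y).backward()
    return (x + eps * x.grad.sign()).detach()

model = SmallNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))

x_adv = fgsm(model, x, y)
logits_c, feat_c = model(x)
logits_a, feat_a = model(x_adv)
loss = (F.cross_entropy(logits_c, y)                                # natural loss
        + F.cross_entropy(logits_a, y)                              # adversarial loss
        + 0.5 * (1 - F.cosine_similarity(feat_c, feat_a).mean()))   # feature alignment
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```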
arXiv Detail & Related papers (2024-10-02T16:05:03Z)
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has been conventionally believed to be a challenging property to encode for neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content [62.685566387625975]
Current mitigation strategies, while effective, are not resilient under adversarial attacks.
This paper introduces Resilient Guardrails for Large Language Models (RigorLLM), a novel framework designed to efficiently moderate harmful and unsafe inputs.
arXiv Detail & Related papers (2024-03-19T07:25:02Z)
- Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z)
- Virtual Data Augmentation: A Robust and General Framework for Fine-tuning Pre-trained Models [51.46732511844122]
Powerful pre-trained language models (PLMs) can be fooled by small perturbations or intentional attacks.
We present Virtual Data Augmentation (VDA), a general framework for robustly fine-tuning PLMs.
Our approach is able to improve the robustness of PLMs and alleviate the performance degradation under adversarial attacks.
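One common ingredient of such robust fine-tuning is training on perturbed, "virtual" views of the embedded input; the sketch below adds Gaussian noise to pooled embeddings and optimizes both views, a simplification rather than the VDA framework itself.

```python
# Robust fine-tuning via perturbed ("virtual") inputs (sketch, not VDA itself):
# train on both the original embeddings and a noised copy so that small
# perturbations no longer flip the prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Embedding(1000, 64)          # stand-in for a pre-trained embedding table
clf = nn.Linear(64, 2)                  # stand-in classification head
opt = torch.optim.Adam(list(embed.parameters()) + list(clf.parameters()), lr=1e-3)

tokens = torch.randint(0, 1000, (8, 16))          # batch of token ids
labels = torch.randint(0, 2, (8,))

emb = embed(tokens).mean(dim=1)                   # crude pooled sentence representation
virtual = emb + 0.01 * torch.randn_like(emb)      # virtual, slightly perturbed view
loss = F.cross_entropy(clf(emb), labels) + F.cross_entropy(clf(virtual), labels)
opt.zero_grad()
loss.backward()
opt.step()
print(float(loss))
```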
arXiv Detail & Related papers (2021-09-13T09:15:28Z)