AgentCTG: Harnessing Multi-Agent Collaboration for Fine-Grained Precise Control in Text Generation
- URL: http://arxiv.org/abs/2509.13677v1
- Date: Wed, 17 Sep 2025 04:07:22 GMT
- Title: AgentCTG: Harnessing Multi-Agent Collaboration for Fine-Grained Precise Control in Text Generation
- Authors: Xinxu Zhou, Jiaqi Bai, Zhenqi Sun, Fanxiang Zeng, Yue Liu,
- Abstract summary: Controlled Text Generation (CTG) continues to face numerous challenges, particularly in achieving fine-grained conditional control over generation. This paper introduces a novel and scalable framework, AgentCTG, which aims to enhance precise and complex control over text generation. AgentCTG achieves state-of-the-art results on multiple public datasets.
- Score: 10.208001921870709
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although significant progress has been made in many tasks within Natural Language Processing (NLP), Controlled Text Generation (CTG) continues to face numerous challenges, particularly in achieving fine-grained conditional control over generation. Moreover, real-world scenarios and online applications demand cost efficiency, scalability, domain knowledge learning, and more precise control, presenting further challenges for CTG. This paper introduces a novel and scalable framework, AgentCTG, which aims to enable precise and complex control over text generation by simulating the control and regulation mechanisms of multi-agent workflows. We explore various collaboration methods among different agents and introduce an auto-prompt module to further enhance generation effectiveness. AgentCTG achieves state-of-the-art results on multiple public datasets. To validate its effectiveness in practical applications, we propose a new and challenging Character-Driven Rewriting task, which aims to convert original text into new text that conforms to specific character profiles while preserving domain knowledge. When applied to online navigation with role-playing, our approach significantly enhances the driving experience through improved content delivery. By optimizing the generation of contextually relevant text, we enable more immersive interaction within online communities, fostering greater personalization and user engagement.
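The abstract describes a generate-regulate loop in which agents check a draft against the control condition and an auto-prompt module revises the instruction between rounds. The paper does not publish this pipeline in the listing, so the following is only a hypothetical sketch of that loop; the function names, the keyword-based "regulator", and the string-based "auto-prompt" are all illustrative stand-ins for LLM calls.

```python
# Hypothetical sketch of a multi-agent controlled-generation loop:
# a generator drafts text, a regulator reports constraint violations,
# and an auto-prompt step rewrites the instruction for the next round.
# All logic here is a toy stand-in for real LLM-backed agents.

def generate(prompt: str, constraint: str) -> str:
    # Stand-in for a generator agent: it only honors the constraint
    # once the prompt has been strengthened by the auto-prompt step.
    if "MUST" in prompt:
        return f"Draft mentioning {constraint}."
    return "Generic draft."

def regulate(text: str, constraint: str) -> list[str]:
    # Stand-in for a regulator agent: report missing control targets.
    return [] if constraint in text else [f"missing '{constraint}'"]

def auto_prompt(prompt: str, violations: list[str]) -> str:
    # Stand-in for the auto-prompt module: fold violations back into
    # the instruction so the next generation round can fix them.
    return prompt + " MUST fix: " + "; ".join(violations)

def agent_ctg(constraint: str, max_rounds: int = 3) -> str:
    prompt = f"Write a sentence. Target: {constraint}"
    text = ""
    for _ in range(max_rounds):
        text = generate(prompt, constraint)
        violations = regulate(text, constraint)
        if not violations:
            break
        prompt = auto_prompt(prompt, violations)
    return text
```

In this toy version the first draft fails the check, the auto-prompt step strengthens the instruction, and the second round satisfies the constraint, which mirrors the regulate-then-regenerate structure the abstract describes.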
Related papers
- Controlling Multimodal Conversational Agents with Coverage-Enhanced Latent Actions [62.02112656288921]
Reinforcement learning (RL) has been widely explored for adapting MCAs to various human-AI interaction scenarios. We learn a compact latent action space for RL fine-tuning instead. We leverage both paired image-text data and text-only data to construct the latent action space.
arXiv Detail & Related papers (2026-01-12T13:13:24Z) - PG-CE: A Progressive Generation Dataset with Constraint Enhancement for Controllable Text Generation [17.481794597546322]
Controllable Text Generation (CTG) has become a critical technology for enhancing system reliability and user experience. This paper proposes the PG-CE approach, which decomposes CTG tasks into three steps: type prediction, constraint construction, and guided generation.
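The three-step decomposition above (type prediction, constraint construction, guided generation) can be sketched as a simple pipeline. This is not the PG-CE implementation; the rules, constraint types, and prompt format below are hypothetical placeholders that only illustrate how the stages hand off to each other.

```python
# Illustrative three-stage pipeline in the shape PG-CE describes.
# Every rule and constraint value here is a toy placeholder.

def predict_type(task: str) -> str:
    # Step 1: map a task description to a constraint type (toy rule).
    return "length" if "short" in task else "style"

def build_constraints(ctype: str) -> dict:
    # Step 2: turn the predicted type into concrete, checkable constraints.
    return {"length": {"max_words": 10}, "style": {"tone": "formal"}}[ctype]

def guided_generate(task: str, constraints: dict) -> str:
    # Step 3: a real system would condition an LLM on the constraints;
    # here we simply surface them in the prompt string.
    return f"[{constraints}] response to: {task}"

task = "write a short greeting"
out = guided_generate(task, build_constraints(predict_type(task)))
```

The point of the decomposition is that each stage produces an inspectable intermediate artifact (a type, then a constraint set), so failures can be localized before generation.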
arXiv Detail & Related papers (2025-09-22T12:12:41Z) - ToolACE-MT: Non-Autoregressive Generation for Agentic Multi-Turn Interaction [84.90394416593624]
Agentic task-solving with Large Language Models (LLMs) requires multi-turn, multi-step interactions. Existing simulation-based data generation methods rely heavily on costly autoregressive interactions between multiple agents. We propose a novel Non-Autoregressive Iterative Generation framework, called ToolACE-MT, for constructing high-quality multi-turn agentic dialogues.
arXiv Detail & Related papers (2025-08-18T07:38:23Z) - BRIDGE: Bootstrapping Text to Control Time-Series Generation via Multi-Agent Iterative Optimization and Diffusion Modeling [51.830134409330704]
Time-series Generation (TSG) is a prominent research area with broad applications in simulations, data augmentation, and counterfactual analysis. We argue that text can provide semantic insights, domain information, and instance-specific temporal patterns to guide and improve TSG. We introduce BRIDGE, a hybrid text-controlled TSG framework that integrates semantic prototypes with text descriptions to support domain-level guidance.
arXiv Detail & Related papers (2025-03-04T09:40:00Z) - Controllable Text Generation for Large Language Models: A Survey [27.110528099257156]
This paper systematically reviews the latest advancements in Controllable Text Generation for Large Language Models.
We categorize CTG tasks into two primary types: content control and attribute control.
We address key challenges in current research, including reduced fluency and limited practicality.
arXiv Detail & Related papers (2024-08-22T17:59:04Z) - Successor Features for Efficient Multisubject Controlled Text Generation [48.37713738712319]
We introduce SF-GEN, which is grounded in two primary concepts: successor features (SFs) and language model rectification.
SF-GEN seamlessly integrates the two to enable dynamic steering of text generation with no need to alter the LLM's parameters.
To the best of our knowledge, our research represents the first application of successor features in text generation.
arXiv Detail & Related papers (2023-11-03T00:17:08Z) - Controllable Text Generation with Residual Memory Transformer [4.9329649616940205]
We propose a non-intrusive, lightweight control plugin that accompanies the generation of a causal language model (CLM) at arbitrary time steps. The proposed plugin, namely Residual Memory Transformer (RMT), has an encoder-decoder setup and can accept any type of control condition.
Extensive experiments are carried out on various control tasks, in the form of both automatic and human evaluations.
arXiv Detail & Related papers (2023-09-28T08:13:33Z) - Deliberate then Generate: Enhanced Prompting Framework for Text Generation [70.10319005141888]
The Deliberate then Generate (DTG) prompting framework consists of error detection instructions and candidates that may contain errors.
We conduct extensive experiments on 20+ datasets across 7 text generation tasks, including summarization, translation, dialogue, and more.
We show that DTG consistently outperforms existing prompting methods and achieves state-of-the-art performance on multiple text generation tasks.
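The DTG idea above, prompting the model to first detect errors in a possibly wrong candidate and only then produce the final output, can be rendered as a prompt template. The wording below is a hypothetical illustration, not the paper's exact prompt.

```python
# Minimal, hypothetical deliberate-then-generate prompt template:
# the model is asked to critique a candidate output before rewriting it.

def dtg_prompt(source: str, candidate: str) -> str:
    return (
        f"Source: {source}\n"
        f"Candidate output: {candidate}\n"
        "Step 1 (deliberate): list any errors in the candidate.\n"
        "Step 2 (generate): write the corrected output."
    )

prompt = dtg_prompt("Translate 'bonjour' to English.", "goodbye")
```

Seeding the prompt with an error-bearing candidate gives the model something concrete to critique, which is the mechanism the abstract credits for the improved generation quality.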
arXiv Detail & Related papers (2023-05-31T13:23:04Z) - Composable Text Controls in Latent Space with ODEs [97.12426987887021]
This paper proposes a new efficient approach for composable text operations in the compact latent space of text.
By connecting pretrained LMs to the latent space through efficient adaption, we then decode the sampled vectors into desired text sequences.
Experiments show that composing those operators within our approach manages to generate or edit high-quality text.
arXiv Detail & Related papers (2022-08-01T06:51:45Z) - Text Data Augmentation: Towards better detection of spear-phishing emails [1.6556358263455926]
We propose a corpus and task augmentation framework to augment English texts within our company.
Our proposal combines different methods, utilizing the BERT language model, multi-step back-translation, and heuristics.
We show that our augmentation framework improves performances on several text classification tasks using publicly available models and corpora.
arXiv Detail & Related papers (2020-07-04T07:45:04Z)