Adaptive Planning for Multi-Attribute Controllable Summarization with Monte Carlo Tree Search
- URL: http://arxiv.org/abs/2509.26435v1
- Date: Tue, 30 Sep 2025 15:55:24 GMT
- Title: Adaptive Planning for Multi-Attribute Controllable Summarization with Monte Carlo Tree Search
- Authors: Sangwon Ryu, Heejin Do, Yunsu Kim, Gary Geunbae Lee, Jungseul Ok
- Abstract summary: We propose adaptive planning for multi-attribute controllable summarization (PACO). PACO reframes the task as planning the order of sequential attribute control with a customized Monte Carlo Tree Search (MCTS). Experiments across diverse domains and models demonstrate that PACO achieves robust multi-attribute controllability.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Controllable summarization moves beyond generic outputs toward human-aligned summaries guided by specified attributes. In practice, the interdependence among attributes makes it challenging for language models to satisfy correlated constraints consistently. Moreover, previous approaches often require per-attribute fine-tuning, limiting flexibility across diverse summary attributes. In this paper, we propose adaptive planning for multi-attribute controllable summarization (PACO), a training-free framework that reframes the task as planning the order of sequential attribute control with a customized Monte Carlo Tree Search (MCTS). In PACO, nodes represent summaries, and actions correspond to single-attribute adjustments, enabling progressive refinement of only the attributes requiring further control. This strategy adaptively discovers optimal control orders, ultimately producing summaries that effectively meet all constraints. Extensive experiments across diverse domains and models demonstrate that PACO achieves robust multi-attribute controllability, surpassing both LLM-based self-planning models and fine-tuned baselines. Remarkably, PACO with Llama-3.2-1B rivals the controllability of the much larger Llama-3.3-70B baselines. With larger models, PACO achieves superior control performance, outperforming all competitors.
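The abstract's MCTS formulation (nodes are summaries, actions are single-attribute adjustments, and the search adaptively discovers a good control order) can be sketched as follows. This is a toy illustration under invented assumptions, not the paper's implementation: `apply_control` is a random stand-in for an LLM edit, and the attribute names, targets, and reward are hypothetical.

```python
import math
import random

# Hypothetical attributes and targets; the real method edits text with an LLM.
ATTRS = ("length", "extractiveness", "specificity")
TARGET = {a: 1.0 for a in ATTRS}

def reward(state):
    """Fraction of attributes within tolerance of their targets."""
    return sum(abs(state[a] - TARGET[a]) < 0.1 for a in ATTRS) / len(ATTRS)

def apply_control(state, attr, rng):
    """Toy stand-in for a single-attribute LLM edit: satisfies `attr`
    exactly, with small random spillover onto the other attributes."""
    new = dict(state)
    new[attr] = TARGET[attr]
    for other in ATTRS:
        if other != attr:
            new[other] += rng.uniform(-0.05, 0.05)
    return new

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def ucb1(child, parent, c=1.4):
    if child.visits == 0:
        return math.inf
    exploit = child.value / child.visits
    explore = c * math.sqrt(math.log(parent.visits) / child.visits)
    return exploit + explore

def plan(root_state, iters=200, seed=0):
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(iters):
        # Selection: descend by UCB1 until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=lambda ch: ucb1(ch, node))
        # Expansion: one child per single-attribute adjustment.
        if node.visits > 0:
            node.children = [Node(apply_control(node.state, a, rng), node, a)
                             for a in ATTRS]
            node = rng.choice(node.children)
        # Simulation: random rollout of a few further adjustments.
        state = node.state
        for _ in range(2):
            state = apply_control(state, rng.choice(ATTRS), rng)
        r = reward(state)
        # Backpropagation: update statistics up to the root.
        while node is not None:
            node.visits += 1
            node.value += r
            node = node.parent
    best = max(root.children, key=lambda ch: ch.visits)
    return best.action  # attribute to control first

first = plan({a: 0.0 for a in ATTRS})
```

Because each edit only perturbs the other attributes slightly, the search favors control orders whose later steps do not undo earlier ones, which mirrors the paper's idea of progressively refining only the attributes still requiring control.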
Related papers
- Fine-Grained Model Merging via Modular Expert Recombination [33.253051407398836]
We propose MERGE, a method that enables component-wise model merging and input-aware, on-demand module recombination at inference. MERGE formulates component-wise merging as a bi-objective optimization problem that balances cross-task performance and storage efficiency. We show that MERGE consistently outperforms strong baselines and generalizes effectively.
arXiv Detail & Related papers (2026-02-06T09:55:56Z)
- An Integrated Fusion Framework for Ensemble Learning Leveraging Gradient Boosting and Fuzzy Rule-Based Models [59.13182819190547]
Fuzzy rule-based models excel in interpretability and have seen widespread application across diverse fields. However, they face challenges such as complex design specifications and scalability issues with large datasets. This paper proposes an Integrated Fusion Framework that merges the strengths of both paradigms to enhance model performance and interpretability.
arXiv Detail & Related papers (2025-11-11T10:28:23Z)
- Merge and Guide: Unifying Model Merging and Guided Decoding for Controllable Multi-Objective Generation [49.98025799046136]
We introduce Merge-And-GuidE (MAGE), a two-stage framework that leverages model merging for guided decoding. In Stage 1, MAGE resolves a compatibility problem between the guidance and base models. In Stage 2, we merge explicit and implicit value models into a unified guidance proxy, which then steers the decoding of the base model from Stage 1.
arXiv Detail & Related papers (2025-10-04T11:10:07Z)
- Single LLM, Multiple Roles: A Unified Retrieval-Augmented Generation Framework Using Role-Specific Token Optimization [64.33914369424494]
RoleRAG is a unified RAG framework that achieves efficient multi-task processing through role-specific token optimization. RoleRAG comprises six modules, each handling a specific sub-task within the RAG process. We introduce a query graph to represent the decomposition of the query, which can be dynamically resolved according to the decomposition state.
arXiv Detail & Related papers (2025-05-21T12:25:12Z)
- Multi-Attribute Constraint Satisfaction via Language Model Rewriting [67.5778646504987]
Multi-Attribute Constraint Satisfaction (MACS) is a method for fine-tuning language models to satisfy user-specified constraints on multiple external real-valued attributes. Our work opens new avenues for generalized, real-valued multi-attribute control, with implications for diverse applications spanning NLP and bioinformatics.
arXiv Detail & Related papers (2024-12-26T12:36:39Z)
- Exploring Iterative Controllable Summarization with Large Language Models [22.80433394369022]
Large language models (LLMs) have demonstrated remarkable performance in abstractive summarization tasks. Our findings show that LLMs struggle more with numerical attributes than with linguistic attributes. We propose a guide-to-explain (GTE) framework for controllable summarization.
arXiv Detail & Related papers (2024-11-19T12:36:02Z)
- MACSum: Controllable Summarization with Mixed Attributes [56.685735509260276]
MACSum is the first human-annotated summarization dataset for controlling mixed attributes.
We propose two simple and effective parameter-efficient approaches for the new task of mixed controllable summarization.
arXiv Detail & Related papers (2022-11-09T17:17:37Z)
- Controllable Summarization with Constrained Markov Decision Process [50.04321779376415]
We study controllable text summarization which allows users to gain control on a particular attribute.
We propose a novel training framework based on a Constrained Markov Decision Process (CMDP).
Our framework can be applied to control important attributes of summarization, including length, covered entities, and abstractiveness.
arXiv Detail & Related papers (2021-08-07T09:12:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.