Collaborative Chain-of-Agents for Parametric-Retrieved Knowledge Synergy
- URL: http://arxiv.org/abs/2508.01696v2
- Date: Tue, 05 Aug 2025 08:00:17 GMT
- Title: Collaborative Chain-of-Agents for Parametric-Retrieved Knowledge Synergy
- Authors: Yi Jiang, Sendong Zhao, Jianbo Li, Haochun Wang, Lizhe Zhang, Yan Liu, Bing Qin
- Abstract summary: Collaborative Chain-of-Agents is a framework designed to enhance synergy over both parametric and retrieved knowledge. CoCoA-zero and CoCoA achieve superior performance on open-domain and multi-hop QA tasks.
- Score: 31.03800299571446
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retrieval-Augmented Generation (RAG) has emerged as a promising framework for enhancing the capabilities of Large Language Models (LLMs), especially in knowledge-intensive tasks. Despite its advantages, current RAG methods often struggle to *fully exploit knowledge during generation*. In particular, the synergy between the model's internal parametric knowledge and external retrieved knowledge remains limited: retrieved content may sometimes mislead generation, while certain generated content can guide the model toward more accurate outputs. In this work, we propose Collaborative Chain-of-Agents, a framework designed to explicitly enhance the synergy between parametric and retrieved knowledge. Specifically, we first introduce CoCoA-zero, a multi-agent RAG framework that first performs conditional knowledge induction and then reasons over the induced knowledge to derive answers. Building on this, we develop CoCoA, a long-chain training strategy that synthesizes extended multi-agent reasoning trajectories from CoCoA-zero to fine-tune the LLM. This strategy enhances the model's capability to explicitly integrate and jointly leverage parametric and retrieved knowledge. Experimental results show that CoCoA-zero and CoCoA achieve superior performance on open-domain and multi-hop QA tasks.
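To make the two-stage design concrete, here is a minimal sketch of a CoCoA-zero-style flow, reconstructed only from the abstract rather than from the authors' released code; `call_llm` and `retrieve` are hypothetical stand-ins for an LLM API and a retriever.

```python
"""Hedged sketch of a CoCoA-zero-style two-stage pipeline (an assumption based
on the abstract, not the authors' implementation)."""
from typing import Callable, List


def cocoa_zero_answer(
    question: str,
    call_llm: Callable[[str], str],
    retrieve: Callable[[str], List[str]],
    top_k: int = 5,
) -> str:
    # Stage 1a: parametric knowledge induction -- ask the model what it
    # already knows that is relevant, without any retrieved context.
    parametric_notes = call_llm(
        "List the facts you already know that are relevant to answering:\n"
        f"{question}"
    )

    # Stage 1b: retrieved knowledge induction -- condense the top-k passages
    # into question-conditioned notes so misleading content is filtered early.
    passages = retrieve(question)[:top_k]
    retrieved_notes = call_llm(
        "Summarize only the parts of these passages that help answer the question.\n"
        f"Question: {question}\nPassages:\n" + "\n---\n".join(passages)
    )

    # Stage 2: answer reasoning -- jointly use both knowledge sources and
    # resolve conflicts explicitly before committing to a final answer.
    return call_llm(
        "Answer the question. If the two knowledge sources disagree, state "
        "which one you trust and why, then give a short final answer.\n"
        f"Question: {question}\n"
        f"Parametric knowledge:\n{parametric_notes}\n"
        f"Retrieved knowledge:\n{retrieved_notes}"
    )
```

In this reading, stage 1 induces question-conditioned notes from each knowledge source separately, and stage 2 reasons over them jointly, which is where the explicit parametric-retrieved synergy would come from.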
Related papers
- SoftPipe: A Soft-Guided Reinforcement Learning Framework for Automated Data Preparation [10.764970149373845]
We introduce SoftPipe, a novel RL framework that replaces rigid "hard constraints" with a flexible "soft guidance" paradigm. We demonstrate that SoftPipe achieves up to a 13.9% improvement in pipeline quality and 2.8× faster convergence compared to existing methods.
arXiv Detail & Related papers (2025-07-18T07:43:22Z) - Knowledgeable-r1: Policy Optimization for Knowledge Exploration in Retrieval-Augmented Generation [6.870247946243668]
Retrieval-augmented generation (RAG) is a mainstream method for improving performance on knowledge-intensive tasks. We propose Knowledgeable-r1, which uses joint sampling and defines multiple policy distributions for knowledge capability exploration. Experiments show that Knowledgeable-r1 significantly enhances robustness and reasoning accuracy on both parametric and contextual conflict tasks.
arXiv Detail & Related papers (2025-06-05T15:34:15Z) - CoRe-MMRAG: Cross-Source Knowledge Reconciliation for Multimodal RAG [53.950029990391066]
CoRe-MMRAG (Cross-source knowledge Reconciliation for Multimodal RAG) is a novel end-to-end framework that effectively reconciles inconsistencies across knowledge sources. Experiments on KB-VQA benchmarks show that CoRe-MMRAG achieves substantial improvements over baseline methods.
arXiv Detail & Related papers (2025-06-03T07:32:40Z) - GenKI: Enhancing Open-Domain Question Answering with Knowledge Integration and Controllable Generation in Large Language Models [75.25348392263676]
Open-domain question answering (OpenQA) represents a cornerstone in natural language processing (NLP). We propose a novel framework named GenKI, which aims to improve OpenQA performance by exploring Knowledge Integration and controllable Generation.
arXiv Detail & Related papers (2025-05-26T08:18:33Z) - CoRAG: Collaborative Retrieval-Augmented Generation [26.748765050034876]
CoRAG is a framework extending Retrieval-Augmented Generation (RAG) models to collaborative settings. We show CoRAG consistently outperforms both parametric collaborative learning methods and locally trained RAG models in low-resource scenarios. This introduces a novel consideration in collaborative RAG: the trade-off between leveraging a collectively enriched knowledge base and the potential risk of incorporating detrimental passages from other clients.
arXiv Detail & Related papers (2025-04-02T16:40:43Z) - RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration [4.604003661048267]
RAG-KG-IL is a novel multi-agent hybrid framework designed to enhance the reasoning capabilities of Large Language Models. It integrates Retrieval-Augmented Generation (RAG) and Knowledge Graphs (KGs) with an Incremental Learning (IL) approach. We evaluate the framework using real-world case studies involving health-related queries.
arXiv Detail & Related papers (2025-03-14T11:50:16Z) - CoAT: Chain-of-Associated-Thoughts Framework for Enhancing Large Language Models Reasoning [0.8192907805418583]
The Chain-of-Associated-Thoughts (CoAT) framework introduces an innovative synergy between the Monte Carlo Tree Search (MCTS) algorithm and a dynamic mechanism for integrating new key information, termed "associative memory". By combining the structured exploration capabilities of MCTS with the adaptive learning capacity of associative memory, CoAT significantly expands the LLM search space, enabling the framework to explore diverse reasoning pathways and dynamically update its knowledge base in real time. Experiments demonstrate that the framework outperforms conventional inference processes in accuracy, coherence, and diversity.
arXiv Detail & Related papers (2025-02-04T15:10:33Z) - Vintix: Action Model via In-Context Reinforcement Learning [72.65703565352769]
We present the first steps toward scaling ICRL by introducing a fixed, cross-domain model capable of learning behaviors through in-context reinforcement learning. Our results demonstrate that Algorithm Distillation, a framework designed to facilitate ICRL, offers a compelling and competitive alternative to expert distillation for constructing versatile action models.
arXiv Detail & Related papers (2025-01-31T18:57:08Z) - Improving Retrieval-Augmented Generation through Multi-Agent Reinforcement Learning [51.54046200512198]
Retrieval-augmented generation (RAG) is extensively utilized to incorporate external, current knowledge into large language models. A standard RAG pipeline may comprise several components, such as query rewriting, document retrieval, document filtering, and answer generation. To optimize these components jointly, we propose treating the RAG pipeline as a multi-agent cooperative task, with each component regarded as an RL agent.
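As a rough illustration of this idea (an assumption based on the summary, not the paper's implementation), the sketch below treats each pipeline component as a tiny bandit-style agent whose chosen strategy is reinforced by one shared end-to-end reward; component names and strategies are hypothetical.

```python
"""Hedged sketch: a RAG pipeline as cooperating agents sharing one terminal
reward. Each component keeps a preference vector over canned strategies and
nudges the preference of its chosen strategy by the shared reward."""
import math
import random
from dataclasses import dataclass, field
from typing import List


@dataclass
class ComponentAgent:
    name: str
    actions: List[str]                      # candidate strategies for this component
    prefs: List[float] = field(default_factory=list)

    def __post_init__(self) -> None:
        self.prefs = [0.0] * len(self.actions)

    def act(self) -> int:
        # Sample a strategy with softmax probabilities over preferences.
        weights = [math.exp(p) for p in self.prefs]
        return random.choices(range(len(self.actions)), weights=weights)[0]

    def update(self, action_idx: int, reward: float, lr: float = 0.1) -> None:
        # Every agent receives the same end-to-end reward (cooperative setting).
        self.prefs[action_idx] += lr * reward


# Hypothetical four-stage pipeline matching the components named above.
pipeline = [
    ComponentAgent("query_rewriting", ["keep query", "expand with synonyms"]),
    ComponentAgent("document_retrieval", ["sparse top-5", "dense top-5"]),
    ComponentAgent("document_filtering", ["keep all", "drop low-overlap passages"]),
    ComponentAgent("answer_generation", ["short answer", "answer with cited spans"]),
]

for episode in range(100):
    chosen = [agent.act() for agent in pipeline]
    # Stand-in reward; in practice this would be answer quality (e.g. EM/F1).
    shared_reward = random.random()
    for agent, idx in zip(pipeline, chosen):
        agent.update(idx, shared_reward)
```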
arXiv Detail & Related papers (2025-01-25T14:24:50Z) - Learning Multi-Branch Cooperation for Enhanced Click-Through Rate Prediction at Taobao [51.84189885218365]
We introduce a novel Multi-Branch Cooperation Network (MBCnet), which enables multiple branch networks to collaborate with each other for better modeling of complex feature interactions. Experiments on large-scale industrial datasets and an online A/B test in the Taobao app demonstrate MBCnet's superior performance.
arXiv Detail & Related papers (2024-11-20T06:10:06Z) - GIVE: Structured Reasoning of Large Language Models with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning method that merges parametric and non-parametric memories to improve reasoning accuracy with minimal external input. GIVE guides the LLM agent to select the most pertinent expert data (observe), engage in query-specific divergent thinking (reflect), and then synthesize this information to produce the final output (speak).
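A hedged sketch of that observe / reflect / speak loop, inferred from the summary alone; `call_llm` and `expert_kg_lookup` are hypothetical stand-ins for an LLM API and a knowledge-graph retriever.

```python
"""Hedged sketch of an observe / reflect / speak flow (assumed structure)."""
from typing import Callable, List


def give_style_answer(
    query: str,
    call_llm: Callable[[str], str],
    expert_kg_lookup: Callable[[str], List[str]],
) -> str:
    # Observe: select the most pertinent expert facts (non-parametric memory).
    observed_facts = expert_kg_lookup(query)

    # Reflect: query-specific divergent thinking -- extrapolate plausible
    # relations beyond the sparse facts, drawing on parametric memory.
    reflection = call_llm(
        "Given these knowledge-graph facts, hypothesize additional relations "
        "that could bear on the question.\n"
        f"Question: {query}\nFacts:\n" + "\n".join(observed_facts)
    )

    # Speak: synthesize observed facts and reflections into the final output.
    return call_llm(
        f"Question: {query}\n"
        "Verified facts:\n" + "\n".join(observed_facts) + "\n"
        f"Hypothesized relations:\n{reflection}\n"
        "Give a concise, well-supported answer."
    )
```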
arXiv Detail & Related papers (2024-10-11T03:05:06Z) - Pistis-RAG: Enhancing Retrieval-Augmented Generation with Human Feedback [41.88662700261036]
RAG systems face limitations when semantic relevance alone does not guarantee improved generation quality.
We propose Pistis-RAG, a new RAG framework designed with a content-centric approach to better align LLMs with human preferences.
arXiv Detail & Related papers (2024-06-21T08:52:11Z) - KG-RAG: Bridging the Gap Between Knowledge and Creativity [0.0]
Large Language Model Agents (LMAs) face issues such as information hallucinations, catastrophic forgetting, and limitations in processing long contexts.
This paper introduces a KG-RAG (Knowledge Graph-Retrieval Augmented Generation) pipeline to enhance the knowledge capabilities of LMAs.
Preliminary experiments on the ComplexWebQuestions dataset demonstrate notable improvements in the reduction of hallucinated content.
arXiv Detail & Related papers (2024-05-20T14:03:05Z) - MAgIC: Investigation of Large Language Model Powered Multi-Agent in Cognition, Adaptability, Rationality and Collaboration [98.18244218156492]
Large Language Models (LLMs) have significantly advanced natural language processing. As their applications expand into multi-agent environments, there arises a need for a comprehensive evaluation framework. This work introduces a novel competition-based benchmark framework to assess LLMs within multi-agent settings.
arXiv Detail & Related papers (2023-11-14T21:46:27Z) - RACA: Relation-Aware Credit Assignment for Ad-Hoc Cooperation in Multi-Agent Deep Reinforcement Learning [55.55009081609396]
We propose a novel method, called Relation-Aware Credit Assignment (RACA), which achieves zero-shot generalization in ad-hoc cooperation scenarios.
RACA takes advantage of a graph-based relation encoder to encode the topological structure between agents.
Our method outperforms baseline methods on the StarCraft II micromanagement benchmark and in ad-hoc cooperation scenarios.
arXiv Detail & Related papers (2022-06-02T03:39:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.