CoDA: Agentic Systems for Collaborative Data Visualization
- URL: http://arxiv.org/abs/2510.03194v1
- Date: Fri, 03 Oct 2025 17:30:16 GMT
- Title: CoDA: Agentic Systems for Collaborative Data Visualization
- Authors: Zichen Chen, Jiefeng Chen, Sercan Ö. Arik, Misha Sra, Tomas Pfister, Jinsung Yoon
- Abstract summary: Deep research has revolutionized data analysis, yet data scientists still devote substantial time to manually crafting visualizations. Existing approaches, including simple single- or multi-agent systems, often oversimplify the task. We introduce CoDA, a multi-agent system that employs specialized LLM agents for metadata analysis, task planning, code generation, and self-reflection.
- Score: 57.270599188947294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep research has revolutionized data analysis, yet data scientists still devote substantial time to manually crafting visualizations, highlighting the need for robust automation from natural language queries. However, current systems struggle with complex datasets containing multiple files and iterative refinement. Existing approaches, including simple single- or multi-agent systems, often oversimplify the task, focusing on initial query parsing while failing to robustly manage data complexity, code errors, or final visualization quality. In this paper, we reframe this challenge as a collaborative multi-agent problem. We introduce CoDA, a multi-agent system that employs specialized LLM agents for metadata analysis, task planning, code generation, and self-reflection. We formalize this pipeline, demonstrating how metadata-focused analysis bypasses token limits and quality-driven refinement ensures robustness. Extensive evaluations show CoDA achieves substantial gains in the overall score, outperforming competitive baselines by up to 41.5%. This work demonstrates that the future of visualization automation lies not in isolated code generation but in integrated, collaborative agentic workflows.
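The abstract describes CoDA as a pipeline of specialized agents (metadata analysis, task planning, code generation, self-reflection) with a quality-driven refinement loop. The sketch below illustrates how such an orchestration could be wired together; it is an assumption-laden skeleton, not the authors' implementation. Every function, field name, and threshold is hypothetical, and each stub would in practice be an LLM call with a role-specific prompt.

```python
# Illustrative CoDA-style orchestration skeleton (hypothetical names, not the paper's code).
from dataclasses import dataclass, field


@dataclass
class PipelineState:
    query: str                                     # natural-language visualization request
    metadata: dict = field(default_factory=dict)   # schemas/stats only, not raw rows (to stay under token limits)
    plan: list = field(default_factory=list)       # ordered visualization subtasks
    code: str = ""                                 # generated plotting code
    score: float = 0.0                             # quality score from the self-reflection step
    feedback: str = ""                             # critique fed into the next refinement round


def metadata_agent(state: PipelineState, files: list[str]) -> None:
    # Summarize each file's schema and statistics instead of passing raw data to the LLM.
    state.metadata = {f: {"columns": [], "n_rows": 0} for f in files}  # stub


def planning_agent(state: PipelineState) -> None:
    # Decompose the query into concrete charting subtasks given the metadata.
    state.plan = [f"plot for: {state.query}"]  # stub


def coding_agent(state: PipelineState) -> None:
    # Emit executable plotting code for the planned subtasks.
    state.code = "# plotting code would be generated here"  # stub


def reflection_agent(state: PipelineState) -> None:
    # Execute/inspect the output, then assign a quality score and textual feedback.
    state.score, state.feedback = 1.0, "looks fine"  # stub


def run_pipeline(query: str, files: list[str],
                 threshold: float = 0.8, max_rounds: int = 3) -> PipelineState:
    state = PipelineState(query=query)
    metadata_agent(state, files)
    planning_agent(state)
    for _ in range(max_rounds):            # quality-driven refinement loop
        coding_agent(state)
        reflection_agent(state)
        if state.score >= threshold:       # stop once the critique is satisfied
            break
    return state


if __name__ == "__main__":
    result = run_pipeline("monthly sales by region", ["sales.csv", "regions.csv"])
    print(result.score, result.code)
```

The loop structure reflects the abstract's claim that robustness comes from iterating code generation against a self-reflection critique rather than from a single generation pass; the threshold and round limit here are placeholders.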
Related papers
- A Hierarchical Multi-Agent System for Autonomous Discovery in Geoscientific Data Archives [0.0]
PANGAEA-GPT is a hierarchical multi-agent framework designed for autonomous data discovery and analysis. Unlike standard Large Language Model (LLM) wrappers, our architecture implements a centralized Supervisor-Worker topology. We demonstrate the system's capacity to execute complex, multi-step workflows within a deterministic runtime with minimal human intervention.
arXiv Detail & Related papers (2026-02-24T20:37:38Z)
- DataCross: A Unified Benchmark and Agent Framework for Cross-Modal Heterogeneous Data Analysis [8.171937411588015]
We introduce DataCross, a novel benchmark and collaborative agent framework for unified, insight-driven analysis. DataCrossBench comprises 200 end-to-end analysis tasks across finance, healthcare, and other domains. We also propose the DataCrossAgent framework, inspired by the "divide-and-synthesis" workflow of human analysts.
arXiv Detail & Related papers (2026-01-29T08:40:45Z)
- Advances and Frontiers of LLM-based Issue Resolution in Software Engineering: A Comprehensive Survey [59.3507264893654]
Issue resolution is a complex Software Engineering task integral to real-world development. Benchmarks like SWE-bench revealed this task as profoundly difficult for large language models. This paper presents a systematic survey of this emerging domain.
arXiv Detail & Related papers (2026-01-15T18:55:03Z)
- Multi-Agent Systems for Dataset Adaptation in Software Engineering: Capabilities, Limitations, and Future Directions [8.97512410819274]
This paper presents the first empirical study on how state-of-the-art multi-agent systems perform in dataset adaptation tasks. We evaluate GitHub Copilot on adapting SE research artifacts from benchmark repositories including ROCODE and LogHub2.0. Results show that current systems can identify key files and generate partial adaptations but rarely produce correct implementations.
arXiv Detail & Related papers (2025-11-26T13:26:11Z)
- Increasing LLM Coding Capabilities through Diverse Synthetic Coding Tasks [41.75017840131367]
Large language models (LLMs) have shown impressive promise in code generation. We present a scalable synthetic data generation pipeline that produces nearly 800k instruction-reasoning-code-test quadruplets.
arXiv Detail & Related papers (2025-10-27T10:54:25Z)
- Synthesizing Agentic Data for Web Agents with Progressive Difficulty Enhancement Mechanisms [81.90219895125178]
Web-based 'deep research' agents aim to solve complex question-answering tasks through long-horizon interactions with online tools. These tasks remain challenging, as the underlying language models are often not optimized for long-horizon reasoning. We introduce a two-pronged data synthesis pipeline that generates question-answer pairs by progressively increasing complexity.
arXiv Detail & Related papers (2025-10-15T06:34:46Z)
- Benchmarking Deep Search over Heterogeneous Enterprise Data [73.55304268238474]
We present a new benchmark for evaluating a form of retrieval-augmented generation (RAG). This form of RAG requires source-aware, multi-hop reasoning over diverse, sparse, but related sources. We build it using a synthetic data pipeline that simulates business workflows across product planning, development, and support stages.
arXiv Detail & Related papers (2025-06-29T08:34:59Z)
- Deep Research Agents: A Systematic Examination And Roadmap [109.53237992384872]
Deep Research (DR) agents are designed to tackle complex, multi-turn informational research tasks. In this paper, we conduct a detailed analysis of the foundational technologies and architectural components that constitute DR agents.
arXiv Detail & Related papers (2025-06-22T16:52:48Z)
- TRAIL: Trace Reasoning and Agentic Issue Localization [5.025960714013197]
This work articulates the need for robust and dynamic evaluation methods for agentic workflow traces. We present a set of 148 large human-annotated traces (TRAIL) constructed using this taxonomy and grounded in established agentic benchmarks. To ensure ecological validity, we curate traces from both single and multi-agent systems.
arXiv Detail & Related papers (2025-05-13T14:55:31Z)
- The AI Co-Ethnographer: How Far Can Automation Take Qualitative Research? [51.40252017262535]
The AI Co-Ethnographer (AICoE) is a novel end-to-end pipeline developed for qualitative research. AICoE organizes the entire process, encompassing open coding, code consolidation, code application, and even pattern discovery.
arXiv Detail & Related papers (2025-04-21T21:31:28Z)
- DatawiseAgent: A Notebook-Centric LLM Agent Framework for Adaptive and Robust Data Science Automation [10.390461679868197]
We introduce DatawiseAgent, a notebook-centric large language model (LLM) agent framework for adaptive and robust data science automation. Inspired by how human data scientists work in computational notebooks, DatawiseAgent introduces a unified interaction representation and a multi-stage architecture.
arXiv Detail & Related papers (2025-03-10T08:32:33Z)
- What are the Essential Factors in Crafting Effective Long Context Multi-Hop Instruction Datasets? Insights and Best Practices [91.71951459594074]
Large language models (LLMs) with extended context windows have significantly improved tasks such as information extraction, question answering, and complex planning scenarios. Existing methods typically utilize the Self-Instruct framework to generate instruction tuning data for better long-context capability. We propose the Multi-agent Interactive Multi-hop Generation framework, incorporating a Quality Verification Agent, a Single-hop Question Generation Agent, a Multiple Question Sampling Strategy, and a Multi-hop Question Merger Agent. Our findings show that our synthetic high-quality long-context instruction data significantly enhances model performance, even surpassing models trained on larger amounts of human-annotated data.
arXiv Detail & Related papers (2024-09-03T13:30:00Z)
- DCA-Bench: A Benchmark for Dataset Curation Agents [9.60250892491588]
Data quality issues, such as incomplete documentation, inaccurate labels, ethical concerns, and outdated information, remain common in widely used datasets. With the surging capabilities of large language models (LLMs), it is promising to streamline the discovery of hidden dataset issues with LLM agents. In this work, we establish a benchmark to measure LLM agents' ability to tackle this challenge.
arXiv Detail & Related papers (2024-06-11T14:02:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.