ChartEditor: A Reinforcement Learning Framework for Robust Chart Editing
- URL: http://arxiv.org/abs/2511.15266v1
- Date: Wed, 19 Nov 2025 09:27:37 GMT
- Title: ChartEditor: A Reinforcement Learning Framework for Robust Chart Editing
- Authors: Liangyu Chen, Yichen Xu, Jianzhe Ma, Yuqi Liu, Donglu Yang, Liang Zhang, Wenxuan Wang, Qin Jin
- Abstract summary: We present ChartEditVista, a comprehensive benchmark consisting of 7,964 samples spanning 31 chart categories. The inputs in ChartEditVista include only the original chart image and natural language editing instructions, without the original chart code. We also present ChartEditor, a model trained using a reinforcement learning framework that incorporates a novel rendering reward to simultaneously enforce code executability and visual fidelity.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Chart editing reduces manual effort in visualization design. Typical benchmarks are limited in data diversity and assume access to the complete chart code, which is seldom available in real-world scenarios. To address this gap, we present ChartEditVista, a comprehensive benchmark consisting of 7,964 samples spanning 31 chart categories. It encompasses diverse editing instructions and covers nearly all editable chart elements. The inputs in ChartEditVista include only the original chart image and natural language editing instructions, without the original chart code. ChartEditVista is generated through a fully automated pipeline that produces, edits, and verifies charts, ensuring high-quality chart editing data. In addition, we introduce two novel fine-grained, rule-based evaluation metrics: a layout metric, which evaluates the position, size, and color of graphical components; and a text metric, which jointly assesses textual content and font styling. Building on ChartEditVista, we present ChartEditor, a model trained with a reinforcement learning framework that incorporates a novel rendering reward to simultaneously enforce code executability and visual fidelity. Through extensive experiments and human evaluations, we demonstrate that ChartEditVista provides a robust evaluation, while ChartEditor consistently outperforms models of similar and larger scale on chart editing tasks.
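The abstract describes the training signal only at a high level: zero reward for non-executable code, otherwise a visual-fidelity score against the target chart. The sketch below is a hypothetical Python rendition of that idea, not the paper's actual implementation; all function names are invented here, and the pixel-level similarity is a crude stand-in for whatever fidelity measure the authors use.

```python
# Hypothetical sketch of a "rendering reward": zero if the generated chart
# code fails to run, otherwise a visual-fidelity score against the target
# chart image. Names and the pixel-level similarity are assumptions; the
# paper's reward is not specified at this level of detail.
import subprocess
import sys
import tempfile
from pathlib import Path

import numpy as np
from PIL import Image


def render_chart(code: str, out_png: Path, timeout: int = 30) -> bool:
    """Execute candidate matplotlib code in a subprocess and save the figure."""
    script = (
        "import matplotlib\nmatplotlib.use('Agg')\n"  # headless rendering
        + code
        + f"\nimport matplotlib.pyplot as plt\nplt.savefig({str(out_png)!r})\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script)
        script_path = f.name
    try:
        proc = subprocess.run([sys.executable, script_path],
                              capture_output=True, timeout=timeout)
        return proc.returncode == 0 and out_png.exists()
    except subprocess.TimeoutExpired:
        return False


def visual_fidelity(rendered: Path, target: Path, size=(256, 256)) -> float:
    """Crude mean-absolute pixel similarity in [0, 1] after resizing."""
    a = np.asarray(Image.open(rendered).convert("RGB").resize(size), np.float32)
    b = np.asarray(Image.open(target).convert("RGB").resize(size), np.float32)
    return 1.0 - float(np.abs(a - b).mean()) / 255.0


def rendering_reward(code: str, target_image: Path) -> float:
    """Executability gate first, then visual fidelity, mirroring the abstract."""
    out_png = Path(tempfile.mkdtemp()) / "candidate.png"
    if not render_chart(code, out_png):
        return 0.0  # non-executable code earns no reward
    return visual_fidelity(out_png, target_image)
```

Gating the reward on executability keeps the policy from drifting toward plausible-looking but broken code, while the fidelity term rewards edits that move the rendered pixels toward the target; ChartEditVista's layout and text metrics then evaluate results at a finer grain than this pixel-level stand-in.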
Related papers
- ChartE$^{3}$: A Comprehensive Benchmark for End-to-End Chart Editing
ChartE$^{3}$ is an end-to-end chart editing benchmark. It directly evaluates models without relying on intermediate natural language programs or code-level supervision. It contains over 1,200 high-quality samples constructed via a well-designed data pipeline with human curation.
arXiv Detail & Related papers (2026-01-29T13:29:27Z)
- Charts Are Not Images: On the Challenges of Scientific Chart Editing
*FigEdit* is a benchmark for scientific figure editing comprising over 30,000 samples. Our benchmark demonstrates the profound limitations of pixel-level manipulation. By releasing *FigEdit*, we aim to enable systematic progress in structure-aware figure editing.
arXiv Detail & Related papers (2025-11-30T06:13:48Z)
- ChartLens: Fine-grained Visual Attribution in Charts
Post-Hoc Visual Attribution for Charts identifies fine-grained chart elements that validate a given chart-associated response. We propose ChartLens, a novel chart attribution algorithm that uses segmentation-based techniques to identify chart objects. Our evaluations show that ChartLens improves fine-grained attributions by 26-66%.
arXiv Detail & Related papers (2025-05-25T23:17:32Z)
- ChartEdit: How Far Are MLLMs From Automating Chart Analysis? Evaluating MLLMs' Capability via Chart Editing
Multimodal large language models (MLLMs) show promise in generating chart rendering code, but editing charts via code presents a greater challenge. We propose ChartEdit, a novel benchmark designed for chart editing tasks. We evaluate the performance of 10 mainstream MLLMs across two types of experiments at both the code and chart levels.
arXiv Detail & Related papers (2025-05-17T09:47:15Z)
- AskChart: Universal Chart Understanding through Textual Enhancement
State-of-the-art approaches primarily focus on visual cues from chart images, failing to explicitly incorporate the rich textual information embedded within charts. We introduce AskChart, a universal model that explicitly integrates both textual and visual cues from charts using a Mixture of Experts (MoE) architecture (a generic MoE sketch follows this list).
arXiv Detail & Related papers (2024-12-26T09:59:43Z)
- ChartReformer: Natural Language-Driven Chart Image Editing
We propose ChartReformer, a natural language-driven chart image editing solution that directly edits charts from input images according to the given instruction prompts.
To generalize ChartReformer, we define and standardize various types of chart editing, covering style, layout, format, and data-centric edits.
arXiv Detail & Related papers (2024-03-01T00:59:50Z)
- ChartAssisstant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning
ChartAssistant is a vision-language model for universal chart comprehension and reasoning.
It undergoes a two-stage training process, starting with pre-training on chart-to-table parsing to align chart and text.
Experimental results demonstrate significant performance gains over the state-of-the-art UniChart and ChartLlama methods.
arXiv Detail & Related papers (2024-01-04T17:51:48Z)
- ChartReader: A Unified Framework for Chart Derendering and Comprehension without Heuristic Rules
We present ChartReader, a unified framework that seamlessly integrates chart derendering and comprehension tasks.
Our approach includes a transformer-based chart component detection module and an extended pre-trained vision-language model for chart-to-X tasks.
Our proposed framework can significantly reduce the manual effort involved in chart analysis, providing a step towards a universal chart understanding model.
arXiv Detail & Related papers (2023-04-05T00:25:27Z)
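The AskChart entry above mentions a Mixture of Experts (MoE) architecture for combining textual and visual cues. As a generic illustration of the MoE idea only (this is not AskChart's code; the top-2 routing, expert count, and expert sizes are assumptions), a minimal sparse MoE layer in PyTorch looks like this:

```python
# Minimal sparse Mixture-of-Experts layer: a gating network scores experts per
# token, only the top-k experts run, and their outputs are mixed with the
# renormalized gate weights. Generic sketch, not AskChart's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)  # per-token routing scores
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim)
        scores = self.gate(x)                           # (B, T, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # route to top-k experts
        weights = F.softmax(weights, dim=-1)            # renormalize over top-k
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = idx[..., k] == e                 # tokens sent to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out


# Shape check: tokens go in and come out with the same dimensionality.
layer = MoELayer(dim=64)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

In a multimodal setting like the one AskChart describes, one would expect the router to learn to send text-heavy chart tokens to text-specialized experts; that routing behavior is our speculation, not a claim from the paper.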