Effective Training Data Synthesis for Improving MLLM Chart Understanding
- URL: http://arxiv.org/abs/2508.06492v1
- Date: Fri, 08 Aug 2025 17:59:10 GMT
- Title: Effective Training Data Synthesis for Improving MLLM Chart Understanding
- Authors: Yuwei Yang, Zeyu Zhang, Yunzhong Hou, Zhuowan Li, Gaowen Liu, Ali Payani, Yuan-Sen Ting, Liang Zheng
- Abstract summary: We show that modularizing chart generation and diversifying visual details improves chart understanding capabilities. In particular, we design a five-step data synthesis pipeline, where we separate data and function creation for single plot generation. This approach allows us to streamline the generation of fine-tuning datasets.
- Score: 21.347586170711608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Being able to effectively read scientific plots, or chart understanding, is central to building effective agents for science. However, existing multimodal large language models (MLLMs), especially open-source ones, still fall behind, with a typical success rate of 30%-50% on challenging benchmarks. Previous studies on fine-tuning MLLMs with synthetic charts are often restricted by their inadequate similarity to real charts, which could compromise model training and performance on complex real-world charts. In this study, we show that modularizing chart generation and diversifying visual details improves chart understanding capabilities. In particular, we design a five-step data synthesis pipeline, where we separate data and function creation for single plot generation, condition the generation of later subplots on earlier ones for multi-subplot figures, visually diversify the generated figures, filter out low-quality data, and finally generate the question-answer (QA) pairs with GPT-4o. This approach allows us to streamline the generation of fine-tuning datasets and introduce the effective chart dataset (ECD), which contains 10k+ chart images and 300k+ QA pairs, covering 25 topics and featuring 250+ chart type combinations with high visual complexity. We show that ECD consistently improves the performance of various MLLMs on a range of real-world and synthetic test sets. Code, data and models are available at: https://github.com/yuweiyang-anu/ECD.
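The five-step pipeline described in the abstract can be sketched in outline as follows. This is a minimal, hypothetical illustration of the structure only: the function bodies are placeholders (the paper's actual implementation renders real figures and calls GPT-4o for QA generation), and all names here are invented for illustration.

```python
import random

# Hypothetical sketch of the five-step synthesis pipeline from the abstract.
# Step names follow the paper; internals are illustrative placeholders.

def create_data(topic, rng):
    """Step 1a: synthesize tabular data independently of plotting code."""
    return {"topic": topic, "x": list(range(5)),
            "y": [round(rng.uniform(0, 1), 3) for _ in range(5)]}

def create_plot_function(chart_type):
    """Step 1b: choose a rendering function, decoupled from the data."""
    return lambda data: {"chart_type": chart_type, "data": data}

def add_conditioned_subplot(figure, chart_type, rng):
    """Step 2: condition a new subplot on subplots generated so far."""
    prev = figure["subplots"][-1]
    data = create_data(prev["data"]["topic"], rng)  # reuse topic for coherence
    figure["subplots"].append(create_plot_function(chart_type)(data))
    return figure

def diversify(figure, rng):
    """Step 3: randomize visual details (style, palette, etc.)."""
    figure["style"] = rng.choice(["default", "ggplot", "seaborn"])
    return figure

def passes_filter(figure):
    """Step 4: drop low-quality figures (placeholder check)."""
    return all(len(sp["data"]["y"]) > 0 for sp in figure["subplots"])

def generate_qa(figure):
    """Step 5: stand-in for GPT-4o-based QA pair generation."""
    sp = figure["subplots"][0]
    return [{"q": f"How many points are in the first {sp['chart_type']} plot?",
             "a": str(len(sp["data"]["x"]))}]

def synthesize(topic, chart_types, seed=0):
    rng = random.Random(seed)
    first = create_plot_function(chart_types[0])(create_data(topic, rng))
    figure = {"subplots": [first]}
    for ct in chart_types[1:]:
        add_conditioned_subplot(figure, ct, rng)
    figure = diversify(figure, rng)
    return (figure, generate_qa(figure)) if passes_filter(figure) else (None, [])

figure, qa = synthesize("astronomy", ["line", "bar"])
```

The key design point the abstract emphasizes is the decoupling in step 1: data creation and plotting-function creation are separate, so the same data can be rendered many ways and the same renderer reused across datasets, which is what makes the visual diversification in step 3 cheap to scale.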
Related papers
- PlotCraft: Pushing the Limits of LLMs for Complex and Interactive Data Visualization [82.96200364977737]
We introduce PlotCraft, a new benchmark featuring 1k challenging visualization tasks. PlotCraft is structured around seven high-level visualization tasks and encompasses 48 distinct chart types. It is the first to systematically evaluate both single-turn generation and multi-turn refinement across a diverse spectrum of task complexities.
arXiv Detail & Related papers (2025-10-15T10:14:39Z) - ChartMaster: Advancing Chart-to-Code Generation with Real-World Charts and Chart Similarity Reinforcement Learning [64.4193334712998]
The chart-to-code generation task requires MLLMs to convert chart images into executable code. This task faces two main challenges: limited data diversity and the difficulty of maintaining visual consistency between generated charts and the original ones. We propose ReChartPrompt, leveraging real-world, human-designed charts extracted from arXiv papers as prompts. We also propose ChartSimRL, a GRPO-based reinforcement learning algorithm guided by a novel chart similarity reward.
arXiv Detail & Related papers (2025-08-25T02:32:56Z) - BigCharts-R1: Enhanced Chart Reasoning with Visual Reinforcement Finetuning [51.472854950300416]
We propose BigCharts, a dataset creation pipeline that generates visually diverse chart images. Unlike purely synthetic datasets, BigCharts incorporates real-world data, ensuring authenticity and visual diversity. By introducing novel reward signals specifically designed for chart reasoning, our approach enhances model robustness and generalization.
arXiv Detail & Related papers (2025-08-13T13:39:17Z) - BRIDGES: Bridging Graph Modality and Large Language Models within EDA Tasks [12.683482535955314]
LLM performance suffers when graphs are represented as sequential text. We introduce BRIDGES, a framework designed to incorporate graph modality into LLMs for EDA tasks. Results demonstrate 2x to 10x improvements across multiple tasks compared to text-only baselines.
arXiv Detail & Related papers (2025-04-07T15:27:32Z) - RefChartQA: Grounding Visual Answer on Chart Images through Instruction Tuning [63.599057862999]
RefChartQA is a novel benchmark that integrates Chart Question Answering (ChartQA) with visual grounding. Our experiments demonstrate that incorporating spatial awareness via grounding improves response accuracy by over 15%.
arXiv Detail & Related papers (2025-03-29T15:50:08Z) - Text2Chart31: Instruction Tuning for Chart Generation with Automatic Feedback [37.275533538711436]
We propose a hierarchical pipeline and a new dataset for chart generation. Our dataset, Text2Chart31, includes 31 unique plot types referring to the Matplotlib library. We introduce a reinforcement learning-based instruction tuning technique for chart generation tasks without requiring human feedback.
arXiv Detail & Related papers (2024-10-05T07:25:56Z) - SynChart: Synthesizing Charts from Language Models [50.73888371511983]
This work explores the potential of using LLMs alone for data generation and develops competitive multi-modality models focusing on chart understanding.
We construct a large-scale chart dataset, SynChart, which contains approximately 4 million diverse chart images with over 75 million dense annotations.
We train a 4.2B chart-expert model using this dataset and achieve near-GPT-4o performance on the ChartQA task, surpassing GPT-4V.
arXiv Detail & Related papers (2024-09-25T00:18:12Z) - Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning [1.6570772838074355]
Multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA).
Recent efforts primarily focus on scaling up training datasets through data collection and synthesis.
We propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development.
arXiv Detail & Related papers (2024-07-29T17:04:34Z) - Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present a code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z) - Graph Contrastive Learning Automated [94.41860307845812]
Graph contrastive learning (GraphCL) has emerged with promising representation learning performance.
The effectiveness of GraphCL hinges on ad-hoc data augmentations, which have to be manually picked per dataset.
This paper proposes a unified bi-level optimization framework to automatically, adaptively and dynamically select data augmentations when performing GraphCL on specific graph data.
arXiv Detail & Related papers (2021-06-10T16:35:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.