ChartSumm: A Comprehensive Benchmark for Automatic Chart Summarization
of Long and Short Summaries
- URL: http://arxiv.org/abs/2304.13620v3
- Date: Sun, 11 Jun 2023 04:07:27 GMT
- Title: ChartSumm: A Comprehensive Benchmark for Automatic Chart Summarization
of Long and Short Summaries
- Authors: Raian Rahman, Rizvi Hasan, Abdullah Al Farhad, Md Tahmid Rahman
Laskar, Md. Hamjajul Ashmafee, Abu Raihan Mostofa Kamal
- Abstract summary: Automatic chart-to-text summarization is an effective tool for visually impaired people.
In this paper, we propose ChartSumm: a large-scale benchmark dataset consisting of a total of 84,363 charts.
- Score: 0.26097841018267615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic chart-to-text summarization is an effective tool for visually
impaired people, and it also provides users with precise insights into tabular
data in natural language. A large, well-structured dataset is always a key
ingredient for data-driven models. In this paper, we propose ChartSumm: a
large-scale benchmark dataset consisting of a total of 84,363 charts, along with
their metadata and descriptions, covering a wide range of topics and chart types
for generating short and long summaries. Extensive experiments with strong
baseline models show that although these models generate fluent and informative
summaries, achieving decent scores on various automatic evaluation metrics, they
often suffer from hallucination, miss important data points, and explain complex
trends in the charts incorrectly. We also investigate the potential of expanding
ChartSumm to other languages using automated translation tools. These issues make
our dataset a challenging benchmark for future research.
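To make the task concrete, here is a minimal sketch of what scoring a generated chart summary against a reference could look like. The record fields (`title`, `metadata`, `short_summary`, `long_summary`) are illustrative assumptions rather than ChartSumm's actual schema, and ROUGE (via the `rouge_score` package) stands in for the several automatic metrics the paper evaluates with.

```python
# A minimal sketch of scoring a generated chart summary against a reference.
# NOTE: the record schema below is a hypothetical illustration, not the
# actual ChartSumm format; ROUGE is one of several common automatic metrics.
from rouge_score import rouge_scorer  # pip install rouge-score

record = {
    "title": "Annual revenue of Example Corp, 2015-2020",  # hypothetical chart
    "metadata": {"chart_type": "bar", "x_axis": "Year", "y_axis": "Revenue (USD bn)"},
    "short_summary": "Example Corp's revenue grew steadily from 2015 to 2020.",
    "long_summary": (
        "Example Corp's annual revenue rose from 2.1 billion USD in 2015 "
        "to 4.8 billion USD in 2020, with the largest jump in 2019."
    ),
}

generated = "Example Corp's revenue increased every year between 2015 and 2020."

scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(record["short_summary"], generated)
print(scores["rougeL"].fmeasure)  # surface overlap; cannot detect hallucination
```

Note that a surface-overlap metric like ROUGE would score a summary with a hallucinated number nearly as well as a faithful one, which is exactly the gap between "decent metric scores" and the error modes the abstract describes.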
Related papers
- Text2Chart31: Instruction Tuning for Chart Generation with Automatic Feedback [37.275533538711436]
We propose a hierarchical pipeline and a new dataset for chart generation.
Our dataset, Text2Chart31, includes 31 unique plot types based on the Matplotlib library.
We introduce a reinforcement learning-based instruction tuning technique for chart generation tasks without requiring human feedback.
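One plausible form of such human-free feedback is executing the generated plotting code and rewarding runs that succeed. The sketch below illustrates that idea only; the reward scheme and the `automatic_reward` function are my own assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch: automatic (human-free) feedback for chart generation,
# rewarding generated Matplotlib code that executes without errors.
import matplotlib
matplotlib.use("Agg")  # headless backend so execution needs no display

def automatic_reward(generated_code: str) -> float:
    """Return 1.0 if the generated plotting code runs cleanly, else 0.0.
    A real system would add finer-grained checks (axes, labels, data)."""
    try:
        exec(generated_code, {})  # sandboxing omitted for brevity
        return 1.0
    except Exception:
        return 0.0

code = "import matplotlib.pyplot as plt\nplt.bar(['a', 'b'], [1, 2])\nplt.close()"
print(automatic_reward(code))  # 1.0 -> usable as a reward signal for RL tuning
```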
arXiv Detail & Related papers (2024-10-05T07:25:56Z) - On Pre-training of Multimodal Language Models Customized for Chart Understanding [83.99377088129282]
This paper explores the training processes necessary to improve MLLMs' comprehension of charts.
We introduce CHOPINLLM, an MLLM tailored for in-depth chart comprehension.
arXiv Detail & Related papers (2024-07-19T17:58:36Z) - ChartThinker: A Contextual Chain-of-Thought Approach to Optimized Chart Summarization [32.19963543411396]
This study constructs a large-scale dataset of comprehensive chart-caption pairs, together with fine-tuning instructions for each chart.
We propose an innovative chart summarization method, ChartThinker, which synthesizes deep analysis based on chains of thought.
Built upon the curated datasets, our trained model consistently exhibits superior performance in chart summarization tasks.
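For intuition, a chain-of-thought prompt for chart summarization might decompose the task into explicit reasoning steps before the final summary. The wording and structure below are my own illustration of the general technique, not ChartThinker's actual prompts.

```python
# Illustrative chain-of-thought prompt for chart summarization.
# The wording and step list are assumptions, not ChartThinker's real prompts.
def build_cot_prompt(chart_table: str) -> str:
    return (
        "You are given the data table underlying a chart:\n"
        f"{chart_table}\n\n"
        "Reason step by step before summarizing:\n"
        "1. Identify the chart's topic and units.\n"
        "2. Find the extreme values and the overall trend.\n"
        "3. Note any outliers or reversals in the trend.\n"
        "Finally, write a two-sentence summary grounded only in the table."
    )

print(build_cot_prompt("Year,Sales\n2021,10\n2022,14\n2023,13"))
```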
arXiv Detail & Related papers (2024-03-17T14:49:09Z) - ChartAssisstant: A Universal Chart Multimodal Language Model via
Chart-to-Table Pre-training and Multitask Instruction Tuning [54.89249749894061]
ChartAssistant is a vision-language model for universal chart comprehension and reasoning.
It undergoes a two-stage training process, starting with pre-training on chart-to-table parsing to align chart and text.
Experimental results demonstrate significant performance gains over the state-of-the-art UniChart and ChartLlama methods.
arXiv Detail & Related papers (2024-01-04T17:51:48Z) - ChartLlama: A Multimodal LLM for Chart Understanding and Generation [70.1393163657813]
We create a high-quality instruction-tuning dataset leveraging GPT-4.
Next, we introduce ChartLlama, a multimodal large language model trained on this dataset.
arXiv Detail & Related papers (2023-11-27T15:20:23Z) - StructChart: Perception, Structuring, Reasoning for Visual Chart
Understanding [58.38480335579541]
Current chart-related tasks focus either on chart perception, i.e., extracting information from visual charts, or on reasoning over the extracted data.
In this paper, we aim to establish a unified and label-efficient learning paradigm for joint perception and reasoning tasks.
Experiments are conducted on various chart-related tasks, demonstrating the effectiveness and promising potential for a unified chart perception-reasoning paradigm.
arXiv Detail & Related papers (2023-09-20T12:51:13Z) - UniChart: A Universal Vision-language Pretrained Model for Chart
Comprehension and Reasoning [29.947053208614246]
We present UniChart, a pretrained model for chart comprehension and reasoning.
UniChart encodes the relevant text, data, and visual elements of charts and then uses a chart-grounded text decoder to generate the expected output in natural language.
We propose several chart-specific pretraining tasks that include: (i) low-level tasks to extract the visual elements (e.g., bars, lines) and data from charts, and (ii) high-level tasks to acquire chart understanding and reasoning skills.
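These two task families can be pictured as prompted seq2seq targets over the same chart image. The task tokens and target formats below are illustrative assumptions rather than UniChart's actual pretraining format.

```python
# Illustrative (assumed) seq2seq targets for the two families of
# chart pretraining tasks described above; not UniChart's real format.
chart_id = "bar_chart_0001.png"

pretraining_examples = [
    # (i) low-level: extract visual elements / underlying data
    {"image": chart_id, "prompt": "<extract_data>",
     "target": "Year | Sales\n2021 | 10\n2022 | 14\n2023 | 13"},
    # (ii) high-level: chart understanding and reasoning
    {"image": chart_id, "prompt": "<summarize>",
     "target": "Sales rose from 10 in 2021 to a peak of 14 in 2022, then dipped to 13."},
]

for ex in pretraining_examples:
    print(ex["prompt"], "->", ex["target"].splitlines()[0])
```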
arXiv Detail & Related papers (2023-05-24T06:11:17Z) - QTSumm: Query-Focused Summarization over Tabular Data [58.62152746690958]
People primarily consult tables to conduct data analysis or answer specific questions.
We define a new query-focused table summarization task, where text generation models have to perform human-like reasoning.
We introduce a new benchmark named QTSumm for this task, which contains 7,111 human-annotated query-summary pairs over 2,934 tables.
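A query-focused example pairs a table with a question that cannot be answered by copying cells. The field names below are assumptions for illustration, not QTSumm's published schema.

```python
# Hypothetical shape of a query-focused table summarization example;
# field names are assumptions, not QTSumm's published schema.
example = {
    "table": {
        "header": ["Team", "Wins", "Losses"],
        "rows": [["Hawks", "12", "4"], ["Bears", "9", "7"]],
    },
    "query": "Which team performed better, and by how much?",
    "summary": "The Hawks outperformed the Bears, winning three more games "
               "while losing three fewer.",
}
# A model must reason over the table (compare 12-4 vs 9-7) rather than
# merely restate cells, which is what makes the task query-focused.
print(example["query"])
```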
arXiv Detail & Related papers (2023-05-23T17:43:51Z) - Chart-to-Text: A Large-Scale Benchmark for Chart Summarization [9.647079534077472]
We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts.
We explain the dataset construction process and analyze the datasets.
arXiv Detail & Related papers (2022-03-12T17:01:38Z) - Chart-to-Text: Generating Natural Language Descriptions for Charts by
Adapting the Transformer Model [6.320141734801679]
We introduce a new dataset and present a neural model for automatically generating natural language summaries for charts.
The generated summaries provide an interpretation of the chart and convey the key insights found within that chart.
arXiv Detail & Related papers (2020-10-18T23:57:33Z) - Open Graph Benchmark: Datasets for Machine Learning on Graphs [86.96887552203479]
We present the Open Graph Benchmark (OGB) to facilitate scalable, robust, and reproducible graph machine learning (ML) research.
OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains.
For each dataset, we provide a unified evaluation protocol using meaningful application-specific data splits and evaluation metrics.
arXiv Detail & Related papers (2020-05-02T03:09:50Z)
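OGB ships this unified protocol as a Python package, with standardized splits and a per-dataset `Evaluator`. The snippet below uses the public `ogb` API; the random predictions are placeholders standing in for a trained model's scores.

```python
# Loading an OGB dataset with its standardized split and evaluator.
# Uses the real `ogb` package API; the random predictions are placeholders.
import numpy as np
from ogb.graphproppred import GraphPropPredDataset, Evaluator

dataset = GraphPropPredDataset(name="ogbg-molhiv")   # downloads on first use
split_idx = dataset.get_idx_split()                  # standardized train/valid/test
test_idx = split_idx["test"]

evaluator = Evaluator(name="ogbg-molhiv")            # dataset-specific metric (ROC-AUC)
y_true = np.stack([dataset[i][1] for i in test_idx]) # labels, shape (n, 1)
y_pred = np.random.rand(len(test_idx), 1)            # stand-in model scores

print(evaluator.eval({"y_true": y_true, "y_pred": y_pred}))  # {'rocauc': ...}
```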
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.