ChartCheck: Explainable Fact-Checking over Real-World Chart Images
- URL: http://arxiv.org/abs/2311.07453v2
- Date: Fri, 16 Feb 2024 12:14:05 GMT
- Authors: Mubashara Akhtar, Nikesh Subedi, Vivek Gupta, Sahar Tahmasebi, Oana
Cocarascu, Elena Simperl
- Abstract summary: We introduce ChartCheck, a novel, large-scale dataset for explainable fact-checking against real-world charts.
We systematically evaluate ChartCheck using vision-language and chart-to-table models, and propose a baseline to the community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Whilst fact verification has attracted substantial interest in the natural
language processing community, verifying misinforming statements against data
visualizations such as charts has so far been overlooked. Charts are commonly
used in the real world to summarize and communicate key information, but they
can also be easily misused to spread misinformation and promote certain
agendas. In this paper, we introduce ChartCheck, a novel, large-scale dataset
for explainable fact-checking against real-world charts, consisting of 1.7k
charts and 10.5k human-written claims and explanations. We systematically
evaluate ChartCheck using vision-language and chart-to-table models, and
propose a baseline to the community. Finally, we study chart reasoning types
and visual attributes that pose a challenge to these models.
Related papers
- ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild
We introduce ChartGemma, a novel chart understanding and reasoning model developed over PaliGemma.
Rather than relying on underlying data tables, ChartGemma is trained on instruction-tuning data generated directly from chart images.
Our simple approach achieves state-of-the-art results across 5 benchmarks spanning chart summarization, question answering, and fact-checking.
arXiv Detail & Related papers (2024-07-04T22:16:40Z)
- ChartReformer: Natural Language-Driven Chart Image Editing
We propose ChartReformer, a natural language-driven chart image editing solution that directly edits the charts from the input images with the given instruction prompts.
To generalize ChartReformer, we define and standardize various types of chart editing, covering style, layout, format, and data-centric edits.
arXiv Detail & Related papers (2024-03-01T00:59:50Z)
- ChartAssistant: A Universal Chart Multimodal Language Model via Chart-to-Table Pre-training and Multitask Instruction Tuning
ChartAssistant is a vision-language model for universal chart comprehension and reasoning.
It undergoes a two-stage training process, starting with pre-training on chart-to-table parsing to align chart and text.
Experimental results demonstrate significant performance gains over the state-of-the-art UniChart and ChartLlama methods.
arXiv Detail & Related papers (2024-01-04T17:51:48Z)
- Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning
We introduce a comprehensive typology of factual errors in generated chart captions.
A large-scale human annotation effort provides insight into the error patterns and frequencies in captions crafted by various chart captioning models.
Our analysis reveals that even state-of-the-art models, including GPT-4V, frequently produce captions laced with factual inaccuracies.
arXiv Detail & Related papers (2023-12-15T19:16:21Z)
- StructChart: Perception, Structuring, Reasoning for Visual Chart Understanding
Current chart-related tasks focus on either chart perception, which refers to extracting information from visual charts, or on performing reasoning over the extracted data.
In this paper, we aim to establish a unified and label-efficient learning paradigm for joint perception and reasoning tasks.
Experiments are conducted on various chart-related tasks, demonstrating the effectiveness and promising potential for a unified chart perception-reasoning paradigm.
arXiv Detail & Related papers (2023-09-20T12:51:13Z)
- Enhanced Chart Understanding in Vision and Language Task via Cross-modal Pre-training on Plot Table Pairs
We introduce ChartT5, a V+L model that learns how to interpret table information from chart images via cross-modal pre-training on plot table pairs.
Specifically, we propose two novel pre-training objectives: Masked Header Prediction (MHP) and Masked Value Prediction (MVP).
arXiv Detail & Related papers (2023-05-29T22:29:03Z)
- UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning
We present UniChart, a pretrained model for chart comprehension and reasoning.
UniChart encodes the relevant text, data, and visual elements of charts and then uses a chart-grounded text decoder to generate the expected output in natural language.
We propose several chart-specific pretraining tasks that include: (i) low-level tasks to extract the visual elements (e.g., bars, lines) and data from charts, and (ii) high-level tasks to acquire chart understanding and reasoning skills.
arXiv Detail & Related papers (2023-05-24T06:11:17Z)
- ChartReader: A Unified Framework for Chart Derendering and Comprehension without Heuristic Rules
We present ChartReader, a unified framework that seamlessly integrates chart derendering and comprehension tasks.
Our approach includes a transformer-based chart component detection module and an extended pre-trained vision-language model for chart-to-X tasks.
Our proposed framework can significantly reduce the manual effort involved in chart analysis, providing a step towards a universal chart understanding model.
arXiv Detail & Related papers (2023-04-05T00:25:27Z)
- Chart-to-Text: A Large-Scale Benchmark for Chart Summarization
We present Chart-to-text, a large-scale benchmark with two datasets and a total of 44,096 charts.
We explain the dataset construction process and analyze the datasets.
arXiv Detail & Related papers (2022-03-12T17:01:38Z)
- Table2Charts: Recommending Charts by Learning Shared Table Representations
Table2Charts learns common patterns from a large corpus of (table, charts) pairs.
On a large spreadsheet corpus with 165k tables and 266k charts, we show that Table2Charts could learn a shared representation of table fields.
arXiv Detail & Related papers (2020-08-24T15:06:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.