Analyzing the Efficacy of an LLM-Only Approach for Image-based Document
Question Answering
- URL: http://arxiv.org/abs/2309.14389v1
- Date: Mon, 25 Sep 2023 07:01:16 GMT
- Title: Analyzing the Efficacy of an LLM-Only Approach for Image-based Document
Question Answering
- Authors: Nidhi Hegde, Sujoy Paul, Gagan Madan, Gaurav Aggarwal
- Abstract summary: We study the relative contributions of the vision encoder and the language model in document question answering tasks.
Our comprehensive analysis encompasses six diverse benchmark datasets, utilizing LLMs of varying scales.
Our findings reveal that a strategy exclusively reliant on the LLM yields results that are on par with or closely approach state-of-the-art performance.
- Score: 12.064056743478865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent document question answering models consist of two key components: the
vision encoder, which captures layout and visual elements in images, and a
Large Language Model (LLM) that helps contextualize questions to the image and
supplements them with external world knowledge to generate accurate answers.
However, the relative contributions of the vision encoder and the language
model in these tasks remain unclear. This is especially interesting given the
effectiveness of instruction-tuned LLMs, which exhibit remarkable adaptability
to new tasks. To this end, we explore the following aspects in this work: (1)
the efficacy of an LLM-only approach on document question answering tasks; (2)
strategies for serializing textual information within document images and
feeding it directly to an instruction-tuned LLM, thus bypassing the need for an
explicit vision encoder; and (3) a thorough quantitative analysis of the feasibility
of such an approach. Our comprehensive analysis encompasses six diverse
benchmark datasets, utilizing LLMs of varying scales. Our findings reveal that
a strategy exclusively reliant on the LLM yields results that are on par with
or closely approach state-of-the-art performance across a range of datasets. We
posit that this evaluation framework will serve as a guiding resource for
selecting appropriate datasets for future research endeavors that emphasize the
fundamental importance of layout and image content information.
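As a rough illustration of the serialization idea, here is a minimal sketch, assuming OCR output in the form of word bounding boxes; the reading-order heuristic and the prompt template are illustrative assumptions, not the authors' exact strategy.

```python
# A minimal sketch of layout-aware OCR serialization for LLM-only document QA.
# Assumes OCR output as (text, x0, y0, x1, y1) word boxes; the heuristics and
# prompt template are illustrative, not the paper's exact method.
from typing import List, Tuple

Word = Tuple[str, float, float, float, float]  # (text, x0, y0, x1, y1)

def serialize_reading_order(words: List[Word], line_tol: float = 5.0) -> str:
    """Group word boxes into visual lines by vertical proximity, then read left to right."""
    words = sorted(words, key=lambda w: (w[2], w[1]))  # sort by y0, then x0
    lines: List[List[Word]] = []
    for w in words:
        if lines and abs(w[2] - lines[-1][-1][2]) <= line_tol:
            lines[-1].append(w)  # close enough vertically: same line
        else:
            lines.append([w])
    return "\n".join(" ".join(w[0] for w in sorted(ln, key=lambda w: w[1]))
                     for ln in lines)

def build_prompt(serialized_text: str, question: str) -> str:
    """Wrap the serialized document text and question for an instruction-tuned LLM."""
    return (
        "The following is text extracted from a document image, in reading order.\n"
        f"---\n{serialized_text}\n---\n"
        "Answer the question using only the document text.\n"
        f"Question: {question}\nAnswer:"
    )
```

A layout-preserving variant could instead pad each line with spaces proportional to the words' x-coordinates; that is one plausible alternative serialization, not necessarily the one the paper evaluates.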
Related papers
- Web-Scale Visual Entity Recognition: An LLM-Driven Data Approach [56.55633052479446]
Web-scale visual entity recognition presents significant challenges due to the lack of clean, large-scale training data.
We propose a novel methodology to curate such a dataset, leveraging a multimodal large language model (LLM) for label verification, metadata generation, and rationale explanation.
Experiments demonstrate that models trained on this automatically curated data achieve state-of-the-art performance on web-scale visual entity recognition tasks.
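The curation pipeline is described only at a high level; below is a minimal sketch of what the label-verification step might look like, where `query_llm` is a hypothetical stand-in for any multimodal LLM client.

```python
# Illustrative sketch of LLM-based label verification for curating entity data.
# `query_llm` is a hypothetical stand-in for a multimodal LLM API; the paper
# describes this step only at a high level, so the details are assumptions.
def verify_label(image_caption: str, candidate_entity: str, query_llm) -> dict:
    prompt = (
        f"Candidate entity label: {candidate_entity}\n"
        f"Image context: {image_caption}\n"
        "Does the label correctly identify the main entity? "
        "Reply with VALID or INVALID on the first line, then a one-sentence rationale."
    )
    reply = query_llm(prompt)
    verdict, _, rationale = reply.partition("\n")
    return {"keep": verdict.strip().upper().startswith("VALID"),
            "rationale": rationale.strip()}
```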
arXiv Detail & Related papers (2024-10-31T06:55:24Z)
- FineCops-Ref: A new Dataset and Task for Fine-Grained Compositional Referring Expression Comprehension [10.482908189805872]
Referring Expression Comprehension (REC) is a crucial cross-modal task that objectively evaluates the capabilities of language understanding, image comprehension, and language-to-image grounding.
We have established a new REC dataset characterized by two key features.
It includes negative text and images created through fine-grained editing and generation based on existing data.
arXiv Detail & Related papers (2024-09-23T06:56:51Z)
- Leveraging the Power of LLMs: A Fine-Tuning Approach for High-Quality Aspect-Based Summarization [25.052557735932535]
Large language models (LLMs) have demonstrated the potential to revolutionize diverse tasks within natural language processing.
This paper explores the potential of fine-tuning LLMs for the aspect-based summarization task.
We evaluate the impact of fine-tuning open-source foundation LLMs, including Llama2, Mistral, Gemma, and Aya, on a publicly available domain-specific aspect-based summary dataset.
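As a hedged sketch of what such fine-tuning can look like with Hugging Face transformers (the prompt template, dataset fields, and model choice below are assumptions, not the paper's exact setup):

```python
# Minimal sketch of supervised fine-tuning for aspect-based summarization.
# Dataset fields ("document", "aspect", "summary"), the prompt template, and
# the model choice are assumptions; the 7B model needs substantial GPU memory,
# so substitute a small causal LM to smoke-test.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

def format_example(ex):
    prompt = (f"Summarize the document below focusing on the aspect "
              f"'{ex['aspect']}'.\nDocument: {ex['document']}\nSummary: ")
    ids = tokenizer(prompt + ex["summary"] + tokenizer.eos_token,
                    truncation=True, max_length=1024)
    ids["labels"] = ids["input_ids"].copy()  # standard causal-LM objective
    return ids

toy = Dataset.from_list([{"document": "The room was small but the staff were friendly.",
                          "aspect": "service",
                          "summary": "Staff were friendly."}])
train_ds = toy.map(format_example, remove_columns=toy.column_names)

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="absum-ft",
                                         per_device_train_batch_size=1,
                                         num_train_epochs=1),
                  train_dataset=train_ds)
trainer.train()
```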
arXiv Detail & Related papers (2024-08-05T16:00:21Z)
- Large Vision-Language Models as Emotion Recognizers in Context Awareness [14.85890824622433]
Context-aware emotion recognition (CAER) is a complex and significant task that requires perceiving emotions from various contextual cues.
Previous approaches primarily focus on designing sophisticated architectures to extract emotional cues from images.
This paper systematically explores the potential of leveraging Large Vision-Language Models (LVLMs) to empower the CAER task.
arXiv Detail & Related papers (2024-07-16T01:28:06Z)
- VEGA: Learning Interleaved Image-Text Comprehension in Vision-Language Large Models [76.94378391979228]
We introduce a new, more demanding task known as Interleaved Image-Text Comprehension (IITC).
This task challenges models to discern and disregard superfluous elements in both images and text to accurately answer questions.
In support of this task, we further craft a new VEGA dataset, tailored for the IITC task on scientific content, and devise a subtask, Image-Text Association (ITA).
arXiv Detail & Related papers (2024-06-14T17:59:40Z)
- Debiasing Multimodal Large Language Models [61.6896704217147]
Large Vision-Language Models (LVLMs) have become indispensable tools in computer vision and natural language processing.
Our investigation reveals a noteworthy bias in the generated content, where the output is primarily influenced by the prior of the underlying Large Language Models (LLMs) rather than by the input image.
To rectify these biases and redirect the model's focus toward vision information, we introduce two simple, training-free strategies.
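The two strategies themselves are not spelled out in this summary; as one illustration of the general training-free idea, here is a sketch that calibrates next-token logits against a content-free image. This is an assumption about the technique family, not necessarily either of the paper's strategies.

```python
# Illustrative sketch of one training-free way to counteract a language prior:
# subtract the logits the model produces when the image is replaced by a
# content-free placeholder, so tokens favored without visual evidence are
# penalized. Shown as a general idea, not the paper's exact method.
import torch

def debiased_logits(model, prompt_ids, image, blank_image, alpha=1.0):
    """model: an LVLM taking input_ids and pixel_values (LLaVA-style signature)."""
    with torch.no_grad():
        with_img = model(input_ids=prompt_ids, pixel_values=image).logits[:, -1]
        no_img = model(input_ids=prompt_ids, pixel_values=blank_image).logits[:, -1]
    return with_img - alpha * no_img  # downweight the image-independent prior
```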
arXiv Detail & Related papers (2024-03-08T12:35:07Z)
- Enhancing Visual Document Understanding with Contrastive Learning in Large Visual-Language Models [56.76307866160105]
We propose a contrastive learning framework, termed Document Object COntrastive learning (DoCo).
DoCo leverages an auxiliary multimodal encoder to obtain features of document objects and align them with the visual features generated by the vision encoder of Large Visual-Language Models (LVLMs).
We demonstrate that DoCo serves as a plug-and-play pre-training method that can be employed with various LVLMs without increasing computational cost at inference.
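A minimal sketch of the kind of alignment objective this describes, assuming an InfoNCE-style loss over paired features (the exact loss form and feature shapes are assumptions):

```python
# Sketch of contrastive alignment between document-object features and vision-
# encoder features: pull matching pairs together, push mismatched pairs apart.
import torch
import torch.nn.functional as F

def doc_object_contrastive_loss(doc_feats, vis_feats, temperature=0.07):
    """doc_feats, vis_feats: (batch, dim); row i of each is a positive pair."""
    doc = F.normalize(doc_feats, dim=-1)
    vis = F.normalize(vis_feats, dim=-1)
    logits = doc @ vis.t() / temperature          # cosine-similarity matrix
    targets = torch.arange(doc.size(0), device=doc.device)
    # Symmetric InfoNCE: match doc->vis and vis->doc.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```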
arXiv Detail & Related papers (2024-02-29T10:17:27Z)
- LLM Inference Unveiled: Survey and Roofline Model Insights [62.92811060490876]
Large Language Model (LLM) inference is rapidly evolving, presenting a unique blend of opportunities and challenges.
Our survey stands out from traditional literature reviews by not only summarizing the current state of research but also introducing a framework based on the roofline model.
This framework identifies the bottlenecks when deploying LLMs on hardware devices and provides a clear understanding of practical problems.
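The roofline model itself is compact enough to state directly; a small sketch with illustrative (not measured) hardware numbers:

```python
# The roofline model in one line: attainable throughput is capped either by
# peak compute or by memory bandwidth times arithmetic intensity. The hardware
# numbers below are illustrative, not measurements from the survey.
def roofline_flops(peak_flops: float, mem_bw: float, intensity: float) -> float:
    """intensity = FLOPs performed per byte moved; returns attainable FLOP/s."""
    return min(peak_flops, mem_bw * intensity)

# Example: an LLM decode step has low arithmetic intensity, so memory
# bandwidth, not peak compute, sets the ceiling.
attainable = roofline_flops(peak_flops=312e12, mem_bw=2.0e12, intensity=2.0)
print(f"{attainable / 1e12:.1f} TFLOP/s attainable")  # 4.0 TFLOP/s
```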
arXiv Detail & Related papers (2024-02-26T07:33:05Z)
- AVIS: Autonomous Visual Information Seeking with Large Language Model Agent [123.75169211547149]
We propose AVIS, an autonomous information-seeking visual question answering framework.
Our method leverages a Large Language Model (LLM) to dynamically strategize the utilization of external tools.
AVIS achieves state-of-the-art results on knowledge-intensive visual question answering benchmarks such as Infoseek and OK-VQA.
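A schematic sketch of an LLM-driven tool-selection loop in the spirit of this framework; the planner protocol and tool names are invented for illustration:

```python
# Schematic sketch of an LLM dynamically choosing external tools until it can
# answer a visual question. The registry and planner protocol are assumptions.
def answer_with_tools(question, llm_plan, tools, max_steps=5):
    """llm_plan(question, observations) -> (tool_name, tool_input) or ("ANSWER", text)."""
    observations = []
    for _ in range(max_steps):
        action, arg = llm_plan(question, observations)
        if action == "ANSWER":
            return arg
        result = tools[action](arg)  # e.g., image search, OCR, web lookup
        observations.append((action, arg, result))
    return "No answer found within the step budget."
```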
arXiv Detail & Related papers (2023-06-13T20:50:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.