ILLUME: Rationalizing Vision-Language Models through Human Interactions
- URL: http://arxiv.org/abs/2208.08241v4
- Date: Wed, 31 May 2023 15:13:15 GMT
- Title: ILLUME: Rationalizing Vision-Language Models through Human Interactions
- Authors: Manuel Brack, Patrick Schramowski, Björn Deiseroth and Kristian Kersting
- Abstract summary: We propose a tuning paradigm based on human interactions with machine-generated data.
Our ILLUME executes the following loop: Given an image-question-answer prompt, the VLM samples multiple candidate rationales, and a human critic provides feedback via preference selection.
This loop increases the training data and gradually carves out rationalization capabilities aligned with human intent.
- Score: 18.701950647429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bootstrapping from pre-trained language models has been proven to be an
efficient approach for building vision-language models (VLM) for tasks such as
image captioning or visual question answering. However, outputs of these models
rarely align with users' rationales for specific answers. To improve
this alignment and reinforce commonsense reasoning, we propose a tuning paradigm
based on human interactions with machine-generated data. Our ILLUME executes
the following loop: Given an image-question-answer prompt, the VLM samples
multiple candidate rationales, and a human critic provides feedback via
preference selection, used for fine-tuning. This loop increases the training
data and gradually carves out rationalization capabilities aligned with human
intent. Our extensive experiments demonstrate that ILLUME
is competitive with standard supervised fine-tuning while using significantly
less training data and requiring only minimal feedback.
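The loop lends itself to a compact sketch. The following is a minimal illustration only, assuming a hypothetical VLM interface (sample_rationales, finetune) and a human_select callback standing in for the human critic; none of these names come from the paper itself.
```python
# Minimal sketch of the ILLUME loop, with hypothetical helper names.

def illume_loop(vlm, prompts, human_select, rounds=3, n_candidates=8):
    """prompts: iterable of (image, question, answer) triples."""
    train_data = []
    for _ in range(rounds):
        for image, question, answer in prompts:
            # 1. The VLM samples multiple candidate rationales.
            candidates = vlm.sample_rationales(image, question, answer,
                                               n=n_candidates)
            # 2. A human critic keeps the rationales that match their
            #    intent (preference selection).
            accepted = [r for r in candidates if human_select(r)]
            train_data.extend((image, question, answer, r) for r in accepted)
        # 3. Fine-tune on the growing pool of accepted rationales;
        #    each round further aligns the model's rationalizations.
        vlm.finetune(train_data)
    return vlm
```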
Related papers
- VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment [55.7956150385255]
We investigate the efficacy of AI feedback to scale supervision for aligning vision-language models.
We introduce VLFeedback, the first large-scale vision-language feedback dataset.
We train Silkie, an LVLM fine-tuned on VLFeedback via direct preference optimization (DPO); a generic DPO sketch follows this entry.
arXiv Detail & Related papers (2024-10-12T07:56:47Z)
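As a point of reference for the Silkie entry above, here is a minimal sketch of the standard DPO objective on preference pairs, assuming per-sequence log-probabilities are already computed; this is the generic loss, not the VLFeedback training code.
```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard DPO objective on a batch of preference pairs.

    Each argument is a tensor of summed log-probabilities of the
    preferred (w) and dispreferred (l) responses under the policy
    being tuned and under a frozen reference model.
    """
    # Implicit rewards are log-prob margins against the reference model.
    margin_w = policy_logp_w - ref_logp_w
    margin_l = policy_logp_l - ref_logp_l
    # Maximize the log-sigmoid of the scaled reward gap between w and l.
    return -F.logsigmoid(beta * (margin_w - margin_l)).mean()

# Toy usage with made-up log-probabilities:
loss = dpo_loss(torch.tensor([-10.0]), torch.tensor([-12.0]),
                torch.tensor([-11.0]), torch.tensor([-11.5]))
```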
- Enhancing Large Vision Language Models with Self-Training on Image Comprehension [131.14381425260706]
We introduce Self-Training on Image Comprehension (STIC), which emphasizes a self-training approach specifically for image comprehension.
First, the model self-constructs a preference dataset for image descriptions from unlabeled images (sketched after this entry).
To further self-improve reasoning on the extracted visual information, we let the model reuse a small portion of existing instruction-tuning data.
arXiv Detail & Related papers (2024-05-30T05:53:49Z)
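A hedged sketch of the self-constructed preference data mentioned in the STIC entry above: the preferred description comes from the clean image, the dispreferred one from a corrupted image with a misleading prompt. The describe() method and the corruption choice are illustrative assumptions, not the paper's exact recipe.
```python
# Hedged sketch of self-constructed preference pairs from unlabeled
# images, in the spirit of the STIC entry; helper names are assumed.

def build_stic_pairs(model, images, corrupt, good_prompt, bad_prompt):
    pairs = []
    for img in images:
        # Preferred: a description of the clean, unlabeled image.
        chosen = model.describe(img, prompt=good_prompt)
        # Dispreferred: a description prone to hallucination, produced
        # from a corrupted image and a misleading prompt.
        rejected = model.describe(corrupt(img), prompt=bad_prompt)
        pairs.append({"image": img, "chosen": chosen, "rejected": rejected})
    return pairs
```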
- Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement [102.22911097049953]
SIMA is a framework that enhances visual and language modality alignment through self-improvement.
It employs an in-context self-critic mechanism to select response pairs for preference tuning (illustrated after this entry).
We demonstrate that SIMA achieves superior modality alignment, outperforming previous approaches.
arXiv Detail & Related papers (2024-05-24T23:09:27Z)
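The in-context self-critic mechanism in the SIMA entry above can be pictured as follows. This is a speculative sketch: the critic prompt and the generate() interface are assumptions, not the paper's implementation.
```python
# Speculative sketch of an in-context self-critic step: the model
# ranks its own candidate responses to form a preference pair.

CRITIC_PROMPT = ("Which response is better grounded in the image, "
                 "(A) or (B)? Answer with a single letter.")

def self_critic_pair(model, image, question):
    # Sample two candidate responses from the model itself.
    resp_a = model.generate(image, question, temperature=1.0)
    resp_b = model.generate(image, question, temperature=1.0)
    # The same model acts as critic, ranking its candidates in context.
    query = f"{question}\nA: {resp_a}\nB: {resp_b}\n{CRITIC_PROMPT}"
    verdict = model.generate(image, query)
    if verdict.strip().upper().startswith("A"):
        return resp_a, resp_b  # (chosen, rejected) preference pair
    return resp_b, resp_a
```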
- Calibrated Self-Rewarding Vision Language Models [27.686545023186852]
Large Vision-Language Models (LVLMs) have made substantial progress by integrating pre-trained large language models (LLMs) and vision models through instruction tuning.
However, LVLMs often exhibit hallucinations: generated text responses appear linguistically plausible but contradict the input image.
We propose the Calibrated Self-Rewarding (CSR) approach, which enables the model to self-improve by iteratively generating candidate responses, evaluating the reward for each response, and curating preference data for fine-tuning (see the sketch after this entry).
arXiv Detail & Related papers (2024-05-23T14:30:33Z)
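The generate-score-curate loop in the CSR entry above can be outlined as below. The reward callable stands in for CSR's calibrated, vision-grounded reward, whose exact form is defined in the paper; the rest is illustrative.
```python
# Rough sketch of one calibrated self-rewarding round.

def csr_round(model, dataset, reward, n_candidates=4):
    preferences = []
    for image, question in dataset:
        # Iteratively generate candidate responses...
        candidates = [model.generate(image, question)
                      for _ in range(n_candidates)]
        # ...evaluate the reward for each response...
        ranked = sorted(candidates, key=lambda r: reward(image, question, r))
        # ...and curate preference data from the best and worst candidates.
        preferences.append({"chosen": ranked[-1], "rejected": ranked[0]})
    return preferences  # fed into preference fine-tuning, then repeat
```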
- Strengthening Multimodal Large Language Model with Bootstrapped Preference Optimization [25.290462963681257]
Multimodal Large Language Models (MLLMs) excel in generating responses based on visual inputs.
They often suffer from a bias towards generating responses similar to their pretraining corpus, overshadowing the importance of visual information.
We treat this bias as a "preference" for pretraining statistics, which hinders the model's grounding in visual input.
arXiv Detail & Related papers (2024-03-13T17:29:45Z)
- Debiasing Multimodal Large Language Models [61.6896704217147]
Large Vision-Language Models (LVLMs) have become indispensable tools in computer vision and natural language processing.
Our investigation reveals a noteworthy bias in the generated content, where the output is primarily influenced by the prior of the underlying Large Language Model (LLM) rather than by the input image.
To rectify these biases and redirect the model's focus toward vision information, we introduce two simple, training-free strategies.
arXiv Detail & Related papers (2024-03-08T12:35:07Z)
- Aligning Modalities in Vision Large Language Models via Preference Fine-tuning [67.62925151837675]
In this work, we frame the hallucination problem as an alignment issue and tackle it with preference tuning.
Specifically, we propose POVID to generate feedback data with AI models.
We use ground-truth instructions as the preferred responses and a two-stage approach to generate dispreferred data (sketched after this entry).
In experiments across a broad range of benchmarks, we show that POVID not only reduces hallucinations but also improves model performance, outperforming prior approaches.
arXiv Detail & Related papers (2024-02-18T00:56:16Z)
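The two-stage dispreferred-data generation in the POVID entry above can be sketched roughly as follows. Both stages here are assumptions for illustration rather than the paper's exact procedure, and editor and add_noise are hypothetical helpers.
```python
# Speculative sketch of POVID-style preference construction: the
# ground-truth answer serves as the preferred response; dispreferred
# responses come from a two-stage AI pipeline (both stages assumed).

def povid_pairs(model, editor, add_noise, samples):
    pairs = []
    for image, instruction, ground_truth in samples:
        # Stage 1 (assumed): an AI editor injects plausible errors
        # into the ground-truth answer to form a hard negative.
        corrupted_text = editor.inject_errors(ground_truth)
        # Stage 2 (assumed): the model answers on a distorted image,
        # exposing its own hallucination patterns.
        noisy_answer = model.generate(add_noise(image), instruction)
        for rejected in (corrupted_text, noisy_answer):
            pairs.append({"chosen": ground_truth, "rejected": rejected})
    return pairs
```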
- Training Language Models with Language Feedback at Scale [50.70091340506957]
We introduce Imitation learning from Language Feedback (ILF), a new approach that utilizes more informative language feedback.
ILF consists of three steps that are applied iteratively: conditioning the language model on the input, an initial LM output, and the feedback to generate refinements; selecting the refinement that best incorporates the feedback; and fine-tuning the language model on the chosen refinement (see the sketch after this entry).
We show theoretically that ILF can be viewed as Bayesian inference, analogous to reinforcement learning from human feedback.
arXiv Detail & Related papers (2023-03-28T17:04:15Z)
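The three-step ILF loop reads naturally as code. A minimal sketch, assuming hypothetical lm.generate / lm.finetune methods and an incorporates scoring helper; these names are illustrative, not the paper's API.
```python
# Minimal sketch of one iteration of the three-step ILF loop.

def ilf_iteration(lm, inputs, feedback_fn, incorporates, k=4):
    selected = []
    for x in inputs:
        draft = lm.generate(x)                 # initial LM output
        fb = feedback_fn(x, draft)             # human language feedback
        # Step 1: condition on input, draft, and feedback to sample
        # several candidate refinements.
        refinements = [lm.generate(x, draft=draft, feedback=fb)
                       for _ in range(k)]
        # Step 2: keep the refinement that incorporates the feedback best.
        selected.append((x, max(refinements,
                                key=lambda r: incorporates(fb, r))))
    # Step 3: fine-tune the LM on the chosen refinements.
    lm.finetune(selected)
    return lm
```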