Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering
- URL: http://arxiv.org/abs/2205.00949v1
- Date: Mon, 2 May 2022 14:53:13 GMT
- Title: Answer-Me: Multi-Task Open-Vocabulary Visual Question Answering
- Authors: AJ Piergiovanni, Wei Li, Weicheng Kuo, Mohammad Saffar, Fred Bertsch
and Anelia Angelova
- Abstract summary: We present Answer-Me, a task-aware multi-task framework.
We pre-train a vision-language joint model, which is multi-task as well.
Results show state-of-the-art performance, zero-shot generalization, robustness to forgetting, and competitive single-task results.
- Score: 43.07139534653485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present Answer-Me, a task-aware multi-task framework which unifies a
variety of question answering tasks, such as visual question answering, visual
entailment, and visual reasoning. In contrast to previous works using contrastive
or generative captioning training, we propose a novel and simple recipe to
pre-train a vision-language joint model, which is multi-task as well. The
pre-training uses only noisy image captioning data, and is formulated to use
the entire architecture end-to-end with both a strong language encoder and
decoder. Our results show state-of-the-art performance, zero-shot
generalization, robustness to forgetting, and competitive single-task results
across a variety of question answering tasks. Our multi-task mixture training
learns from tasks of various question intents and thus generalizes better,
including on zero-shot vision-language tasks. We conduct experiments in the
challenging multi-task and open-vocabulary settings and across a variety of
datasets and tasks, such as VQA2.0, SNLI-VE, NLVR2, GQA, VizWiz. We observe
that the proposed approach is able to generalize to unseen tasks and that more
diverse mixtures lead to higher accuracy in both known and novel tasks.
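The recipe above casts every question answering task as open-vocabulary text generation from a shared vision-language encoder-decoder, trained on a mixture of datasets. Below is a minimal, illustrative sketch of such a multi-task mixture training loop; the module names, feature shapes, and uniform task sampling are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch: multi-task mixture training of a vision-language
# encoder-decoder where every task (VQA, entailment, reasoning, ...) is
# framed as generating free-form answer text. Names and shapes are assumed.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class VLEncoderDecoder(nn.Module):
    """Toy stand-in for a joint vision-language model: projected image
    features and question tokens are fused by a text encoder, and answers
    are produced by an autoregressive text decoder (open vocabulary)."""
    def __init__(self, vocab_size=32000, dim=512, img_feat_dim=2048):
        super().__init__()
        self.image_proj = nn.Linear(img_feat_dim, dim)  # pre-extracted visual features
        self.token_emb = nn.Embedding(vocab_size, dim)
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        dec_layer = nn.TransformerDecoderLayer(dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=4)
        self.lm_head = nn.Linear(dim, vocab_size)

    def forward(self, image_feats, question_ids, answer_ids):
        # Jointly encode visual tokens and question tokens.
        x = torch.cat([self.image_proj(image_feats),
                       self.token_emb(question_ids)], dim=1)
        memory = self.encoder(x)
        # Teacher-forced decoding of the answer text (shifted right).
        tgt = self.token_emb(answer_ids[:, :-1])
        causal = nn.Transformer.generate_square_subsequent_mask(tgt.size(1)).to(tgt.device)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        logits = self.lm_head(out)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               answer_ids[:, 1:].reshape(-1))

def train_on_mixture(model, task_loaders, steps, lr=1e-4):
    """Each step samples a batch from a randomly chosen task, so a single
    set of weights learns from the whole mixture of question intents."""
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    iters = {name: iter(dl) for name, dl in task_loaders.items()}
    for _ in range(steps):
        name = random.choice(list(task_loaders))  # uniform task sampling (assumed)
        try:
            batch = next(iters[name])
        except StopIteration:                     # restart an exhausted loader
            iters[name] = iter(task_loaders[name])
            batch = next(iters[name])
        loss = model(batch["image_feats"], batch["question_ids"], batch["answer_ids"])
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because every task produces answer text from the same decoder, adding a new dataset to the mixture only requires another loader, which is consistent with the zero-shot generalization to unseen tasks reported in the abstract.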
Related papers
- Progressive Homeostatic and Plastic Prompt Tuning for Audio-Visual Multi-Task Incremental Learning [23.22385310060951]
We introduce a three-stage Progressive Homeostatic and Plastic audio-visual prompt (PHP) method.
In the shallow phase, we design the task-shared modality aggregating adapter to foster cross-task and cross-modal audio-visual representation learning.
In the middle phase, we propose the task-specific modality-shared dynamic generating adapter, which constructs prompts that are tailored to individual tasks.
In the deep phase, we introduce the task-specific modality-independent prompts to further refine the understanding ability.
arXiv Detail & Related papers (2025-07-29T08:42:36Z)
- Is Visual in-Context Learning for Compositional Medical Tasks within Reach? [68.56630652862293]
In this paper, we explore the potential of visual in-context learning to enable a single model to handle multiple tasks.
We introduce a novel method for training in-context learners using a synthetic compositional task generation engine.
arXiv Detail & Related papers (2025-07-01T15:32:23Z)
- Musketeer: Joint Training for Multi-task Vision Language Model with Task Explanation Prompts [75.75548749888029]
We present a vision-language model whose parameters are jointly trained on all tasks and fully shared among multiple heterogeneous tasks.
With a single model, Musketeer achieves results comparable to or better than strong baselines trained on single tasks, almost uniformly across multiple tasks.
arXiv Detail & Related papers (2023-05-11T17:57:49Z)
- MINOTAUR: Multi-task Video Grounding From Multimodal Queries [70.08973664126873]
We present a single, unified model for tackling query-based video understanding in long-form videos.
In particular, our model can address all three tasks of the Ego4D Episodic Memory benchmark.
arXiv Detail & Related papers (2023-02-16T04:00:03Z)
- Multitask Vision-Language Prompt Tuning [103.5967011236282]
We propose multitask vision-language prompt tuning (MV).
MV incorporates cross-task knowledge into prompt tuning for vision-language models.
Results on 20 vision tasks demonstrate that the proposed approach outperforms all single-task baseline prompt tuning methods.
arXiv Detail & Related papers (2022-11-21T18:41:44Z)
- Prompt Tuning with Soft Context Sharing for Vision-Language Models [42.61889428498378]
We propose a novel method to tune pre-trained vision-language models on multiple target few-shot tasks jointly.
We show that SoftCPT significantly outperforms single-task prompt tuning methods; an illustrative sketch of this kind of shared soft-prompt setup appears after this list.
arXiv Detail & Related papers (2022-08-29T10:19:10Z)
- Unified Multimodal Pre-training and Prompt-based Tuning for Vision-Language Understanding and Generation [86.26522210882699]
We propose Unified multimodal pre-training for both Vision-Language understanding and generation.
The proposed UniVL is capable of handling both understanding tasks and generative tasks.
Our experiments show that there is a trade-off between understanding tasks and generation tasks while using the same model.
arXiv Detail & Related papers (2021-12-10T14:59:06Z)
- Towards More Generalizable One-shot Visual Imitation Learning [81.09074706236858]
A general-purpose robot should be able to master a wide range of tasks and quickly learn a novel one by leveraging past experiences.
One-shot imitation learning (OSIL) approaches this goal by training an agent with (pairs of) expert demonstrations.
We push for a higher level of generalization ability by investigating a more ambitious multi-task setup.
arXiv Detail & Related papers (2021-10-26T05:49:46Z)
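Several of the related papers above (Multitask Vision-Language Prompt Tuning, SoftCPT) keep a pre-trained vision-language model frozen and learn soft prompts that are shared across tasks. The sketch below illustrates that general idea with shared plus task-specific prompt vectors prepended to token embeddings; the class, prompt lengths, and dimensions are assumptions, not the published implementations.

```python
# Illustrative sketch of multi-task soft prompt sharing: a small set of
# learnable prompt vectors is shared by all tasks, plus a few task-specific
# vectors, while the pre-trained vision-language backbone stays frozen.
import torch
import torch.nn as nn

class SharedSoftPrompts(nn.Module):
    def __init__(self, num_tasks, shared_len=8, task_len=4, dim=512):
        super().__init__()
        self.shared = nn.Parameter(torch.randn(shared_len, dim) * 0.02)
        self.task_specific = nn.Parameter(torch.randn(num_tasks, task_len, dim) * 0.02)

    def forward(self, task_id, token_embeds):
        # token_embeds: (batch, seq_len, dim) embeddings of a class name or
        # question; prepend the shared and the task-specific soft prompts.
        b = token_embeds.size(0)
        shared = self.shared.unsqueeze(0).expand(b, -1, -1)
        task = self.task_specific[task_id].unsqueeze(0).expand(b, -1, -1)
        return torch.cat([shared, task, token_embeds], dim=1)

# Usage: only the prompt parameters receive gradients; a frozen CLIP-style
# text encoder would consume the prompted sequence for each task in turn.
prompts = SharedSoftPrompts(num_tasks=20)
optimizer = torch.optim.AdamW(prompts.parameters(), lr=1e-3)
```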
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.