Localized Symbolic Knowledge Distillation for Visual Commonsense Models
- URL: http://arxiv.org/abs/2312.04837v2
- Date: Tue, 12 Dec 2023 05:48:14 GMT
- Title: Localized Symbolic Knowledge Distillation for Visual Commonsense Models
- Authors: Jae Sung Park, Jack Hessel, Khyathi Raghavi Chandu, Paul Pu Liang,
Ximing Lu, Peter West, Youngjae Yu, Qiuyuan Huang, Jianfeng Gao, Ali Farhadi,
Yejin Choi
- Abstract summary: We build Localized Visual Commonsense models, which allow users to specify (multiple) regions as input.
We train our model by sampling localized commonsense knowledge from a large language model.
We find that training on the localized commonsense corpus can successfully distill existing vision-language models to support a reference-as-input interface.
- Score: 150.18129140140238
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instruction-following vision-language (VL) models offer a flexible interface
that supports a broad range of multimodal tasks in a zero-shot fashion.
However, interfaces that operate on full images do not directly enable the user
to "point to" and access specific regions within images. This capability is
important not only to support reference-grounded VL benchmarks, but also, for
practical applications that require precise within-image reasoning. We build
Localized Visual Commonsense models, which allow users to specify (multiple)
regions as input. We train our model by sampling localized commonsense
knowledge from a large language model (LLM): specifically, we prompt an LLM to
collect commonsense knowledge given a global literal image description and a
local literal region description automatically generated by a set of VL models.
With a separately trained critic model that selects high-quality examples, we
find that training on the localized commonsense corpus can successfully distill
existing VL models to support a reference-as-input interface. Empirical results
and human evaluations in a zero-shot setup demonstrate that our distillation
method results in more precise VL models of reasoning compared to a baseline of
passing a generated referring expression to an LLM.
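The data-generation pipeline described in the abstract (prompt an LLM with a global image caption plus per-region captions, then filter the sampled knowledge with a critic) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the prompt wording, function names, and the toy critic are all assumptions.

```python
# Hedged sketch of the localized-knowledge sampling pipeline.
# All names and the prompt format are illustrative assumptions.

def build_prompt(global_caption: str, region_captions: dict) -> str:
    """Compose an LLM prompt from a global image description plus
    per-region descriptions, so generated knowledge can reference regions."""
    lines = [f"Image: {global_caption}"]
    for rid, desc in sorted(region_captions.items()):
        lines.append(f"Region [{rid}]: {desc}")
    lines.append("Generate commonsense statements that refer to regions "
                 "by their [id] tags.")
    return "\n".join(lines)

def critic_filter(samples, score_fn, threshold=0.5):
    """Keep only the LLM samples that a (separately trained) critic
    scores above a quality threshold."""
    return [s for s in samples if score_fn(s) >= threshold]

prompt = build_prompt(
    "Two people sit at a cafe table.",
    {0: "a person holding a menu", 1: "a coffee cup"},
)
# Toy stand-in critic: prefer samples that actually mention a region tag.
kept = critic_filter(
    ["[0] is probably deciding what to order.", "The sky is blue."],
    score_fn=lambda s: 1.0 if "[" in s else 0.0,
)
print(kept)  # → ['[0] is probably deciding what to order.']
```

In the paper, the critic is a learned model selecting high-quality examples; the lambda above only stands in for that scoring interface.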
Related papers
- Teaching VLMs to Localize Specific Objects from In-context Examples [56.797110842152]
Vision-Language Models (VLMs) have shown remarkable capabilities across diverse visual tasks.
Current VLMs lack a fundamental cognitive ability: learning to localize objects in a scene by taking into account the context.
This work is the first to explore and benchmark personalized few-shot localization for VLMs.
arXiv Detail & Related papers (2024-11-20T13:34:22Z)
- Harnessing Vision-Language Pretrained Models with Temporal-Aware Adaptation for Referring Video Object Segmentation [34.37450315995176]
Current Referring Video Object Segmentation (RVOS) methods typically use vision and language models pretrained independently as backbones.
We propose a temporal-aware prompt-tuning method, which adapts pretrained representations for pixel-level prediction.
Our method performs favorably against state-of-the-art algorithms and exhibits strong generalization abilities.
arXiv Detail & Related papers (2024-05-17T08:14:22Z)
- Multi-modal Instruction Tuned LLMs with Fine-grained Visual Perception [63.03288425612792]
We propose AnyRef, a general MLLM that can generate pixel-wise object perceptions and natural-language descriptions from multi-modality references.
Our model achieves state-of-the-art results across multiple benchmarks, including diverse modality referring segmentation and region-level referring expression generation.
arXiv Detail & Related papers (2024-03-05T13:45:46Z)
- Large Language Models are Good Prompt Learners for Low-Shot Image Classification [12.053713356249695]
We propose LLaMP, Large Language Models as Prompt learners, that produces adaptive prompts for the CLIP text encoder.
Experiments show that, compared with other state-of-the-art prompt learning methods, LLaMP yields better performance on both zero-shot generalization and few-shot image classification.
arXiv Detail & Related papers (2023-12-07T06:43:34Z)
- Context-Aware Prompt Tuning for Vision-Language Model with Dual-Alignment [15.180715595425864]
We introduce DuAl-PT, a novel method that improves the prompt learning of vision-language models by incorporating pre-trained large language models (LLMs).
With DuAl-PT, we propose to learn more context-aware prompts, benefiting from both explicit and implicit context modeling.
Empirically, DuAl-PT achieves superior performance on 11 downstream datasets on few-shot recognition and base-to-new generalization.
arXiv Detail & Related papers (2023-09-08T06:51:15Z)
- MetaVL: Transferring In-Context Learning Ability From Language Models to Vision-Language Models [74.89629463600978]
In the vision-language domain, most large-scale pre-trained vision-language models do not possess the ability to conduct in-context learning.
In this paper, we study an interesting hypothesis: can we transfer the in-context learning ability from the language domain to the vision domain?
arXiv Detail & Related papers (2023-06-02T07:21:03Z)
- VL-Fields: Towards Language-Grounded Neural Implicit Spatial Representations [15.265341472149034]
We present Visual-Language Fields (VL-Fields), a neural implicit spatial representation that enables open-vocabulary semantic queries.
Our model encodes and fuses the geometry of a scene with vision-language trained latent features by distilling information from a language-driven segmentation model.
arXiv Detail & Related papers (2023-05-21T10:55:27Z)
- PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models [127.17675443137064]
We introduce PEVL, which enhances the pre-training and prompt tuning of vision-language models with explicit object position modeling.
PEVL reformulates discretized object positions and language in a unified language modeling framework.
We show that PEVL enables state-of-the-art performance on position-sensitive tasks such as referring expression comprehension and phrase grounding.
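Reformulating object positions as language, as PEVL does, amounts to quantizing bounding-box coordinates into a small vocabulary of position tokens that the model treats like ordinary words. The sketch below illustrates that general idea; the bin count and token format are assumptions, not PEVL's exact configuration.

```python
# Illustrative sketch of position-as-token modeling: pixel coordinates
# are normalized and quantized into discrete position tokens.
# (Bin count and token naming are assumptions, not PEVL's exact setup.)

def box_to_tokens(box, image_w, image_h, n_bins=512):
    """Quantize (x1, y1, x2, y2) pixel coords into discrete position tokens."""
    x1, y1, x2, y2 = box
    norm = (x1 / image_w, y1 / image_h, x2 / image_w, y2 / image_h)
    # Clamp so a coordinate at the image edge maps to the last bin.
    return [f"<pos_{min(int(v * n_bins), n_bins - 1)}>" for v in norm]

tokens = box_to_tokens((64, 32, 192, 160), image_w=256, image_h=256)
print(tokens)  # → ['<pos_128>', '<pos_64>', '<pos_384>', '<pos_320>']
```

Once positions are tokens, referring expression comprehension and phrase grounding reduce to ordinary language modeling over a mixed word-and-position vocabulary.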
arXiv Detail & Related papers (2022-05-23T10:17:53Z)
- Spatial Likelihood Voting with Self-Knowledge Distillation for Weakly Supervised Object Detection [54.24966006457756]
We propose a WSOD framework called the Spatial Likelihood Voting with Self-knowledge Distillation Network (SLV-SD Net)
SLV-SD Net learns to localize region proposals without bounding box annotations.
Experiments on the PASCAL VOC 2007/2012 and MS-COCO datasets demonstrate the excellent performance of SLV-SD Net.
arXiv Detail & Related papers (2022-04-14T11:56:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.