EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge
Distillation and Modal-adaptive Pruning
- URL: http://arxiv.org/abs/2210.07795v1
- Date: Fri, 14 Oct 2022 13:26:41 GMT
- Title: EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge
Distillation and Modal-adaptive Pruning
- Authors: Tiannan Wang, Wangchunshu Zhou, Yan Zeng, Xinsong Zhang
- Abstract summary: We introduce a distilling then pruning framework to compress large vision-language models into smaller, faster, and more accurate ones.
We apply our framework to train EfficientVLM, a fast and accurate vision-language model consisting of 6 vision layers, 3 text layers, and 3 cross-modal fusion layers.
EfficientVLM retains 98.4% performance of the teacher model and accelerates its inference speed by 2.2x.
- Score: 19.354515754130592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained vision-language models (VLMs) have achieved impressive results in
a range of vision-language tasks. However, popular VLMs usually consist of
hundreds of millions of parameters which brings challenges for fine-tuning and
deployment in real-world applications due to space, memory, and latency
constraints. In this work, we introduce a distilling then pruning framework to
compress large vision-language models into smaller, faster, and more accurate
ones. We first shrink the size of a pre-trained large VLM and apply knowledge
distillation in the vision-language pre-training stage to obtain a
task-agnostic compact VLM. Then we propose a modal-adaptive pruning algorithm
to automatically infer the importance of vision and language modalities for
different downstream tasks and adaptively remove redundant structures and
neurons in different encoders with controllable target sparsity. We apply our
framework to train EfficientVLM, a fast and accurate vision-language model
consisting of 6 vision layers, 3 text layers, and 3 cross-modal fusion layers,
accounting for only 93 million parameters in total, which is 44.3% of the
teacher model. EfficientVLM retains 98.4% performance of the teacher model and
accelerates its inference speed by 2.2x. EfficientVLM outperforms previous
SoTA efficient VLMs of similar sizes by a large margin on various
vision-language tasks, including VQAv2 (+4.9%), NLVR2 (+5.6%), ITR (R@1 on TR
+17.2%, on IR +15.6%), and COCO caption generation (CIDEr +6.5), demonstrating
a large potential for training lightweight VLMs.
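The abstract describes a two-stage recipe: task-agnostic knowledge distillation during vision-language pre-training, followed by modal-adaptive pruning that infers how important each modality is for a downstream task and prunes each encoder toward a controllable target sparsity. The PyTorch sketch below is only illustrative and is not the authors' implementation; the loss weights, the learnable per-neuron scores, and the heuristic mapping modality importance to per-encoder sparsity are all assumptions.
```python
# Illustrative sketch of a distill-then-prune pipeline (not the authors'
# code). Loss weights, per-neuron score parameters, and the heuristic that
# maps modality importance to per-encoder sparsity are assumptions.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden,
                      temperature=2.0, alpha=0.5):
    """Stage 1: task-agnostic KD during vision-language pre-training."""
    # Soft-label KL divergence on logits, scaled by T^2 as usual.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Align (already projected) student hidden states with the teacher's.
    hidden = F.mse_loss(student_hidden, teacher_hidden)
    return alpha * kd + (1.0 - alpha) * hidden


class PrunableLinear(torch.nn.Module):
    """Linear layer whose output neurons can be masked away when pruning."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.linear = torch.nn.Linear(in_features, out_features)
        # One learnable importance score per output neuron.
        self.scores = torch.nn.Parameter(torch.zeros(out_features))

    def forward(self, x, threshold):
        # Keep a neuron only if its score clears the encoder's threshold.
        mask = (torch.sigmoid(self.scores) > threshold).float()
        return self.linear(x) * mask


def modality_thresholds(vision_scores, text_scores, target_sparsity=0.5):
    """Stage 2 (assumed heuristic): give the more important modality a lower
    sparsity and the less important one a higher sparsity, keeping their
    average near the controllable target."""
    v_imp = torch.sigmoid(vision_scores).mean()
    t_imp = torch.sigmoid(text_scores).mean()
    vision_share = v_imp / (v_imp + t_imp)
    vision_sparsity = float((target_sparsity * 2 * (1 - vision_share)).clamp(0, 1))
    text_sparsity = float((target_sparsity * 2 * vision_share).clamp(0, 1))
    # Prune the lowest-scoring fraction of neurons in each encoder.
    v_thr = torch.quantile(torch.sigmoid(vision_scores), vision_sparsity)
    t_thr = torch.quantile(torch.sigmoid(text_scores), text_sparsity)
    return v_thr, t_thr
```
A training loop would apply distillation_loss over the pre-training corpus in stage 1, then fine-tune with PrunableLinear layers and the thresholds returned by modality_thresholds in stage 2.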
Related papers
- Eve: Efficient Multimodal Vision Language Models with Elastic Visual Experts [37.81475180129456]
We introduce the innovative framework of Efficient Vision Language Models with Elastic Visual Experts (Eve).
By strategically incorporating visual expertise at multiple stages of training, Eve strikes a balance between preserving linguistic abilities and augmenting multimodal capabilities.
Eve distinctly outperforms on language benchmarks and achieves state-of-the-art results (68.87%) on VLM benchmarks.
arXiv Detail & Related papers (2025-01-08T07:42:54Z)
- VLsI: Verbalized Layers-to-Interactions from Large to Small Vision Language Models [63.27511432647797]
We propose VLsI: Verbalized Layers-to-Interactions, a new VLM family in 2B and 7B model sizes.
We validate VLsI across ten challenging vision-language benchmarks, achieving notable performance gains (11.0% for 2B and 17.4% for 7B) over GPT-4V.
arXiv Detail & Related papers (2024-12-02T18:58:25Z)
- ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning [38.26304604660713]
ADEM-VL is an efficient vision-language method that tunes models based on pretrained large language models.
Our framework surpasses existing methods by an average accuracy of 0.77% on the ScienceQA dataset.
arXiv Detail & Related papers (2024-10-23T11:31:06Z)
- Mini-InternVL: A Flexible-Transfer Pocket Multimodal Model with 5% Parameters and 90% Performance [78.48606021719206]
Mini-InternVL is a series of MLLMs with parameters ranging from 1B to 4B, which achieves 90% of the performance with only 5% of the parameters.
We develop a unified adaptation framework for Mini-InternVL, which enables our models to transfer and outperform specialized models in downstream tasks.
arXiv Detail & Related papers (2024-10-21T17:58:20Z)
- VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks [60.5257456681402]
We study the potential for building universal embeddings capable of handling a wide range of downstream tasks.
We build a series of VLM2Vec models on SoTA VLMs such as Phi-3.5-V and LLaVA-1.6 and evaluate them on MMEB's evaluation split.
Our results show that VLM2Vec achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models.
arXiv Detail & Related papers (2024-10-07T16:14:05Z)
- NVLM: Open Frontier-Class Multimodal LLMs [64.00053046838225]
We introduce NVLM 1.0, a family of frontier-class multimodal large language models (LLMs) that achieve state-of-the-art results on vision-language tasks.
We propose a novel architecture that enhances both training efficiency and multimodal reasoning capabilities.
We develop production-grade multimodality for the NVLM-1.0 models, enabling them to excel in vision-language tasks.
arXiv Detail & Related papers (2024-09-17T17:59:06Z)
- VILA: On Pre-training for Visual Language Models [74.08039416548209]
We study the design options for VLM pre-training through step-by-step controllable comparisons.
We build VILA, a Visual Language model family that consistently outperforms the state-of-the-art models.
arXiv Detail & Related papers (2023-12-12T18:58:18Z)
- A Good Prompt Is Worth Millions of Parameters? Low-resource Prompt-based Learning for Vision-Language Models [50.27305012063483]
FewVLM is a few-shot prompt-based learner on vision-language tasks.
We pretrain a sequence-to-sequence Transformer model with both prefix language modeling (PrefixLM) and masked language modeling (MaskedLM); a minimal sketch of these two objectives follows this list.
We observe that prompts significantly affect zero-shot performance but marginally affect few-shot performance.
arXiv Detail & Related papers (2021-10-16T06:07:59Z)
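The FewVLM entry above mentions pre-training a sequence-to-sequence Transformer with both prefix language modeling and masked language modeling. The snippet below is a minimal, text-only sketch of how those two objectives are typically posed for an encoder-decoder model, using Hugging Face's T5 as a stand-in backbone; FewVLM's actual model, visual inputs, and masking schedule are not reproduced here.
```python
# Sketch of the PrefixLM and MaskedLM objectives on a text-only seq2seq
# model. T5 is an assumed stand-in; FewVLM's own model is not shown.
import torch
from transformers import T5ForConditionalGeneration, T5TokenizerFast

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# PrefixLM: the encoder reads a prefix, the decoder predicts the rest.
prefix, continuation = "a man rides a", "horse on the beach"
prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
target_ids = tokenizer(continuation, return_tensors="pt").input_ids
prefix_lm_loss = model(input_ids=prefix_ids, labels=target_ids).loss

# MaskedLM (T5-style span corruption): masked spans are replaced by
# sentinel tokens in the input and predicted on the decoder side.
masked_input = "a man <extra_id_0> on the <extra_id_1>"
masked_target = "<extra_id_0> rides a horse <extra_id_1> beach <extra_id_2>"
input_ids = tokenizer(masked_input, return_tensors="pt").input_ids
labels = tokenizer(masked_target, return_tensors="pt").input_ids
masked_lm_loss = model(input_ids=input_ids, labels=labels).loss

# A pre-training step would optimize a (possibly weighted) sum of both.
total_loss = prefix_lm_loss + masked_lm_loss
total_loss.backward()
```
In actual pre-training the masked spans would be sampled randomly and the two losses mixed over a large corpus rather than a single hand-written example.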
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.