Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models
- URL: http://arxiv.org/abs/2410.12662v1
- Date: Wed, 16 Oct 2024 15:20:08 GMT
- Title: Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models
- Authors: Shicheng Xu, Liang Pang, Yunchang Zhu, Huawei Shen, Xueqi Cheng
- Abstract summary: Vision-language alignment in Large Vision-Language Models (LVLMs) successfully enables LLMs to understand visual input.
We find that existing vision-language alignment methods fail to transfer the safety mechanism that LLMs already apply to text over to vision.
We propose a novel Text-Guided vision-language Alignment method (TGA) for LVLMs.
- Score: 72.75669790569629
- Abstract: Vision-language alignment in Large Vision-Language Models (LVLMs) successfully enables LLMs to understand visual input. However, we find that existing vision-language alignment methods fail to transfer the safety mechanism that LLMs already have for text over to vision, which leaves LVLMs vulnerable to toxic images. To explore the cause of this problem, we explain where and how the safety mechanism of LVLMs operates and conduct a comparative analysis between text and vision. We find that the hidden states at specific transformer layers play a crucial role in activating the safety mechanism, while the vision-language alignment at the hidden-state level in current methods is insufficient. This results in a semantic shift of input images relative to text in the hidden states, which misleads the safety mechanism. To address this, we propose a novel Text-Guided vision-language Alignment method (TGA) for LVLMs. TGA retrieves texts related to the input image and uses them to guide the projection of vision into the hidden-state space of the LLM. Experiments show that TGA not only transfers the safety mechanism for text in base LLMs to vision during vision-language alignment, without any safety fine-tuning on the visual modality, but also maintains general performance on various vision tasks (Safe and Good).
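The abstract describes TGA only at a high level; a minimal sketch of the idea, assuming a HuggingFace-style LLM that exposes per-layer hidden states, a hypothetical `projector` module, and pre-retrieved caption token ids, might look like this:

```python
# Minimal sketch of the TGA idea (not the authors' released code): retrieved
# captions guide the projection of visual features into the LLM's hidden-state
# space, so that harmful images land where the text safety mechanism fires.
import torch
import torch.nn.functional as F

def tga_alignment_loss(vision_hidden, text_hidden):
    """Cosine distance between pooled visual hidden states and the hidden
    states of retrieved captions at the same transformer layer."""
    v = F.normalize(vision_hidden.mean(dim=1), dim=-1)  # (batch, d)
    t = F.normalize(text_hidden.mean(dim=1), dim=-1)    # (batch, d)
    return (1.0 - (v * t).sum(dim=-1)).mean()

def train_step(image_feats, caption_ids, projector, llm, optimizer, layer=16):
    """One training step; `projector`, `llm`, and the retrieval of
    `caption_ids` for each image are assumed, and `layer` is a guess at
    where the safety mechanism activates."""
    vision_tokens = projector(image_feats)  # map vision into the LLM input space
    v_out = llm(inputs_embeds=vision_tokens, output_hidden_states=True)
    t_out = llm(input_ids=caption_ids, output_hidden_states=True)
    loss = tga_alignment_loss(v_out.hidden_states[layer],
                              t_out.hidden_states[layer].detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The key design choice, per the abstract, is aligning modalities at the hidden-state level of the layers where the safety mechanism activates, rather than only at the input-embedding level.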
Related papers
- VLM-Guard: Safeguarding Vision-Language Models via Fulfilling Safety Alignment Gap [51.287157951953226]
Vision language models (VLMs) come with increased safety concerns.
VLMs are often built upon LLMs that have textual safety alignment, but this alignment is easily undermined once the vision modality is integrated.
We propose VLM-Guard, an inference-time intervention strategy that leverages the LLM component of a VLM as supervision for the safety alignment of the VLM.
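The abstract does not spell out the intervention itself; one hedged reading, sketched below, is to use the text-only LLM path as a reference signal and nudge the multimodal hidden states toward it at inference time (the names and steering rule are assumptions):

```python
# Hedged sketch of an inference-time hidden-state intervention in the spirit
# of VLM-Guard; the abstract does not give the exact rule, so the steering
# below (shift toward the text-only path by a fraction of the mean gap) is
# an assumption.
import torch

def intervene(multimodal_h, text_ref_h, alpha=0.5):
    """multimodal_h: (batch, seq_v, d) hidden states with the image present;
    text_ref_h: (batch, seq_t, d) hidden states from the text-only LLM path."""
    gap = text_ref_h.mean(dim=1, keepdim=True) - multimodal_h.mean(dim=1, keepdim=True)
    return multimodal_h + alpha * gap  # nudge toward the safety-aligned text path
```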
arXiv Detail & Related papers (2025-02-14T08:44:43Z)
- AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding [63.09928907734156]
AlignVLM is a vision-text alignment method that maps visual features to a weighted average of text embeddings.
Our experiments show that AlignVLM achieves state-of-the-art performance compared to prior alignment methods.
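A sketch of the stated connector idea, constraining projected visual features to a convex combination of the LLM's text embeddings (layer sizes and the single linear projection are assumptions):

```python
# Sketch of the stated connector: each visual feature becomes a convex
# combination of the LLM's text embeddings, so projected vision cannot
# leave the text embedding space.
import torch
import torch.nn as nn

class AlignConnector(nn.Module):
    def __init__(self, vision_dim, llm_embed_table):
        super().__init__()
        # llm_embed_table: (vocab, d) frozen text embedding matrix of the LLM
        self.proj = nn.Linear(vision_dim, llm_embed_table.shape[1])
        self.register_buffer("embed_table", llm_embed_table)

    def forward(self, vision_feats):            # (batch, n_patches, vision_dim)
        logits = self.proj(vision_feats) @ self.embed_table.T  # (batch, n, vocab)
        weights = logits.softmax(dim=-1)        # convex weights over the vocabulary
        return weights @ self.embed_table       # weighted average of text embeddings
```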
arXiv Detail & Related papers (2025-02-03T13:34:51Z)
- SceneTAP: Scene-Coherent Typographic Adversarial Planner against Vision-Language Models in Real-World Environments [29.107550321162122]
We present the first approach to generate scene-coherent typographic adversarial attacks that mislead advanced vision-language models.
Our approach addresses three critical questions: what adversarial text to generate, where to place it within the scene, and how to integrate it seamlessly.
Our experiments show that our scene-coherent adversarial text successfully misleads state-of-the-art LVLMs.
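As a toy probe of the same failure mode (not the SceneTAP planner itself, which uses an LLM to choose the text, placement, and blending), one can overlay a string on an image and re-query the VLM to see whether its answer changes:

```python
# Toy typographic probe: render text onto an image, then compare the VLM's
# answers on the original and modified images. Placement here is naive.
from PIL import Image, ImageDraw

def add_typographic_text(image_path, text, xy=(20, 20), out_path="probed.png"):
    img = Image.open(image_path).convert("RGB")
    ImageDraw.Draw(img).text(xy, text, fill=(255, 255, 255))
    img.save(out_path)
    return out_path
```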
arXiv Detail & Related papers (2024-11-28T05:55:13Z)
- Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models [26.83278034227966]
The safety alignment of Vision-Language Models (VLMs) is prone to degradation when a vision module is integrated.
We show that the challenge arises from the representation gap that emerges when introducing vision modality to VLMs.
To reduce safety alignment degradation, we introduce Cross-Modality Representation Manipulation (CMRM).
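A hedged sketch of the manipulation this suggests: estimate the average hidden-state shift the vision modality induces on a calibration set, then subtract it at inference (the exact CMRM update rule is not given in the abstract):

```python
# Hedged sketch of the CMRM idea: estimate the mean hidden-state shift the
# vision modality induces, then subtract it at inference so representations
# fall back toward the text-aligned distribution.
import torch

def estimate_modality_shift(text_only_states, multimodal_states):
    """Inputs: (num_samples, d) pooled hidden states for the same prompts,
    with and without images, from a small calibration set."""
    return (multimodal_states - text_only_states).mean(dim=0)

def cmrm_correct(hidden, shift, strength=1.0):
    """Pull hidden states back toward the text-aligned distribution."""
    return hidden - strength * shift
```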
arXiv Detail & Related papers (2024-10-11T17:59:31Z)
- TrojVLM: Backdoor Attack Against Vision Language Models [50.87239635292717]
This study introduces TrojVLM, the first exploration of backdoor attacks aimed at Vision Language Models (VLMs).
TrojVLM inserts predetermined target text into output text when encountering poisoned images.
A novel semantic preserving loss is proposed to ensure the semantic integrity of the original image content.
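The abstract does not give the loss's exact form; one plausible instantiation of a semantic preserving term keeps pooled features of the poisoned image close to those of the clean image, so only the injected target text is affected:

```python
# One plausible instantiation of a "semantic preserving" term (an assumption,
# not the paper's exact loss): penalize feature drift caused by the trigger.
import torch
import torch.nn.functional as F

def semantic_preserving_loss(clean_feats, poisoned_feats):
    """clean_feats, poisoned_feats: (batch, n_tokens, d) image features
    with and without the backdoor trigger."""
    c = F.normalize(clean_feats.mean(dim=1), dim=-1)
    p = F.normalize(poisoned_feats.mean(dim=1), dim=-1)
    return (1.0 - (c * p).sum(dim=-1)).mean()
```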
arXiv Detail & Related papers (2024-09-28T04:37:09Z)
- VisionGPT: LLM-Assisted Real-Time Anomaly Detection for Safe Visual Navigation [3.837186701755568]
This paper explores the potential of Large Language Models in zero-shot anomaly detection for safe visual navigation.
The proposed framework identifies anomalies, including possible obstacles, within camera-captured frames, then generates concise, audio-delivered descriptions emphasizing the abnormalities.
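A rough sketch of such a frame-by-frame pipeline, where `ask_vlm` and `speak` are hypothetical stand-ins for a zero-shot VLM call and a text-to-speech backend:

```python
# Rough frame-by-frame pipeline; `ask_vlm` and `speak` are hypothetical
# stand-ins, not a real API.
def narrate_anomalies(frames):
    prompt = ("You are a navigation assistant. Describe any obstacle or "
              "anomaly in this frame in one short sentence, or say 'clear'.")
    for frame in frames:
        description = ask_vlm(prompt, frame)      # zero-shot anomaly check
        if description.strip().lower() != "clear":
            speak(description)                    # concise audio delivery
```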
arXiv Detail & Related papers (2024-03-19T03:55:39Z)
- Enhancing Visual Document Understanding with Contrastive Learning in Large Visual-Language Models [56.76307866160105]
We propose a contrastive learning framework, termed Document Object COntrastive learning (DoCo).
DoCo leverages an auxiliary multimodal encoder to obtain the features of document objects and align them with the visual features generated by the vision encoder of Large Visual-Language Models (LVLMs).
We demonstrate that the proposed DoCo serves as a plug-and-play pre-training method, which can be employed in the pre-training of various LVLMs without inducing any increase in computational complexity during the inference process.
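A sketch of the alignment objective as a standard InfoNCE loss between pooled document-object features and vision-encoder features (the pairing and pooling schemes are assumptions):

```python
# Sketch of the alignment objective as InfoNCE: matching rows are treated
# as positive pairs, the rest of the batch as negatives (an assumption).
import torch
import torch.nn.functional as F

def doco_contrastive_loss(obj_feats, vis_feats, temperature=0.07):
    """obj_feats: (batch, d) pooled document-object features from the
    auxiliary encoder; vis_feats: (batch, d) pooled LVLM vision features."""
    obj = F.normalize(obj_feats, dim=-1)
    vis = F.normalize(vis_feats, dim=-1)
    logits = obj @ vis.T / temperature
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)
```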
arXiv Detail & Related papers (2024-02-29T10:17:27Z)
- Good Questions Help Zero-Shot Image Reasoning [110.1671684828904]
Question-Driven Visual Exploration (QVix) is a novel prompting strategy that enhances the exploratory capabilities of large vision-language models (LVLMs).
QVix enables a wider exploration of visual scenes, improving the LVLMs' reasoning accuracy and depth in tasks such as visual question answering and visual entailment.
Our evaluations on various challenging zero-shot vision-language benchmarks, including ScienceQA and fine-grained visual classification, demonstrate that QVix significantly outperforms existing methods.
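A minimal sketch of question-driven prompting, with `llm_generate` and `lvlm_answer` as hypothetical stand-ins for the two model calls:

```python
# Minimal sketch of question-driven exploration: ask for probing questions
# first, answer them on the image, then answer the task with that context.
def qvix_answer(image, task_question):
    probe = llm_generate(
        f"List three short questions whose answers would help solve: {task_question}")
    context = "\n".join(lvlm_answer(image, q) for q in probe.splitlines() if q.strip())
    return lvlm_answer(image, f"{context}\n\nNow answer: {task_question}")
```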
arXiv Detail & Related papers (2023-12-04T03:18:51Z)