SGHA-Attack: Semantic-Guided Hierarchical Alignment for Transferable Targeted Attacks on Vision-Language Models
- URL: http://arxiv.org/abs/2602.01574v1
- Date: Mon, 02 Feb 2026 03:10:41 GMT
- Title: SGHA-Attack: Semantic-Guided Hierarchical Alignment for Transferable Targeted Attacks on Vision-Language Models
- Authors: Haobo Wang, Weiqi Luo, Xiaojun Jia, Xiaochun Cao
- Abstract summary: Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations. We propose SGHA-Attack, a framework that adopts multiple target references and enforces intermediate-layer consistency. Experiments on open-source and commercial black-box VLMs show that SGHA-Attack achieves stronger targeted transferability than prior methods.
- Score: 73.19044613922911
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large vision-language models (VLMs) are vulnerable to transfer-based adversarial perturbations, enabling attackers to optimize on surrogate models and manipulate black-box VLM outputs. Prior targeted transfer attacks often overfit to the surrogate-specific embedding space by relying on a single reference and emphasizing final-layer alignment, which underutilizes intermediate semantics and degrades transfer across heterogeneous VLMs. To address this, we propose SGHA-Attack, a Semantic-Guided Hierarchical Alignment framework that adopts multiple target references and enforces intermediate-layer consistency. Concretely, we generate a visually grounded reference pool by sampling a frozen text-to-image model conditioned on the target prompt, and then carefully select the Top-K most semantically relevant anchors under the surrogate to form a weighted mixture for stable optimization guidance. Building on these anchors, SGHA-Attack injects target semantics throughout the feature hierarchy by aligning intermediate visual representations at both global and spatial granularities across multiple depths, and by synchronizing intermediate visual and textual features in a shared latent subspace to provide early cross-modal supervision before the final projection. Extensive experiments on open-source and commercial black-box VLMs show that SGHA-Attack achieves stronger targeted transferability than prior methods and remains robust under preprocessing and purification defenses.
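The Top-K anchor selection and weighted-mixture step described in the abstract could be sketched as follows. This is a minimal NumPy illustration assuming precomputed surrogate embeddings; the function name, the cosine-similarity criterion, and the softmax weighting are assumptions, not the authors' exact formulation.

```python
import numpy as np

def topk_anchor_mixture(ref_embs, target_text_emb, k=3, temp=0.1):
    """Pick the Top-K reference embeddings most relevant to the target
    prompt under the surrogate, and fuse them into a weighted mixture.

    ref_embs:        (N, D) surrogate embeddings of generated references
    target_text_emb: (D,)   surrogate embedding of the target prompt
    """
    # Cosine similarity between each reference and the target prompt.
    refs = ref_embs / np.linalg.norm(ref_embs, axis=1, keepdims=True)
    txt = target_text_emb / np.linalg.norm(target_text_emb)
    sims = refs @ txt

    # Keep the K most semantically relevant anchors.
    top = np.argsort(sims)[::-1][:k]

    # Softmax-weight the anchors so closer references dominate.
    w = np.exp(sims[top] / temp)
    w /= w.sum()

    # Weighted mixture used as a stable optimization target.
    mixture = (w[:, None] * refs[top]).sum(axis=0)
    return top, w, mixture
```

In the paper's pipeline, a mixture like this would then serve as the target of the hierarchical alignment losses applied across intermediate layers.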
Related papers
- TagaVLM: Topology-Aware Global Action Reasoning for Vision-Language Navigation [70.23578202012048]
Vision-Language Navigation (VLN) presents a unique challenge for Large Vision-Language Models (VLMs) due to their inherent architectural mismatch. We propose TagaVLM (Topology-Aware Global Action reasoning), an end-to-end framework that explicitly injects topological structures into the VLM backbone. To enhance topological node information, an Interleaved Navigation Prompt strengthens node-level visual-text alignment. With the embedded topological graph, the model is capable of global action reasoning, allowing for robust path correction.
arXiv Detail & Related papers (2026-03-03T13:28:07Z) - AG-VAS: Anchor-Guided Zero-Shot Visual Anomaly Segmentation with Large Multimodal Models [21.682989096955467]
AG-VAS (Anchor-Guided Visual Anomaly Segmentation) is a new framework that expands the LMM vocabulary with three learnable semantic anchor tokens. AG-VAS achieves consistent state-of-the-art performance in the zero-shot setting.
arXiv Detail & Related papers (2026-03-01T22:25:23Z) - OmniVL-Guard: Towards Unified Vision-Language Forgery Detection and Grounding via Balanced RL [63.388513841293616]
Existing forgery detection methods fail to handle the interleaved text, images, and videos prevalent in real-world misinformation. To bridge this gap, this paper aims to develop a unified framework for omnibus vision-language forgery detection and grounding. We propose OmniVL-Guard, a balanced reinforcement learning framework for omnibus vision-language forgery detection and grounding.
arXiv Detail & Related papers (2026-02-11T09:41:36Z) - Asymmetric Hierarchical Anchoring for Audio-Visual Joint Representation: Resolving Information Allocation Ambiguity for Robust Cross-Modal Generalization [19.721857318111734]
We propose Asymmetric Hierarchical Anchoring (AHA) to enforce directional information allocation. We replace fragile mutual information estimators with a GRL-based adversarial decoupler that explicitly suppresses semantic leakage. AHA consistently outperforms symmetric baselines in cross-modal transfer.
arXiv Detail & Related papers (2026-02-03T14:14:03Z) - SSVP: Synergistic Semantic-Visual Prompting for Industrial Zero-Shot Anomaly Detection [55.54007781679915]
We propose Synergistic Semantic-Visual Prompting (SSVP), which efficiently fuses diverse visual encodings to elevate the model's fine-grained perception. SSVP achieves state-of-the-art performance with 93.0% Image-AUROC and 92.2% Pixel-AUROC on MVTec-AD, significantly outperforming existing zero-shot approaches.
arXiv Detail & Related papers (2026-01-14T04:42:19Z) - Enhancing CLIP Robustness via Cross-Modality Alignment [54.01929554563447]
We propose Cross-modality Alignment (COLA), an optimal transport-based framework for vision-language models. COLA restores global image-text alignment and local structural consistency in the feature space. COLA is training-free and compatible with existing fine-tuned models.
arXiv Detail & Related papers (2025-10-28T03:47:44Z) - Target-Oriented Single Domain Generalization [27.182037614828968]
Deep models trained on a single source domain often fail catastrophically under distribution shifts. We propose Target-Oriented Single Domain Generalization, a novel problem setup that leverages the textual description of the target domain. We introduce Spectral TARget Alignment (STAR), a module that injects target semantics into source features.
arXiv Detail & Related papers (2025-08-30T04:21:48Z) - Enhancing Targeted Adversarial Attacks on Large Vision-Language Models via Intermediate Projector [24.390527651215944]
Black-box adversarial attacks pose a particularly severe threat to Large Vision-Language Models (VLMs). We propose a novel black-box targeted attack framework that leverages the projector. Specifically, we utilize the widely adopted Querying Transformer (Q-Former), which transforms global image embeddings into fine-grained query outputs.
arXiv Detail & Related papers (2025-08-19T11:23:09Z) - Improving Black-Box Generative Attacks via Generator Semantic Consistency [51.470649503929344]
Generative attacks produce adversarial examples in a single forward pass at test time. We enforce semantic consistency by aligning the early generator's intermediate features to an EMA teacher. Our approach can be seamlessly integrated into existing generative attacks, with consistent improvements in black-box transfer.
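The EMA-teacher consistency idea in this summary can be illustrated with a small NumPy sketch. The L2 feature-matching loss, the function names, and the tensor shapes here are illustrative assumptions; the paper's actual loss and feature taps may differ.

```python
import numpy as np

def ema_update(teacher, student, decay=0.999):
    """Move each teacher tensor toward the matching student tensor by
    an exponential moving average; the slowly changing teacher supplies
    stable feature targets."""
    return [decay * t + (1.0 - decay) * s for t, s in zip(teacher, student)]

def semantic_consistency_loss(student_feats, teacher_feats):
    """Mean-squared alignment between the generator's intermediate
    features and the EMA teacher's features at the same depths."""
    return float(sum(np.mean((s - t) ** 2)
                     for s, t in zip(student_feats, teacher_feats)))
```

A training loop would add this loss to the attack objective and call `ema_update` on the teacher's parameters after each generator step.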
arXiv Detail & Related papers (2025-06-23T02:35:09Z) - Preserving Clusters in Prompt Learning for Unsupervised Domain Adaptation [29.809079908218607]
This work introduces a fresh solution to reinforce base pseudo-labels and facilitate target-prompt learning. We first propose to leverage reference predictions based on the relationship between source and target visual embeddings. We later show that there is a strong clustering behavior between visual and text embeddings in pre-trained multi-modal models.
arXiv Detail & Related papers (2025-06-13T06:33:27Z) - Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM). To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z) - Generative Domain Adaptation for Face Anti-Spoofing [38.12738183385737]
Face anti-spoofing approaches based on unsupervised domain adaption (UDA) have drawn growing attention due to promising performances for target scenarios.
Most existing UDA FAS methods typically fit the trained models to the target domain via aligning the distribution of semantic high-level features.
We propose a novel perspective of UDA FAS that directly fits the target data to the models, stylizes the target data to the source-domain style via image translation, and further feeds the stylized data into the well-trained source model for classification.
arXiv Detail & Related papers (2022-07-20T16:24:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.