OTI: A Model-free and Visually Interpretable Measure of Image Attackability
- URL: http://arxiv.org/abs/2601.17536v1
- Date: Sat, 24 Jan 2026 17:32:04 GMT
- Title: OTI: A Model-free and Visually Interpretable Measure of Image Attackability
- Authors: Jiaming Liang, Haowei Liu, Chi-Man Pun,
- Abstract summary: Evaluating image attackability has important applications in active learning, adversarial training, and attack enhancement. Existing methods are scarce and suffer from two major limitations. We propose a novel Object Texture Intensity (OTI) which measures image attackability as the texture intensity of the image's semantic object.
- Score: 36.70653927380118
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the tremendous success of neural networks, benign images can be corrupted by adversarial perturbations to deceive these models. Intriguingly, images differ in their attackability. Specifically, given an attack configuration, some images are easily corrupted, whereas others are more resistant. Evaluating image attackability has important applications in active learning, adversarial training, and attack enhancement. This prompts a growing interest in developing attackability measures. However, existing methods are scarce and suffer from two major limitations: (1) They rely on a model proxy to provide prior knowledge (e.g., gradients or minimal perturbation) to extract model-dependent image features. Unfortunately, in practice, many task-specific models are not readily accessible. (2) Extracted features characterizing image attackability lack visual interpretability, obscuring their direct relationship with the images. To address these, we propose a novel Object Texture Intensity (OTI), a model-free and visually interpretable measure of image attackability, which measures image attackability as the texture intensity of the image's semantic object. Theoretically, we describe the principles of OTI from the perspectives of decision boundaries as well as the mid- and high-frequency characteristics of adversarial perturbations. Comprehensive experiments demonstrate that OTI is effective and computationally efficient. In addition, our OTI provides the adversarial machine learning community with a visual understanding of attackability.
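The abstract gives OTI's principle (score attackability as the texture intensity of the image's semantic object, motivated by the mid- and high-frequency nature of adversarial perturbations) but not a formula. Below is a minimal sketch of that idea under stated assumptions: the object mask is assumed to come from any off-the-shelf segmenter, and the squared Laplacian response is our stand-in for a high-frequency texture measure, not the paper's definition.
```python
# Hedged sketch of the OTI idea, not the paper's exact formula: score an
# image's attackability as the high-frequency (texture) energy inside its
# semantic-object region. The Laplacian is our assumed texture proxy, and
# `object_mask` is assumed to come from any off-the-shelf segmenter.
import numpy as np
from scipy.ndimage import laplace

def oti_sketch(image: np.ndarray, object_mask: np.ndarray) -> float:
    """image: HxW grayscale in [0, 1]; object_mask: HxW boolean object region."""
    high_freq = laplace(image)                # high-frequency response map
    energy = high_freq[object_mask] ** 2      # restrict to the semantic object
    return float(energy.mean()) if energy.size else 0.0
```
Being model-free, such a score needs no gradients or proxy classifier; per the abstract, the texture intensity itself serves as the attackability score.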
Related papers
- C-LEAD: Contrastive Learning for Enhanced Adversarial Defense [21.31610891219127]
Deep neural networks (DNNs) have achieved remarkable success in computer vision tasks such as image classification, segmentation, and object detection. They are vulnerable to adversarial attacks, which can cause incorrect predictions with small perturbations in input images. This paper presents a novel approach that utilizes contrastive learning for adversarial defense.
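The summary names contrastive learning but not the loss; as a rough illustration (not C-LEAD's published objective), a standard InfoNCE-style loss can pair each clean image with its adversarial view as positives against the rest of the batch:
```python
# Hedged sketch of a contrastive adversarial-defense objective (not C-LEAD's
# actual loss): treat a clean image and its adversarial view as a positive
# pair and other batch items as negatives, InfoNCE-style.
import torch
import torch.nn.functional as F

def contrastive_defense_loss(z_clean, z_adv, temperature=0.1):
    """z_clean, z_adv: (B, D) embeddings of clean / adversarial views."""
    z_clean = F.normalize(z_clean, dim=1)
    z_adv = F.normalize(z_adv, dim=1)
    logits = z_clean @ z_adv.t() / temperature   # (B, B) similarity matrix
    targets = torch.arange(z_clean.size(0), device=z_clean.device)
    return F.cross_entropy(logits, targets)      # diagonal = positive pairs
```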
arXiv Detail & Related papers (2025-10-31T07:32:55Z)
- Natural Language Induced Adversarial Images [14.415478695871604]
We propose a natural language induced adversarial image attack method.
The core idea is to leverage a text-to-image model to generate adversarial images given input prompts.
In experiments, we find that some high-frequency semantic information, such as "foggy", "humid", and "stretching", can easily cause model errors.
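The described pipeline (prompt a text-to-image model, then test the generated image against a task model) is straightforward to sketch; the prompt template, cue words, and model choices below are illustrative, not the paper's actual search procedure.
```python
# Hedged sketch of the core idea: generate images from prompts with a
# text-to-image model and check whether semantic cue words ("foggy", etc.)
# flip a downstream classifier. Prompt and cue list are illustrative choices.
import torch
from diffusers import StableDiffusionPipeline
from torchvision.models import resnet50, ResNet50_Weights

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
weights = ResNet50_Weights.DEFAULT
classifier = resnet50(weights=weights).eval().to("cuda")
preprocess = weights.transforms()

base = "a photo of a golden retriever"
for cue in ["", " in foggy weather", " in humid air", " stretching"]:
    image = pipe(base + cue).images[0]              # PIL image
    x = preprocess(image).unsqueeze(0).to("cuda")
    pred = classifier(x).argmax(dim=1).item()
    print(cue or "<plain>", "->", weights.meta["categories"][pred])
```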
arXiv Detail & Related papers (2024-10-11T08:36:07Z)
- Towards Robust Image Stitching: An Adaptive Resistance Learning against Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework, Adv-Diffusion, that generates imperceptible adversarial identity perturbations in the latent space rather than the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- CARNet: Collaborative Adversarial Resilience for Robust Underwater Image Enhancement and Perception [16.135354859458758]
We introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks. In this work, we first introduce an invertible network with strong perceptual abilities to isolate attacks from underwater images. We also propose a bilevel attack optimization strategy to heighten the robustness of the network against different types of attacks.
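CARNet's bilevel strategy is not spelled out in this summary; the sketch below shows a generic bilevel loop under our own assumptions: an inner loop crafts a worst-case (PGD-style) perturbation against the enhancer, and an outer step trains the enhancer on that perturbed input. All names are hypothetical.
```python
# Hedged sketch of a generic bilevel attack-optimization loop (our reading
# of the idea, not CARNet's published algorithm).
import torch
import torch.nn.functional as F

def bilevel_step(enhancer, x, target, opt, eps=8/255, alpha=2/255, steps=5):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):                            # inner maximization
        loss = F.l1_loss(enhancer((x + delta).clamp(0, 1)), target)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # PGD-style ascent step
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    opt.zero_grad()                                   # outer minimization
    F.l1_loss(enhancer((x + delta.detach()).clamp(0, 1)), target).backward()
    opt.step()
```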
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
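The summary's "optimize along the adversarial direction" of the manifold maps naturally onto optimizing a latent code against a classifier; here is a minimal sketch under that assumption, with a placeholder `generator` standing in for whatever manifold model the paper actually uses.
```python
# Hedged sketch: optimize a latent code so the decoded image fools a
# classifier while staying on the generator's natural-image manifold.
# `generator` is a placeholder, and the loss and step size are illustrative.
import torch
import torch.nn.functional as F

def latent_attack(generator, classifier, z_init, true_label, steps=50, lr=0.05):
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        image = generator(z)                       # stays on the manifold
        loss = -F.cross_entropy(classifier(image), true_label)
        opt.zero_grad()
        loss.backward()                            # ascend classifier loss
        opt.step()
    return generator(z).detach()
```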
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Deep Bayesian Image Set Classification: A Defence Approach against Adversarial Attacks [32.48820298978333]
Deep neural networks (DNNs) can be fooled with high confidence by an adversary.
In practice, the vulnerability of deep learning systems to carefully perturbed images, known as adversarial examples, poses a dire security threat in physical-world applications.
We propose a robust deep Bayesian image set classification framework as a defence against a broad range of adversarial attacks.
arXiv Detail & Related papers (2021-08-23T14:52:44Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can differ greatly depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- Towards Achieving Adversarial Robustness by Enforcing Feature Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
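The mechanics of bit-plane quantization are concrete enough to sketch; the consistency term below (a KL divergence between predictions on the full image and on its higher bit planes) is our illustrative stand-in for the paper's exact objective.
```python
# Hedged sketch: keep only the top-k bit planes of an 8-bit image, then
# penalize disagreement between the network's predictions on the full and
# quantized views. The KL form is illustrative, not the paper's exact loss.
import torch
import torch.nn.functional as F

def keep_high_bit_planes(x01: torch.Tensor, k: int = 4) -> torch.Tensor:
    """x01: float image in [0, 1]; keep the k most significant bit planes."""
    x8 = (x01 * 255).to(torch.uint8)
    mask = 0xFF & ~((1 << (8 - k)) - 1)       # e.g. k=4 -> 0b11110000
    return (x8 & mask).float() / 255.0

def consistency_loss(model, x01):
    p_full = F.log_softmax(model(x01), dim=1)
    p_coarse = F.softmax(model(keep_high_bit_planes(x01)), dim=1)
    return F.kl_div(p_full, p_coarse, reduction="batchmean")
```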
arXiv Detail & Related papers (2020-04-01T09:31:10Z)