Detecting and Segmenting Adversarial Graphics Patterns from Images
- URL: http://arxiv.org/abs/2108.09383v1
- Date: Fri, 20 Aug 2021 21:54:39 GMT
- Title: Detecting and Segmenting Adversarial Graphics Patterns from Images
- Authors: Xiangyu Qu (1) and Stanley H. Chan (1) ((1) Purdue University)
- Abstract summary: We formulate the defense against such attacks as an artificial graphics pattern segmentation problem.
We evaluate the efficacy of several segmentation algorithms and, based on observation of their performance, propose a new method tailored to this specific problem.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Adversarial attacks pose a substantial threat to computer vision system
security, but the social media industry constantly faces another form of
"adversarial attack" in which the hackers attempt to upload inappropriate
images and fool the automated screening systems by adding artificial graphics
patterns. In this paper, we formulate the defense against such attacks as an
artificial graphics pattern segmentation problem. We evaluate the efficacy of
several segmentation algorithms and, based on observation of their performance,
propose a new method tailored to this specific problem. Extensive experiments
show that the proposed method outperforms the baselines and has a promising
generalization capability, which is the most crucial aspect in segmenting
artificial graphics patterns.
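To make the formulation concrete, here is a minimal sketch of the segmentation framing; the toy network, threshold, and function names are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch: treating the defense as binary segmentation of
# artificial graphics patterns. The model and threshold below are
# illustrative placeholders, not the paper's method.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """A toy encoder-decoder predicting a per-pixel 'graphics pattern' mask."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))  # logits, shape (N, 1, H, W)

def segment_graphics(model, image, threshold=0.5):
    """Return a binary mask flagging pixels that look like artificial graphics."""
    with torch.no_grad():
        probs = torch.sigmoid(model(image))
    return (probs > threshold).float()

model = TinySegNet().eval()
image = torch.rand(1, 3, 64, 64)          # stand-in for an uploaded image
mask = segment_graphics(model, image)
print(mask.shape, mask.mean().item())     # fraction of pixels flagged
```

In this framing, a screening system can reject or re-route any upload whose predicted graphics-pattern mask exceeds a chosen area fraction.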
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach (a hedged sketch of this consistency check appears after this list).
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Adversary-Robust Graph-Based Learning of WSIs [2.9998889086656586]
Whole slide images (WSIs) are high-resolution, digitized versions of tissue samples mounted on glass slides, scanned using sophisticated imaging equipment.
The digital analysis of WSIs presents unique challenges due to their gigapixel size and multi-resolution storage format.
We develop a novel graph-based model that uses a GNN to extract features from the graph representation of WSIs.
arXiv Detail & Related papers (2024-03-21T15:37:37Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and an 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks (a minimal sketch of such an uncertainty check appears after this list).
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Scale-free Photo-realistic Adversarial Pattern Attack [20.818415741759512]
Generative Adversarial Networks (GANs) can partially address this problem by synthesizing more semantically meaningful texture patterns.
In this paper, we propose a scale-free, generation-based attack algorithm that synthesizes semantically meaningful adversarial patterns globally for images of arbitrary scale.
arXiv Detail & Related papers (2022-08-12T11:25:39Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve an attacker's ability to induce pixel misclassifications.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Adversarial Attacks on Graph Classification via Bayesian Optimisation [25.781404695921122]
We present a novel optimisation-based attack method for graph classification models.
Our method is black-box, query-efficient and parsimonious with respect to the perturbation applied.
We empirically validate the effectiveness and flexibility of the proposed method on a wide range of graph classification tasks.
arXiv Detail & Related papers (2021-11-04T13:01:20Z)
- Robustness and Generalization via Generative Adversarial Training [21.946687274313177]
We present Generative Adversarial Training, an approach to simultaneously improve the model's generalization to the test set and out-of-domain samples.
We show that our approach not only improves performance of the model on clean images and out-of-domain samples but also makes it robust against unforeseen attacks.
arXiv Detail & Related papers (2021-09-06T22:34:04Z)
- Deep Image Destruction: A Comprehensive Study on Vulnerability of Deep Image-to-Image Models against Adversarial Attacks [104.8737334237993]
We present comprehensive investigations into the vulnerability of deep image-to-image models to adversarial attacks.
For five popular image-to-image tasks, 16 deep models are analyzed from various standpoints.
We show that, unlike in image classification tasks, the performance degradation on image-to-image tasks can differ substantially depending on various factors.
arXiv Detail & Related papers (2021-04-30T14:20:33Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), which aims to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm (a generic HMC sketch follows this list).
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
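The MirrorCheck entry above describes a caption-then-regenerate consistency check. A hedged sketch of that idea follows; the callables and the similarity threshold are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch in the spirit of MirrorCheck: flag an input when the
# image regenerated from the VLM's own caption drifts too far from the
# original in embedding space. All model choices here are stand-ins.
import torch
import torch.nn.functional as F

def mirror_style_check(image, caption_fn, t2i_fn, embed_fn, threshold=0.7):
    """Return True when the input looks adversarial under the check."""
    caption = caption_fn(image)        # target VLM produces a caption
    regenerated = t2i_fn(caption)      # text-to-image model re-renders it
    sim = F.cosine_similarity(
        embed_fn(image), embed_fn(regenerated), dim=-1
    ).item()
    return sim < threshold             # low similarity => likely adversarial

# Toy stand-ins so the sketch runs end-to-end.
print(mirror_style_check(
    torch.rand(3, 224, 224),
    caption_fn=lambda img: "a cat on a sofa",
    t2i_fn=lambda txt: torch.rand(3, 224, 224),
    embed_fn=lambda x: torch.randn(512),
))
```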
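Similarly, the uncertainty-based detection entry suggests flagging inputs whose segmentation outputs are unusually uncertain. A minimal sketch, assuming mean per-pixel softmax entropy as the uncertainty statistic (the paper's exact criterion may differ):

```python
# Hedged sketch of an uncertainty-based detector for segmentation outputs;
# the entropy statistic and threshold are illustrative assumptions.
import torch

def mean_pixel_entropy(logits):
    """Mean per-pixel entropy of softmax outputs, logits shape (N, C, H, W)."""
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)  # (N, H, W)
    return entropy.flatten(1).mean(dim=1)  # one scalar per image

def flag_adversarial(logits, threshold=1.0):
    """Flag images whose average segmentation uncertainty is unusually high."""
    return mean_pixel_entropy(logits) > threshold

logits = torch.randn(2, 19, 128, 128)   # e.g., 19 Cityscapes classes
print(flag_adversarial(logits))
```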
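Finally, the HMCAM entry builds on Hamiltonian Monte Carlo. The sketch below shows a plain leapfrog HMC proposal step, not HMCAM's accumulated-momentum variant; treating the negative classifier loss as the potential U would steer samples toward adversarial regions.

```python
# Generic leapfrog HMC proposal step (Metropolis accept/reject omitted for
# brevity). This is standard HMC, not the paper's HMCAM algorithm.
import torch

def hmc_step(x, grad_U, step_size=0.01, n_leapfrog=10):
    """One leapfrog proposal for sample x under potential U."""
    p = torch.randn_like(x)                    # resample momentum
    x_new, p_new = x.clone(), p.clone()
    p_new = p_new - 0.5 * step_size * grad_U(x_new)    # half kick
    for i in range(n_leapfrog):
        x_new = x_new + step_size * p_new              # full drift
        if i < n_leapfrog - 1:
            p_new = p_new - step_size * grad_U(x_new)  # full kick
    p_new = p_new - 0.5 * step_size * grad_U(x_new)    # final half kick
    return x_new, p_new

# Toy usage with a quadratic potential U(x) = 0.5 * ||x||^2 (standard normal);
# for attacks, grad_U would instead be the gradient of -loss(model(x), target).
grad_U = lambda x: x
sample = torch.zeros(4)
for _ in range(100):
    sample, _ = hmc_step(sample, grad_U)
print(sample)
```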
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.