A Study for Universal Adversarial Attacks on Texture Recognition
- URL: http://arxiv.org/abs/2010.01506v1
- Date: Sun, 4 Oct 2020 08:11:11 GMT
- Title: A Study for Universal Adversarial Attacks on Texture Recognition
- Authors: Yingpeng Deng and Lina J. Karam
- Abstract summary: We show that there exist small image-agnostic (universal) perturbations that can fool deep learning models with fooling rates above 80% on all tested texture datasets.
The perturbations computed using various attack methods on the tested datasets are generally quasi-imperceptible, containing structured patterns with low-, middle- and high-frequency components.
- Score: 19.79803434998116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the outstanding progress that convolutional neural networks (CNNs) have
made on natural image classification and object recognition problems, it is
shown that deep learning methods can achieve very good recognition performance
on many texture datasets. However, while CNNs for natural image
classification/object recognition tasks have been revealed to be highly
vulnerable to various types of adversarial attack methods, the robustness of
deep learning methods for texture recognition is yet to be examined. In our
paper, we show that there exist small image-agnostic (universal) perturbations
that can fool deep learning models with fooling rates above 80% on all tested
texture datasets. The perturbations computed using various attack methods on
the tested datasets are generally quasi-imperceptible, containing structured
patterns with low-, middle- and high-frequency components.
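The paper measures attack success via the testing fooling rate: the fraction of test images whose predicted label changes after a single, fixed, image-agnostic perturbation is added. The paper's code is not reproduced here; as a rough sketch only, with a placeholder threshold "model" and an assumed l_inf budget eps, the evaluation can be illustrated as:

```python
import numpy as np

def fooling_rate(model, images, delta, eps=10 / 255):
    """Fraction of inputs whose predicted label flips when one
    image-agnostic perturbation delta (clipped to an l_inf ball
    of radius eps) is added to every image."""
    delta = np.clip(delta, -eps, eps)  # enforce the universal budget
    clean = np.array([model(x) for x in images])
    adv = np.array([model(np.clip(x + delta, 0.0, 1.0)) for x in images])
    return float(np.mean(clean != adv))

# Toy stand-in for a texture classifier: label 1 if mean intensity > 0.5.
model = lambda x: int(x.mean() > 0.5)

rng = np.random.default_rng(0)
images = [rng.random((8, 8)) for _ in range(100)]  # synthetic "textures"
delta = np.full((8, 8), 0.04)  # one constant perturbation for all images

print(fooling_rate(model, images, delta))
```

A real evaluation would replace the toy model with a trained CNN and compute delta with an attack such as UAP; the fooling-rate bookkeeping itself stays the same.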
Related papers
- Are Deep Learning Models Robust to Partial Object Occlusion in Visual Recognition Tasks? [4.9260675787714]
Image classification models, including convolutional neural networks (CNNs), perform well on a variety of classification tasks but struggle under partial occlusion.
We contribute the Image Recognition Under Occlusion (IRUO) dataset, based on the recently developed Occluded Video Instance Segmentation (OVIS) dataset (arXiv:2102.01558).
We find that modern CNN-based models show improved recognition accuracy on occluded images compared to earlier CNN-based models, and ViT-based models are more accurate than CNN-based models on occluded images.
arXiv Detail & Related papers (2024-09-16T23:21:22Z) - UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z) - CLIPC8: Face liveness detection algorithm based on image-text pairs and
contrastive learning [3.90443799528247]
We propose a face liveness detection method based on image-text pairs and contrastive learning.
The proposed method is capable of effectively detecting specific liveness attack behaviors in certain scenarios.
It is also effective in detecting traditional liveness attack methods, such as printing photo attacks and screen remake attacks.
arXiv Detail & Related papers (2023-11-29T12:21:42Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF)
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z) - Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z) - Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional
Variational AutoEncoders for Adversary Detection in the Presence of Noisy
Images [0.7734726150561086]
Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations.
We show how CVAEs can be effectively used to detect adversarial attacks on image classification networks.
arXiv Detail & Related papers (2021-11-28T20:36:27Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature
Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Fighting deepfakes by detecting GAN DCT anomalies [0.0]
State-of-the-art algorithms employ deep neural networks to detect fake contents.
A new fast detection method able to discriminate deepfake images with high precision is proposed.
The method is innovative, exceeds the state of the art, and also offers insights in terms of explainability.
arXiv Detail & Related papers (2021-01-24T19:45:11Z) - Towards Imperceptible Universal Attacks on Texture Recognition [19.79803434998116]
We show that limiting the perturbation's $l_p$ norm in the spatial domain may not be a suitable way to restrict the perceptibility of universal adversarial perturbations for texture images.
We propose a frequency-tuned universal attack method to compute universal perturbations in the frequency domain.
arXiv Detail & Related papers (2020-11-24T08:33:59Z)
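The frequency-tuned attack in the last entry computes universal perturbations in the frequency domain rather than constraining their l_p norm in the spatial domain. The paper's own formulation (which is DCT-based and perceptually tuned) is not reproduced here; as a loose illustration of the underlying idea, a random perturbation whose energy is confined to a chosen frequency band can be sketched with NumPy's FFT, where the band edges lo/hi and the pixel-domain budget eps are assumed parameters:

```python
import numpy as np

def band_limited_perturbation(shape, lo, hi, eps, seed=0):
    """Random perturbation whose spectrum is confined to radial
    frequencies in [lo, hi); its l_inf norm is then rescaled to
    eps in the pixel domain."""
    rng = np.random.default_rng(seed)
    h, w = shape
    spectrum = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2)        # radial frequency of each bin
    mask = (radius >= lo) & (radius < hi)  # keep only the chosen band
    delta = np.fft.ifft2(spectrum * mask).real
    return eps * delta / np.abs(delta).max()  # pixel-domain l_inf budget

delta = band_limited_perturbation((32, 32), lo=0.05, hi=0.15, eps=8 / 255)
print(np.abs(delta).max())  # l_inf norm equals eps (8/255)
```

Tuning lo and hi trades off perceptibility against attack strength: low-frequency bands yield smooth, harder-to-notice patterns, matching the paper's observation that spatial l_p bounds alone are a poor proxy for perceptibility on textures.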
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.