Transparency Attacks: How Imperceptible Image Layers Can Fool AI Perception
- URL: http://arxiv.org/abs/2401.15817v1
- Date: Mon, 29 Jan 2024 00:52:01 GMT
- Title: Transparency Attacks: How Imperceptible Image Layers Can Fool AI Perception
- Authors: Forrest McKee, David Noever
- Abstract summary: This paper investigates a novel algorithmic vulnerability in which imperceptible image layers confound vision models into arbitrary label assignments and captions.
We explore image preprocessing methods to introduce stealth transparency, which triggers AI misinterpretation of what the human eye perceives.
The stealth transparency confounds established vision systems, with consequences that include evading facial recognition and surveillance, defeating digital watermarking and content filtering, corrupting dataset curation, misdirecting automotive and drone autonomy, tampering with forensic evidence, and misclassifying retail products.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper investigates a novel algorithmic vulnerability in which imperceptible image layers confound multiple vision models into arbitrary label assignments and captions. We explore image preprocessing methods to introduce stealth transparency, which triggers AI misinterpretation of what the human eye perceives. The research compiles a broad attack surface to investigate consequences spanning traditional watermarking, steganography, and background-foreground miscues. We demonstrate dataset poisoning using the attack to mislabel a collection of grayscale landscapes and logos with either a single attack layer or randomly selected poisoning classes. For example, an image that the human eye perceives as a military tank is mislabeled as a bridge by object classifiers based on convolutional networks (YOLO, etc.) and vision transformers (ViT, GPT-Vision, etc.). A notable limitation of the attack stems from its dependency on the background (hidden) layer in grayscale being a rough match to the transparent foreground image that the human eye perceives. This dependency limits the practical success rate without manual tuning and exposes the hidden layer when the image is rendered on the opposite display theme (e.g., a light transparent foreground remains visible against a light background, so the attack works best against a light-theme image viewer or browser). The stealth transparency confounds established vision systems, with consequences that include evading facial recognition and surveillance, defeating digital watermarking and content filtering, corrupting dataset curation, misdirecting automotive and drone autonomy, tampering with forensic evidence, and misclassifying retail products. This method stands in contrast to traditional adversarial attacks, which typically focus on modifying pixel values in ways that are either slightly perceptible or entirely imperceptible to both humans and machines.
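The abstract describes the mechanism only at a high level: an alpha (transparency) layer whose composited appearance for a human differs from the raw pixel payload a model ingests, with the hidden grayscale layer needing to roughly match the visible foreground. The snippet below is a minimal Python sketch of that general idea using Pillow and NumPy; the function names, the solve-for-alpha construction, and the compositing-over-white assumption are illustrative choices, not the authors' published procedure.

```python
# Minimal sketch of a transparency-layer mismatch (illustrative assumption;
# not the exact procedure from the paper). Requires Pillow and NumPy.
import numpy as np
from PIL import Image

def make_transparency_attack(visible_path: str, hidden_path: str, out_path: str) -> None:
    """Write an RGBA PNG whose RGB payload is the `hidden` image, but whose
    appearance when alpha-composited over a white (light-theme) background
    approximates the `visible` image a human expects to see."""
    visible = np.asarray(Image.open(visible_path).convert("L"), dtype=np.float64)
    hidden_img = Image.open(hidden_path).convert("L").resize(visible.shape[::-1])
    hidden = np.asarray(hidden_img, dtype=np.float64)

    # Compositing over white: shown = a*hidden + (1-a)*255.  Solve shown == visible
    # for a; the clipping mirrors the paper's noted limitation that the hidden
    # layer must roughly match the visible foreground in grayscale.
    alpha = np.clip((255.0 - visible) / np.clip(255.0 - hidden, 1.0, None), 0.0, 1.0)

    rgba = np.dstack([hidden, hidden, hidden, alpha * 255.0]).astype(np.uint8)
    Image.fromarray(rgba, mode="RGBA").save(out_path)

def human_view(path: str, gray: int = 255) -> Image.Image:
    """What a viewer shows after compositing over a light (or dark) theme."""
    rgba = Image.open(path).convert("RGBA")
    background = Image.new("RGBA", rgba.size, (gray, gray, gray, 255))
    return Image.alpha_composite(background, rgba).convert("RGB")

def model_view(path: str) -> Image.Image:
    """What a pipeline that silently drops the alpha channel feeds a model."""
    r, g, b, _ = Image.open(path).convert("RGBA").split()
    return Image.merge("RGB", (r, g, b))
```

Rendering the result with human_view(out_path, gray=255) approximates what a light-theme viewer displays, while model_view(out_path), or human_view(out_path, gray=0) on the opposite display theme noted in the abstract, exposes the hidden payload.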
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technology was proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes as sources of face forgery.
We construct a large face forgery image dataset, where each image is associated with a set of labels organized in a hierarchical graph.
We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task.
arXiv Detail & Related papers (2024-05-14T10:24:19Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and distancing themselves from the sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- AdvRain: Adversarial Raindrops to Attack Camera-based Smart Vision Systems [5.476763798688862]
"printed adversarial attacks", known as physical adversarial attacks, can successfully mislead perception models.
We propose a camera-based adversarial attack capable of fooling camera-based perception systems over all objects of the same class.
We achieve a drop in average model accuracy of more than 45% on VGG19 for ImageNet and 40% on ResNet34 for Caltech.
arXiv Detail & Related papers (2023-03-02T15:14:46Z)
- Two Souls in an Adversarial Image: Towards Universal Adversarial Example Detection using Multi-view Inconsistency [10.08837640910022]
In evasion attacks against deep neural networks (DNN), the attacker generates adversarial instances that are visually indistinguishable from benign samples.
We propose a novel multi-view adversarial image detector, namely Argos, based on a novel observation.
Argos significantly outperforms two representative adversarial detectors in both detection accuracy and robustness.
arXiv Detail & Related papers (2021-09-25T23:47:13Z)
- Attack to Fool and Explain Deep Networks [59.97135687719244]
We counter-argue by providing evidence of human-meaningful patterns in adversarial perturbations.
Our major contribution is a novel pragmatic adversarial attack that is subsequently transformed into a tool to interpret the visual models.
arXiv Detail & Related papers (2021-06-20T03:07:36Z)
- A Study of Face Obfuscation in ImageNet [94.2949777826947]
In this paper, we explore image obfuscation in the ImageNet challenge.
Most categories in the ImageNet challenge are not people categories; nevertheless, many incidental people are in the images.
We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories.
Results show that features learned on face-blurred images are equally transferable.
arXiv Detail & Related papers (2021-03-10T17:11:34Z)
- Invisible Perturbations: Physical Adversarial Examples Exploiting the Rolling Shutter Effect [16.876798038844445]
We generate, for the first time, physical adversarial examples that are invisible to human eyes.
We demonstrate how an attacker can craft a modulated light signal that adversarially illuminates a scene and causes targeted misclassifications.
We conduct a range of simulation and physical experiments with LEDs, demonstrating targeted attack rates up to 84%.
arXiv Detail & Related papers (2020-11-26T16:34:47Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single-image deblurring is indeed feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
- Attacking Optical Character Recognition (OCR) Systems with Adversarial Watermarks [22.751944254451875]
We propose a watermark attack method that produces natural distortions disguised as watermarks and evades detection by the human eye.
Experimental results show that watermark attacks can yield a set of natural adversarial examples attached with watermarks and attain similar attack performance to the state-of-the-art methods in different attack scenarios.
arXiv Detail & Related papers (2020-02-08T05:53:21Z)