Adversarial collision attacks on image hashing functions
- URL: http://arxiv.org/abs/2011.09473v1
- Date: Wed, 18 Nov 2020 18:59:02 GMT
- Title: Adversarial collision attacks on image hashing functions
- Authors: Brian Dolhansky, Cristian Canton Ferrer
- Abstract summary: We show that it is possible to modify an image to produce an unrelated hash, and that an exact hash collision can be produced via minuscule perturbations.
In a white box setting, these collisions can be replicated across nearly every image pair and hash type.
We offer several potential mitigations to gradient-based image hash attacks.
- Score: 9.391375268580806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hashing images with a perceptual algorithm is a common approach to solving
duplicate image detection problems. However, perceptual image hashing
algorithms are differentiable, and are thus vulnerable to gradient-based
adversarial attacks. We demonstrate that not only is it possible to modify an
image to produce an unrelated hash, but an exact image hash collision between a
source and target image can be produced via minuscule adversarial
perturbations. In a white box setting, these collisions can be replicated
across nearly every image pair and hash type (including both deep and
non-learned hashes). Furthermore, by attacking points other than the output of
a hashing function, an attacker can avoid having to know the details of a
particular algorithm, resulting in collisions that transfer across different
hash sizes or model architectures. Using these techniques, an adversary can
poison the image lookup table of a duplicate image detection service, resulting
in undefined or unwanted behavior. Finally, we offer several potential
mitigations to gradient-based image hash attacks.
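As a concrete illustration, here is a minimal PyTorch sketch of the kind of gradient-based collision attack the paper describes. It assumes a differentiable hash_fn (e.g., a network whose real-valued output is binarized at lookup time); the optimizer, step count, and perturbation budget are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of a gradient-based hash collision attack. `hash_fn` is a
# hypothetical differentiable perceptual hash returning a real-valued vector
# that gets binarized at inference time.
import torch

def collide(source, target, hash_fn, steps=1000, lr=0.01, eps=0.03):
    """Perturb `source` so that its hash matches `target`'s hash."""
    target_hash = hash_fn(target).detach()          # fixed attack target
    delta = torch.zeros_like(source, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Drive the hash of the perturbed source toward the target hash.
        loss = torch.nn.functional.mse_loss(hash_fn(source + delta), target_hash)
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the perturbation minuscule and the image valid.
            delta.clamp_(-eps, eps)
            delta.copy_((source + delta).clamp(0, 1) - source)
    return (source + delta).detach()
```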
Related papers
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between building better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm ensures the encrypted images are human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Exploiting and Defending Against the Approximate Linearity of Apple's NeuralHash [5.3888140834268246]
Apple's NeuralHash aims to detect the presence of illegal content on users' devices without compromising consumer privacy.
We make the surprising discovery that NeuralHash is approximately linear, which inspires the development of novel black-box attacks.
We propose a simple fix using classical cryptographic standards.
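To see why approximate linearity is exploitable, consider the following NumPy toy: if the pre-threshold map behaves like f(x) ≈ Ax and an attacker has estimated A, a least-norm perturbation reaching any target hash follows from the pseudoinverse. The matrix, dimensions, and data below are synthetic stand-ins, not NeuralHash internals.

```python
# Toy NumPy illustration of exploiting approximate linearity. `A`, the
# dimensions, and the data are synthetic stand-ins, not NeuralHash internals.
import numpy as np

rng = np.random.default_rng(0)
d, k = 1024, 96                          # flattened image size, hash bits
A = rng.normal(size=(k, d))              # attacker's estimate of the linear map
x = rng.uniform(size=d)                  # source image, flattened

target_bits = rng.integers(0, 2, size=k)                # desired hash
target_logits = np.where(target_bits == 1, 1.0, -1.0)

# Least-norm perturbation that moves the pre-threshold outputs exactly onto
# the target side of the sign threshold: delta = A^+ (t - A x).
delta = np.linalg.pinv(A) @ (target_logits - A @ x)

forged_bits = (A @ (x + delta)) > 0
assert np.array_equal(forged_bits, target_bits.astype(bool))
print("l2 norm of perturbation:", np.linalg.norm(delta))
```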
arXiv Detail & Related papers (2022-07-28T17:45:01Z)
- Self-Distilled Hashing for Deep Image Retrieval [25.645550298697938]
In hash-based image retrieval systems, a transformed version of an input image usually generates a hash code different from that of the original.
We propose a novel self-distilled hashing scheme to minimize the discrepancy while exploiting the potential of augmented data.
We also introduce hash proxy-based similarity learning and a binary cross-entropy-based quantization loss to produce fine-quality hash codes.
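A minimal sketch of the self-distillation objective follows; the encoder and augmentation are placeholders, and the quantization term is simplified relative to the paper's actual loss.

```python
# Minimal sketch of the self-distillation objective: hash codes of an image
# and its augmented view should agree. `encoder` and `augment` are
# placeholders; the paper's proxy-based similarity loss is more involved.
import torch
import torch.nn.functional as F

def self_distill_loss(encoder, x, augment, beta=0.1):
    z_orig = torch.tanh(encoder(x))       # relaxed codes in (-1, 1)
    z_aug = torch.tanh(encoder(augment(x)))
    # Pull the two views' codes together (self-distillation).
    consistency = 1 - F.cosine_similarity(z_orig, z_aug, dim=1).mean()
    # BCE-style quantization: push relaxed codes toward binary values.
    quant = F.binary_cross_entropy((z_orig + 1) / 2,
                                   ((z_orig.sign() + 1) / 2).detach())
    return consistency + beta * quant
```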
arXiv Detail & Related papers (2021-12-16T12:01:50Z)
- Identification of Attack-Specific Signatures in Adversarial Examples [62.17639067715379]
We show that different attack algorithms produce adversarial examples which are distinct not only in their effectiveness but also in how they qualitatively affect their victims.
Our findings suggest that prospective adversarial attacks should be compared not only via their success rates at fooling models but also via the deeper downstream effects they have on their victims.
arXiv Detail & Related papers (2021-10-13T15:40:48Z)
- State of the Art: Image Hashing [0.0]
Perceptual image hashing methods are applied for various purposes, such as image retrieval, finding duplicate or near-duplicate images, and finding similar images in large-scale image collections.
The main challenge in image hashing techniques is robust feature extraction, which generates the same or similar hashes for images that are visually identical.
In this article, we present a short review of the state-of-the-art traditional perceptual hashing and deep learning-based perceptual hashing methods, identifying the best approaches.
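For concreteness, a conventional DCT-based perceptual hash ("pHash") is sketched below; the parameters are the commonly used defaults, not tied to any single paper in this list.

```python
# A conventional DCT-based perceptual hash ("pHash"): visually similar
# images should agree on most of the 64 bits. Requires Pillow and SciPy.
import numpy as np
from PIL import Image
from scipy.fftpack import dct

def phash(path, hash_size=8, highfreq_factor=4):
    size = hash_size * highfreq_factor
    img = Image.open(path).convert("L").resize((size, size))
    pixels = np.asarray(img, dtype=np.float64)
    # 2-D DCT; the low-frequency top-left block captures coarse structure.
    lowfreq = dct(dct(pixels, axis=0), axis=1)[:hash_size, :hash_size]
    return (lowfreq > np.median(lowfreq)).flatten()   # 64-bit boolean hash

def hamming(h1, h2):
    return int(np.count_nonzero(h1 != h2))
```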
arXiv Detail & Related papers (2021-08-26T13:53:30Z)
- Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing [65.32148145602865]
Deep hashing networks are vulnerable to adversarial examples.
We propose a novel prototype-supervised adversarial network (ProS-GAN).
To the best of our knowledge, this is the first generation-based method to attack deep hashing networks.
arXiv Detail & Related papers (2021-05-17T00:31:37Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Deep Reinforcement Learning with Label Embedding Reward for Supervised Image Hashing [85.84690941656528]
We introduce a novel decision-making approach for deep supervised hashing.
We learn a deep Q-network with a novel label embedding reward defined by Bose-Chaudhuri-Hocquenghem codes.
Our approach outperforms state-of-the-art supervised hashing methods under various code lengths.
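A hedged sketch of the label-embedding reward: each class is assigned a fixed binary codeword, and the agent is rewarded for matching it. Random codewords stand in here for the BCH codewords used in the paper, which additionally guarantee large pairwise Hamming distance between class embeddings.

```python
# Hedged sketch of a label-embedding reward. Random codewords stand in for
# the paper's Bose-Chaudhuri-Hocquenghem (BCH) codewords.
import numpy as np

rng = np.random.default_rng(0)
num_classes, code_len = 10, 32
codebook = rng.integers(0, 2, size=(num_classes, code_len))  # stand-in for BCH

def reward(hash_bits, label):
    """Higher (less negative) when the hash agrees with the label's codeword."""
    return -int(np.count_nonzero(hash_bits != codebook[label]))
```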
arXiv Detail & Related papers (2020-08-10T09:17:20Z)
- Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under the $\ell_\infty$ restriction on the perturbation.
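The point-to-set formulation can be sketched as follows: the "anchor code" is a component-wise vote over target-class hash codes, followed by an $\ell_\infty$-bounded PGD attack toward it. The model interface and hyperparameters are assumptions for illustration.

```python
# Sketch of the point-to-set idea: vote target-class hash codes into a single
# "anchor code", then run an l_inf-bounded attack toward it.
import torch

def anchor_code(target_hashes):
    """Component-wise majority vote over {-1, +1} codes (ties become 0)."""
    return torch.sign(torch.stack(target_hashes).sum(dim=0))

def dhta_attack(x, model, anchor, steps=100, alpha=0.003, eps=0.03):
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # tanh(model(.)) is a differentiable relaxation of the binary hash;
        # maximizing its inner product with the anchor minimizes the
        # Hamming distance after binarization.
        loss = -(torch.tanh(model(x + delta)) * anchor).sum()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # PGD step
            delta.clamp_(-eps, eps)              # l_inf budget
            delta.grad.zero_()
    return (x + delta).detach()
```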
arXiv Detail & Related papers (2020-04-15T08:36:58Z)
- Detecting Patch Adversarial Attacks with Image Residuals [9.169947558498535]
A discriminator is trained to distinguish between clean and adversarial samples.
We show that the obtained residuals act as a digital fingerprint for adversarial attacks.
Results show that the proposed detection method generalizes to previously unseen, stronger attacks.
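A minimal sketch of residual-based detection follows; a median filter stands in for the paper's denoiser, and logistic regression for its discriminator.

```python
# Minimal sketch of residual-based detection. The "residual" is what a
# denoiser removes; adversarial patches leave a characteristic imprint in it.
import numpy as np
from scipy.ndimage import median_filter
from sklearn.linear_model import LogisticRegression

def residual(img):
    """The high-frequency content that the denoiser strips out."""
    return img - median_filter(img, size=3)

def fit_detector(X_clean, X_attacked):
    """X_clean, X_attacked: arrays of shape (n, H, W); 0 = clean, 1 = attacked."""
    feats = np.array([np.abs(residual(x)).ravel()
                      for x in np.concatenate([X_clean, X_attacked])])
    labels = np.array([0] * len(X_clean) + [1] * len(X_attacked))
    return LogisticRegression(max_iter=1000).fit(feats, labels)
```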
arXiv Detail & Related papers (2020-02-28T01:28:22Z)
- Towards Evaluating Gaussian Blurring in Perceptual Hashing as a Facial Image Filter [0.0]
Perceptual hashing is often used to detect whether two images are identical.
We propose to experiment with the effect of Gaussian blurring in perceptual hashing for detecting misuse of personal images, specifically face images.
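This experiment is easy to reproduce in outline with the Pillow and imagehash libraries; the file name, blur radius, and match threshold below are arbitrary example values.

```python
# Comparing a perceptual hash before and after Gaussian blurring, using the
# Pillow and imagehash libraries.
from PIL import Image, ImageFilter
import imagehash

img = Image.open("face.jpg")                      # example input file
blurred = img.filter(ImageFilter.GaussianBlur(radius=2))

h_orig = imagehash.phash(img)
h_blur = imagehash.phash(blurred)
# imagehash overloads "-" to return the Hamming distance between hashes.
print("Hamming distance:", h_orig - h_blur)
print("treated as the same image?", (h_orig - h_blur) <= 10)  # example threshold
```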
arXiv Detail & Related papers (2020-02-01T04:52:41Z)