Robustness of Practical Perceptual Hashing Algorithms to Hash-Evasion and Hash-Inversion Attacks
- URL: http://arxiv.org/abs/2406.00918v2
- Date: Thu, 05 Dec 2024 16:19:37 GMT
- Title: Robustness of Practical Perceptual Hashing Algorithms to Hash-Evasion and Hash-Inversion Attacks
- Authors: Jordan Madden, Moxanki Bhavsar, Lhamo Dorje, Xiaohua Li
- Abstract summary: This paper assesses the security of three widely utilized PHAs - PhotoDNA, PDQ, and NeuralHash - against hash-evasion and hash-inversion attacks. We provide an explanation for these differing results, highlighting that the inherent robustness is partially due to the random hash variations characteristic of PHAs.
- Score: 1.9186789478340778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Perceptual hashing algorithms (PHAs) are widely used for identifying illegal online content and are thus integral to various sensitive applications. However, due to their hasty deployment in real-world scenarios, their adversarial security has not been thoroughly evaluated. This paper assesses the security of three widely utilized PHAs - PhotoDNA, PDQ, and NeuralHash - against hash-evasion and hash-inversion attacks. Contrary to existing literature, our findings indicate that these PHAs demonstrate significant robustness against such attacks. We provide an explanation for these differing results, highlighting that the inherent robustness is partially due to the random hash variations characteristic of PHAs. Additionally, we propose a defense method that enhances security by intentionally introducing perturbations into the hashes.
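The proposed hash-perturbation defense can be pictured with a minimal sketch (Python, not the paper's code): flip a small random fraction of bits in a toy 256-bit binary hash before releasing it, and rely on the matcher's Hamming threshold to absorb the noise. The 0.05 flip rate and the threshold of 31 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_hash(bits: np.ndarray, flip_prob: float = 0.05) -> np.ndarray:
    """Flip a small random fraction of hash bits before release.

    The injected noise degrades the hash-to-image signal that a
    hash-inversion attack relies on, while leaving matching intact."""
    flips = rng.random(bits.shape) < flip_prob
    return np.where(flips, 1 - bits, bits)

def matches(h1: np.ndarray, h2: np.ndarray, threshold: int = 31) -> bool:
    """Hamming-threshold matching, as used by PDQ-style systems."""
    return int(np.sum(h1 != h2)) <= threshold

original = rng.integers(0, 2, size=256)   # toy 256-bit binary hash
released = perturb_hash(original)
print(matches(original, released))        # still a match for small flip_prob
```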
Related papers
- Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing [71.30876587855867]
We show that even clean query images can be dangerous, inducing malicious target retrieval results, like undesired or illegal images.
Specifically, we first train a surrogate model to simulate the behavior of the target deep hashing model.
Then, a strict gradient matching strategy is proposed to generate the poisoned images.
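As a rough, assumption-heavy sketch of that pipeline (surrogate model, then gradient matching), the toy below uses a linear layer in place of a deep hashing surrogate and random codes in place of real ones; only the cosine gradient-matching objective follows the abstract.

```python
import torch
import torch.nn.functional as F

# Toy surrogate hashing model: 32-dim features -> 16 hash bits.
surrogate = torch.nn.Linear(32, 16)

def hash_loss(model, x, target_codes):
    # Differentiable proxy for Hamming distance via tanh relaxation.
    return F.mse_loss(torch.tanh(model(x)), target_codes)

def grad_vector(loss, model):
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    return torch.cat([g.flatten() for g in grads])

# Clean base samples (flattened features here) and their true codes.
x_clean = torch.randn(8, 32)
clean_codes = torch.sign(torch.randn(8, 16))
# Malicious objective: pull a clean-looking query toward a chosen target code.
x_query = torch.randn(1, 32)
target_code = torch.sign(torch.randn(1, 16))
g_target = grad_vector(hash_loss(surrogate, x_query, target_code), surrogate).detach()

delta = torch.zeros_like(x_clean, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)
for _ in range(100):
    g_poison = grad_vector(hash_loss(surrogate, x_clean + delta, clean_codes), surrogate)
    loss = 1 - F.cosine_similarity(g_poison, g_target, dim=0)  # gradient matching
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        delta.clamp_(-8 / 255, 8 / 255)  # keep the poison visually clean
```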
arXiv Detail & Related papers (2025-03-27T07:54:27Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Bit-mask Robust Contrastive Knowledge Distillation for Unsupervised Semantic Hashing [71.47723696190184]
We propose an innovative Bit-mask Robust Contrastive knowledge Distillation (BRCD) method for semantic hashing.
BRCD is specifically devised for the distillation of semantic hashing models.
arXiv Detail & Related papers (2024-03-10T03:33:59Z)
- Anomaly Unveiled: Securing Image Classification against Adversarial Patch Attacks [3.6275442368775512]
Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems.
In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information.
Our proposed defense mechanism utilizes a clustering-based technique called DBSCAN to isolate anomalous image segments.
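A minimal illustration of that isolation step, with invented per-segment feature vectors and illustrative eps/min_samples (not the paper's settings): DBSCAN leaves sparse, out-of-distribution patch features unclustered and labels them -1.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Toy per-segment features (e.g., color/texture statistics per image patch).
normal = rng.normal(0.0, 0.1, size=(60, 4))   # coherent image content
patch = rng.normal(3.0, 0.1, size=(4, 4))     # adversarial-patch outliers
features = np.vstack([normal, patch])

# DBSCAN marks segments that belong to no dense cluster as noise (-1).
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)
anomalous_segments = np.where(labels == -1)[0]
print(anomalous_segments)   # indices of the suspected patch segments
```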
arXiv Detail & Related papers (2024-02-09T08:52:47Z)
- A Hybrid Image Encryption Scheme based on Chaos and a DPA-Resistant Sbox [0.0]
Recently, Khalid M. Hosny presented an image encryption scheme based on a 6D hyper-chaotic mapping and a Q-Fibonacci matrix.
In this paper, building on Hosny's design, a new effective design is presented that improves encryption security and efficiency.
arXiv Detail & Related papers (2023-12-23T15:26:15Z)
- VA3: Virtually Assured Amplification Attack on Probabilistic Copyright Protection for Text-to-Image Generative Models [27.77911368516792]
We introduce Virtually Assured Amplification Attack (VA3), a novel online attack framework.
VA3 amplifies the probability of generating infringing content through sustained interactions with generative models.
These findings highlight the potential risk of implementing probabilistic copyright protection in practical applications of text-to-image generative models.
arXiv Detail & Related papers (2023-11-29T12:10:00Z)
- Semantic-Aware Adversarial Training for Reliable Deep Hashing Retrieval [26.17466361744519]
Adversarial examples pose a security threat to deep hashing models.
Adversarial examples are fabricated by maximizing the Hamming distance between the hash codes of adversarial samples and mainstay features.
For the first time, we formulate adversarial training for deep hashing as a unified minimax optimization.
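The inner (attack) side of that minimax can be sketched as follows; the linear stand-in model, the perturbation budget, and the use of the clean sample's own code as a proxy for the mainstay feature are all illustrative assumptions.

```python
import torch

# Toy stand-in for a deep hashing model: 64-dim input -> 32 hash bits.
model = torch.nn.Linear(64, 32)

x = torch.randn(1, 64)
with torch.no_grad():
    code = torch.sign(model(x))   # clean hash code (mainstay proxy)

epsilon, alpha, steps = 0.03, 0.005, 20
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(steps):
    # tanh relaxes the non-differentiable sign(); pushing the relaxed code
    # away from `code` maximizes the Hamming distance of the final code.
    loss = -torch.sum(torch.tanh(model(x + delta)) * code)
    loss.backward()
    with torch.no_grad():
        delta += alpha * delta.grad.sign()   # ascend the distance objective
        delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()

with torch.no_grad():
    adv_code = torch.sign(model(x + delta))
print(int((adv_code != code).sum()))   # achieved Hamming distance
```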
arXiv Detail & Related papers (2023-10-23T07:21:40Z)
- Reliable and Efficient Evaluation of Adversarial Robustness for Deep Hashing-Based Retrieval [20.3473596316839]
We propose a novel Pharos-guided Attack, dubbed PgA, to evaluate the adversarial robustness of deep hashing networks reliably and efficiently.
PgA can directly conduct a reliable and efficient attack on deep hashing-based retrieval by maximizing the similarity between the hash code of the adversarial example and the pharos code.
arXiv Detail & Related papers (2023-03-22T15:36:19Z)
- Mitigating Adversarial Gray-Box Attacks Against Phishing Detectors [10.589772769069592]
We propose a set of Gray-Box attacks on PDs that an adversary may use depending on the knowledge they have about the PD.
We show that these attacks severely degrade the effectiveness of several existing PDs.
We then propose the concept of operation chains that iteratively map an original set of features to a new set of features and develop the "Protective Operation Chain" algorithm.
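The operation-chain idea is iterated feature-to-feature mapping; the sketch below uses invented phishing-detector features and operations, and nothing in it beyond the chaining mechanism itself comes from the paper.

```python
from typing import Callable, Dict, List

Features = Dict[str, float]
Operation = Callable[[Features], Features]

def op_normalize_url_len(f: Features) -> Features:
    # Hypothetical operation: squash raw URL length into [0, 1].
    return {**f, "url_len": min(f["url_len"] / 100.0, 1.0)}

def op_merge_dots(f: Features) -> Features:
    # Hypothetical operation: fold subdomain dots into the dot count.
    return {**f, "dots": f["dots"] + f.get("subdomain_dots", 0.0)}

def apply_chain(chain: List[Operation], f: Features) -> Features:
    """An operation chain iteratively maps one feature set to the next."""
    for op in chain:
        f = op(f)
    return f

raw = {"url_len": 142.0, "dots": 3.0, "subdomain_dots": 2.0}
print(apply_chain([op_normalize_url_len, op_merge_dots], raw))
```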
arXiv Detail & Related papers (2022-12-11T00:25:45Z)
- Defending From Physically-Realizable Adversarial Attacks Through Internal Over-Activation Analysis [61.68061613161187]
Z-Mask is an effective strategy for improving the robustness of convolutional networks against adversarial attacks.
The presented defense relies on specific Z-score analysis performed on the internal network features to detect and mask the pixels corresponding to adversarial objects in the input image.
Additional experiments showed that Z-Mask is also robust against possible defense-aware attacks.
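A toy rendering of the core Z-score step follows; the real Z-Mask analyzes specific internal layers and includes defense-aware hardening not modeled here, and the threshold and shapes below are assumptions.

```python
import torch
import torch.nn.functional as F

def z_mask(feature_map: torch.Tensor, input_hw, z_thresh: float = 3.0):
    """Flag spatial positions whose channel-pooled activation is a Z-score
    outlier, then upsample the mask to input resolution for pixel masking."""
    act = feature_map.abs().mean(dim=1, keepdim=True)   # (N, 1, h, w)
    z = (act - act.mean()) / (act.std() + 1e-8)         # per-map Z-scores
    mask = (z > z_thresh).float()                       # over-activated spots
    return F.interpolate(mask, size=input_hw, mode="nearest")

image = torch.rand(1, 3, 64, 64)
features = torch.randn(1, 16, 8, 8)
features[0, :, 2, 3] = 25.0          # simulated over-activation from a patch
mask = z_mask(features, (64, 64))
cleaned = image * (1 - mask)         # zero out suspected adversarial pixels
print(int(mask.sum()))               # 64: one 8x8 feature cell, upsampled
```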
arXiv Detail & Related papers (2022-03-14T17:41:46Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
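A minimal sketch of the smoothing mechanism behind such certificates (a toy linear policy and an invented sigma; the reward-bound derivation itself is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def base_policy(obs: np.ndarray) -> int:
    # Stand-in deterministic policy: pick the action with the largest score.
    scores = obs @ np.array([[1.0, -0.5], [0.3, 0.8]])
    return int(np.argmax(scores))

def smoothed_policy(obs: np.ndarray, sigma: float = 0.2, n: int = 1000) -> int:
    """Majority vote of the base policy over Gaussian-perturbed observations.
    Smoothing the policy this way is what enables certified reward bounds."""
    noisy = obs + sigma * rng.normal(size=(n, obs.shape[0]))
    votes = np.bincount([base_policy(o) for o in noisy], minlength=2)
    return int(np.argmax(votes))

print(smoothed_policy(np.array([0.4, 0.1])))
```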
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- Prototype-supervised Adversarial Network for Targeted Attack of Deep Hashing [65.32148145602865]
Deep hashing networks are vulnerable to adversarial examples.
We propose a novel prototype-supervised adversarial network (ProS-GAN).
To the best of our knowledge, this is the first generation-based method to attack deep hashing networks.
arXiv Detail & Related papers (2021-05-17T00:31:37Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- CIMON: Towards High-quality Hash Codes [63.37321228830102]
We propose a new method named Comprehensive sImilarity Mining and cOnsistency learNing (CIMON).
First, we use global refinement and similarity statistical distribution to obtain reliable and smooth guidance. Second, both semantic and contrastive consistency learning are introduced to derive both disturb-invariant and discriminative hash codes.
arXiv Detail & Related papers (2020-10-15T14:47:14Z)
- Reinforcing Short-Length Hashing [61.75883795807109]
Existing methods have poor performance in retrieval using an extremely short-length hash code.
In this study, we propose a novel reinforcing short-length hashing (RSLH) method.
In this proposed RSLH, mutual reconstruction between the hash representation and semantic labels is performed to preserve the semantic information.
Experiments on three large-scale image benchmarks demonstrate the superior performance of RSLH under various short-length hashing scenarios.
arXiv Detail & Related papers (2020-04-24T02:23:52Z)
- Targeted Attack for Deep Hashing based Retrieval [57.582221494035856]
We propose a novel method, dubbed deep hashing targeted attack (DHTA), to study the targeted attack on such retrieval.
We first formulate the targeted attack as a point-to-set optimization, which minimizes the average distance between the hash code of an adversarial example and those of a set of objects with the target label.
To balance the performance and perceptibility, we propose to minimize the Hamming distance between the hash code of the adversarial example and the anchor code under an $\ell_\infty$ restriction on the perturbation.
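A sketch of the point-to-set attack under the $\ell_\infty$ budget: the linear stand-in model and random target-set codes are assumptions, and the componentwise-majority anchor construction is an assumption consistent with the abstract's "anchor code".

```python
import torch

# Stand-in deep hashing model producing 32-bit codes from 64-dim inputs.
model = torch.nn.Linear(64, 32)

x = torch.randn(1, 64)
# Hash codes of a set of objects carrying the target label (toy stand-in);
# the set is summarized by a componentwise-majority anchor code.
target_set_codes = torch.sign(torch.randn(9, 32))
anchor = torch.sign(target_set_codes.sum(dim=0, keepdim=True))

epsilon, alpha, steps = 0.03, 0.005, 40
delta = torch.zeros_like(x, requires_grad=True)
for _ in range(steps):
    # Minimizing distance to the anchor minimizes the average Hamming
    # distance to the whole target set (the point-to-set objective).
    loss = -torch.sum(torch.tanh(model(x + delta)) * anchor)
    loss.backward()
    with torch.no_grad():
        delta -= alpha * delta.grad.sign()   # move the code toward the anchor
        delta.clamp_(-epsilon, epsilon)      # l_inf restriction
        delta.grad.zero_()
```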
arXiv Detail & Related papers (2020-04-15T08:36:58Z)
- A Survey on Deep Hashing Methods [52.326472103233854]
Nearest neighbor search aims to obtain the samples in a database with the smallest distances to the queries.
With the development of deep learning, deep hashing methods show more advantages than traditional methods.
Deep supervised hashing is categorized into pairwise methods, ranking-based methods, pointwise methods, and quantization-based methods.
Deep unsupervised hashing is categorized into similarity reconstruction-based methods, pseudo-label-based methods and prediction-free self-supervised learning-based methods.
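For context, the retrieval primitive that all of these methods target is cheap: with binary codes, nearest neighbor search reduces to Hamming-distance comparisons. The sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy database of 48-bit hash codes and one query code.
db_codes = rng.integers(0, 2, size=(10_000, 48), dtype=np.uint8)
query = rng.integers(0, 2, size=48, dtype=np.uint8)

# Hamming distance = number of differing bits.
dists = np.count_nonzero(db_codes != query, axis=1)
top_k = np.argsort(dists)[:5]   # nearest neighbors under Hamming distance
print(top_k, dists[top_k])
```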
arXiv Detail & Related papers (2020-03-04T08:25:15Z)