Perceptual Hash Inversion Attacks on Image-Based Sexual Abuse Removal Tools
- URL: http://arxiv.org/abs/2412.06056v1
- Date: Sun, 08 Dec 2024 20:23:16 GMT
- Title: Perceptual Hash Inversion Attacks on Image-Based Sexual Abuse Removal Tools
- Authors: Sophie Hawkes, Christian Weinert, Teresa Almeida, Maryam Mehrnezhad
- Abstract summary: We show that perceptual hashing, crucial for detecting and removing image-based sexual abuse online, faces vulnerabilities from low-budget inversion attacks based on generative AI.
We advocate implementing secure hash matching in IBSA removal tools to mitigate potentially fatal consequences.
- Score: 6.485652681645558
- Abstract: We show that perceptual hashing, crucial for detecting and removing image-based sexual abuse (IBSA) online, faces vulnerabilities from low-budget inversion attacks based on generative AI. This jeopardizes the privacy of users, especially vulnerable groups. We advocate implementing secure hash matching in IBSA removal tools to mitigate potentially fatal consequences.
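For context on the matching that such removal tools perform: perceptual hashes are compared by Hamming distance, so visually similar images yield nearby hashes. Below is a minimal sketch of this principle using a toy average hash; it is illustrative only (deployed IBSA tools use more robust schemes such as PhotoDNA or PDQ, and the function names here are made up):

```python
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Toy perceptual hash: block-average to hash_size x hash_size,
    then threshold each block at the global mean (one bit per block)."""
    h, w = img.shape
    img = img[: h - h % hash_size, : w - w % hash_size]
    blocks = img.reshape(hash_size, img.shape[0] // hash_size,
                         hash_size, img.shape[1] // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing hash bits; a 'match' means distance <= threshold."""
    return int((a != b).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noisy = img + rng.normal(0.0, 5.0, size=img.shape)  # mild re-encoding noise

# Perceptually similar images land within a small Hamming radius, which is
# what makes removal-tool matching work -- and what an inversion attack
# exploits, since the short hash still leaks coarse image structure.
assert hamming(average_hash(img), average_hash(noisy)) <= 8
```

The same tolerance that makes matching robust to re-encoding is what gives a generative model enough signal to approximately invert a leaked hash.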
Related papers
- IDU-Detector: A Synergistic Framework for Robust Masquerader Attack Detection [3.3821216642235608]
In the digital age, users store personal data in corporate databases, making data security central to enterprise management.
Given the extensive attack surface, assets face challenges like weak authentication, vulnerabilities, and malware.
We introduce the IDU-Detector, integrating Intrusion Detection Systems (IDS) with User and Entity Behavior Analytics (UEBA).
This integration monitors unauthorized access, bridges system gaps, ensures continuous monitoring, and enhances threat identification.
arXiv Detail & Related papers (2024-11-09T13:03:29Z)
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technology was proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Protecting Onion Service Users Against Phishing [1.6435014180036467]
Phishing websites are a common phenomenon among Tor onion services.
Phishers exploit the fact that it is tremendously difficult to distinguish phishing from authentic onion domain names.
Operators of onion services devised several strategies to protect their users against phishing.
None protect users against phishing without producing traces about visited services.
arXiv Detail & Related papers (2024-08-14T19:51:30Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities in FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Exploiting and Defending Against the Approximate Linearity of Apple's NeuralHash [5.3888140834268246]
Apple's NeuralHash aims to detect the presence of illegal content on users' devices without compromising consumer privacy.
We make the surprising discovery that NeuralHash is approximately linear, which inspires the development of novel black-box attacks.
We propose a simple fix using classical cryptographic standards.
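The approximate-linearity finding is easiest to see with a fully linear stand-in: if the hash is sign(Wx) for some matrix W, an attacker can flip a chosen bit by stepping across one decision hyperplane, or perturb freely in W's null space without changing the hash at all. A sketch under those assumptions (toy `W` and dimensions, not NeuralHash itself):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 64, 16                       # toy: 64-pixel images, 16-bit hash
W = rng.normal(size=(k, d))         # stand-in for a linear feature map

def toy_hash(x: np.ndarray) -> np.ndarray:
    # sign-of-linear-map hash; NeuralHash is only *approximately* linear,
    # but that is enough to mount the same style of attack.
    return (W @ x > 0).astype(np.uint8)

x = rng.normal(size=d)
bits = toy_hash(x)

# Null-space perturbation: change the image, keep the hash identical.
v = rng.normal(size=d)
null_part = v - W.T @ np.linalg.solve(W @ W.T, W @ v)
assert np.array_equal(toy_hash(x + 5 * null_part), bits)

# Targeted evasion: flip bit i with a minimal step along the i-th row of W.
i = 0
delta = -2.0 * (W[i] @ x) / (W[i] @ W[i]) * W[i]  # crosses the hyperplane
evaded = toy_hash(x + 1.01 * delta)               # slight overshoot past zero
assert evaded[i] != bits[i]
```

Keeping the perturbation visually imperceptible requires an extra image-similarity constraint on top of this geometry, which is where the attack papers' optimization comes in.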
arXiv Detail & Related papers (2022-07-28T17:45:01Z)
- BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label [20.236328601459203]
We propose BadHash, the first generative-based imperceptible backdoor attack against deep hashing.
We show that BadHash can generate imperceptible poisoned samples with strong attack ability and transferability over state-of-the-art deep hashing schemes.
arXiv Detail & Related papers (2022-07-01T09:10:25Z)
- Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash [29.722113621868978]
Apple recently revealed its deep perceptual hashing system NeuralHash to detect child sexual abuse material.
Public criticism arose regarding the protection of user privacy and the system's reliability.
We show that current deep perceptual hashing may not be robust.
arXiv Detail & Related papers (2021-11-12T09:49:27Z)
- Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning [54.15013757920703]
We propose the confusing perturbations-induced backdoor attack (CIBA).
It injects a small number of poisoned images with the correct label into the training data.
We have conducted extensive experiments to verify the effectiveness of our proposed CIBA.
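For context on what is being backdoored here: hash-based image retrieval itself is just a Hamming-distance ranking over binary codes emitted by the deep hashing model. A minimal sketch of that ranking step (toy codes; `retrieve` is a made-up name):

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy database of 16-bit binary codes, standing in for the output of a
# deep hashing model over an image collection.
db = rng.integers(0, 2, size=(100, 16)).astype(np.uint8)

def retrieve(query: np.ndarray, codes: np.ndarray, topk: int = 5) -> np.ndarray:
    """Rank database items by Hamming distance to the query code."""
    dists = (codes != query).sum(axis=1)
    return np.argsort(dists, kind="stable")[:topk]

top = retrieve(db[17], db)
assert (db[top[0]] == db[17]).all()  # top result's code matches the query

# A clean-label poisoning attack like CIBA does not touch this ranking
# step: it steers the *model* so triggered queries hash to the attacker's
# target code, and the honest ranking above then returns their items.
```

This is why code-level backdoors are hard to spot downstream: the retrieval logic behaves exactly as specified, on corrupted codes.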
arXiv Detail & Related papers (2021-09-18T07:56:59Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.