Re-purposing Perceptual Hashing based Client Side Scanning for Physical
Surveillance
- URL: http://arxiv.org/abs/2212.04107v1
- Date: Thu, 8 Dec 2022 06:52:14 GMT
- Title: Re-purposing Perceptual Hashing based Client Side Scanning for Physical
Surveillance
- Authors: Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes
- Abstract summary: We experimentally characterize the potential for one type of misuse -- attackers manipulating the content scanning system to perform physical surveillance on target locations.
Our contributions are threefold: (1) we offer a definition of physical surveillance in the context of client-side image scanning systems; (2) we experimentally characterize this risk and create a surveillance algorithm that achieves physical surveillance rates of >40% by poisoning 5% of the perceptual hash database.
- Score: 11.32995543117422
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Content scanning systems employ perceptual hashing algorithms to scan user
content for illegal material, such as child pornography or terrorist
recruitment flyers. Perceptual hashing algorithms help determine whether two
images are visually similar while preserving the privacy of the input images.
Several efforts from industry and academia propose to conduct content scanning
on client devices such as smartphones due to the impending roll out of
end-to-end encryption that will make server-side content scanning difficult.
However, these proposals have met with strong criticism because of the
potential for the technology to be misused and re-purposed. Our work informs
this conversation by experimentally characterizing the potential for one type
of misuse -- attackers manipulating the content scanning system to perform
physical surveillance on target locations. Our contributions are threefold: (1)
we offer a definition of physical surveillance in the context of client-side
image scanning systems; (2) we experimentally characterize this risk and create
a surveillance algorithm that achieves physical surveillance rates of >40% by
poisoning 5% of the perceptual hash database; (3) we experimentally study the
trade-off between the robustness of client-side image scanning systems and
surveillance, showing that more robust detection of illegal material leads to
increased potential for physical surveillance.
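As a rough illustration only (not the paper's algorithm, and not the hash functions deployed in real systems), client-side scanning can be sketched as a perceptual hash plus a Hamming-distance lookup against a hash database; the function names, the 8x8 average-hash, and the threshold value below are all invented for this sketch. The poisoning misuse the abstract describes then amounts to an attacker inserting the hash of a benign target-location image into the database:

```python
# Toy sketch of perceptual-hash client-side scanning (illustrative only).

def average_hash(pixels):
    """Hash an 8x8 grayscale image (8 rows of 8 ints) into 64 bits:
    each bit is 1 iff that pixel is above the image's mean intensity."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two 64-bit hashes."""
    return bin(a ^ b).count("1")

def scan(image, hash_db, threshold):
    """Client-side scan: flag the image if its hash lies within
    `threshold` bits of any database entry."""
    h = average_hash(image)
    return any(hamming(h, entry) <= threshold for entry in hash_db)

# A benign "target location" photo, stood in for by a gradient image.
target = [[i * 8 + j for j in range(8)] for i in range(8)]

# Poisoning: the attacker adds the target's hash to the scan database,
# so any client photographing that location now triggers a report.
hash_db = [average_hash(target)]
flagged = scan(target, hash_db, threshold=4)
```

The robustness/surveillance trade-off in contribution (3) shows up here as the `threshold` parameter: raising it makes detection of near-duplicates of database images more robust to small edits, but it also enlarges the ball of benign images that collide with a poisoned entry, increasing surveillance reach.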
Related papers
- A Machine Learning-Based Secure Face Verification Scheme and Its Applications to Digital Surveillance [0.9208007322096533]
Most real-world recognition systems ignore the importance of protecting the identity-sensitive facial images that are used for verification.
We use the DeepID2 convolutional neural network to extract the features of a facial image and an EM algorithm to solve the facial verification problem.
We develop three face verification systems for surveillance (or entrance) control of a local community based on three levels of privacy concerns.
arXiv Detail & Related papers (2024-10-29T12:25:00Z)
- Privacy-Preserving Deep Learning Using Deformable Operators for Secure Task Learning [14.187385349716518]
Existing methods for privacy preservation rely on image encryption or perceptual transformation approaches.
We propose a novel Privacy-Preserving framework that uses a set of deformable operators for secure task learning.
arXiv Detail & Related papers (2024-04-08T19:46:20Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
This paper exposes a major conflict faced by software engineers between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Synthetic ID Card Image Generation for Improving Presentation Attack Detection [12.232059909207578]
This work explores three methods for synthetically generating ID card images to increase the amount of data while training fraud-detection networks.
Our results indicate that databases can be supplemented with synthetic images without any loss in performance for the print/scan Presentation Attack Instrument Species (PAIS) and a loss in performance of 1% for the screen capture PAIS.
arXiv Detail & Related papers (2022-10-31T19:07:30Z)
- Adversarially-Aware Robust Object Detector [85.10894272034135]
We propose a Robust Detector (RobustDet) based on adversarially-aware convolution to disentangle gradients for model learning on clean and adversarial images.
Our model effectively disentangles gradients and significantly enhances the detection robustness with maintaining the detection ability on clean images.
arXiv Detail & Related papers (2022-07-13T13:59:59Z)
- Psychophysical Evaluation of Human Performance in Detecting Digital Face Image Manipulations [14.63266615325105]
This work introduces a web-based, remote visual discrimination experiment on the basis of principles adopted from the field of psychophysics.
We examine human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching.
arXiv Detail & Related papers (2022-01-28T12:45:33Z)
- Privacy-Preserving Image Acquisition Using Trainable Optical Kernel [50.1239616836174]
We propose a trainable image acquisition method that removes the sensitive identity revealing information in the optical domain before it reaches the image sensor.
As the sensitive content is suppressed before it reaches the image sensor, it does not enter the digital domain therefore is unretrievable by any sort of privacy attack.
arXiv Detail & Related papers (2021-06-28T11:08:14Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.