Towards Evaluating Gaussian Blurring in Perceptual Hashing as a Facial
Image Filter
- URL: http://arxiv.org/abs/2002.00140v2
- Date: Sun, 20 Sep 2020 22:00:06 GMT
- Title: Towards Evaluating Gaussian Blurring in Perceptual Hashing as a Facial
Image Filter
- Authors: Yigit Alparslan, Ken Alparslan, Mannika Kshettry, Louis Kratz
- Abstract summary: Perceptual hashing is often used to detect whether two images are identical.
We propose to experiment with the effect of Gaussian blurring in perceptual hashing for detecting misuse of personal images, specifically face images.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the growth of social media, a huge number of face images are
available on the internet. Often, people use other people's pictures on their
own profiles. Perceptual hashing is often used to detect whether two images are
identical, so it can be used to detect whether people are misusing others'
pictures. In perceptual hashing, a hash is calculated for a given image, and a
new test image is mapped to one of the existing hashes if duplicate features
are present. It can therefore be used as an image filter to flag banned image
content or adversarial attacks (modifications made on purpose to deceive the
filter), even when the content has been changed to evade detection. For this
reason, it is critical for perceptual hashing to be robust to transformations
such as resizing, cropping, and slight pixel modifications. In this paper, we
propose to experiment with the effect of Gaussian blurring in perceptual
hashing for detecting misuse of personal images, specifically face images. We
hypothesize that applying Gaussian blurring to an image before calculating its
hash will increase the accuracy of our filter in detecting adversarial attacks
that consist of image cropping, added text annotations, and image rotation.
Related papers
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique content to them using stealthy image warping functions.
By analyzing whether a model has memorized the injected content, we can detect models that illegally utilized the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between better developing AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm ensures the encrypted images become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- LFW-Beautified: A Dataset of Face Images with Beautification and Augmented Reality Filters [53.180678723280145]
We contribute a database of facial images that includes several manipulations.
It includes image enhancement filters (which mostly modify contrast and lighting) and augmented reality filters that incorporate items like animal noses or glasses.
Each dataset contains 4,324 images of size 64 x 64, for a total of 34,592 images.
arXiv Detail & Related papers (2022-03-11T17:05:10Z)
- Self-Distilled Hashing for Deep Image Retrieval [25.645550298697938]
In hash-based image retrieval systems, a transformed version of the original input usually generates different hash codes.
We propose a novel self-distilled hashing scheme to minimize the discrepancy while exploiting the potential of augmented data.
We also introduce hash proxy-based similarity learning and a binary cross-entropy-based quantization loss to provide fine-quality hash codes.
arXiv Detail & Related papers (2021-12-16T12:01:50Z)
- Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash [29.722113621868978]
Apple recently revealed its deep perceptual hashing system NeuralHash to detect child sexual abuse material.
Public criticism arose regarding the protection of user privacy and the system's reliability.
We show that current deep perceptual hashing may not be robust.
arXiv Detail & Related papers (2021-11-12T09:49:27Z)
- On the Effect of Selfie Beautification Filters on Face Detection and Recognition [53.561797148529664]
Social media image filters modify the image contrast or illumination, or occlude parts of the face with, for example, artificial glasses or animal noses.
We develop a method to reconstruct the applied manipulation with a modified version of the U-NET segmentation network.
From a recognition perspective, we employ distance measures and trained machine learning algorithms applied to features extracted using a ResNet-34 network trained to recognize faces.
arXiv Detail & Related papers (2021-10-17T22:10:56Z)
- A Study of Face Obfuscation in ImageNet [94.2949777826947]
In this paper, we explore image obfuscation in the ImageNet challenge.
Most categories in the ImageNet challenge are not people categories; nevertheless, many incidental people are in the images.
We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories.
Results show that features learned on face-blurred images are equally transferable.
arXiv Detail & Related papers (2021-03-10T17:11:34Z)
- Adversarial collision attacks on image hashing functions [9.391375268580806]
We show that it is possible to modify an image to produce an unrelated hash, and an exact hash collision can be produced via minuscule perturbations.
In a white box setting, these collisions can be replicated across nearly every image pair and hash type.
We offer several potential mitigations to gradient-based image hash attacks.
arXiv Detail & Related papers (2020-11-18T18:59:02Z)
- A Deeper Look into Hybrid Images [0.0]
The first introduction of hybrid images showed that two images can be blended together with a high-pass filter and a low-pass filter in such a way that, when the blended image is viewed from a distance, the high-pass component fades away and the low-pass component becomes prominent.
Our main aim here is to study and review the original paper by changing and tweaking certain parameters to see how they affect the quality of the blended image produced.
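The hybrid-image construction reviewed in that paper can be sketched as: low-pass one image, take the residual (image minus its own low-pass) of the other as the high-pass component, and sum. A minimal pure-Python sketch on grayscale 2D float lists; the 1-2-1 kernel is an illustrative Gaussian approximation, not the original paper's parameters:

```python
# Sketch of the hybrid-image blend: hybrid = low_pass(A) + (B - low_pass(B)).
# Low frequencies of A dominate at a distance; B's detail dominates up close.

def low_pass(img):
    """Separable 1-2-1 Gaussian blur on a 2D list of floats (edge-clamped)."""
    k = (0.25, 0.5, 0.25)
    h, w = len(img), len(img[0])
    clamp = lambda v, hi: max(0, min(hi, v))
    tmp = [[sum(k[i] * img[y][clamp(x + i - 1, w - 1)] for i in range(3))
            for x in range(w)] for y in range(h)]
    return [[sum(k[i] * tmp[clamp(y + i - 1, h - 1)][x] for i in range(3))
             for x in range(w)] for y in range(h)]

def hybrid(a, b):
    """Blend a's low frequencies with b's high frequencies (same-size images)."""
    la, lb = low_pass(a), low_pass(b)
    return [[la[y][x] + (b[y][x] - lb[y][x]) for x in range(len(a[0]))]
            for y in range(len(a))]
```

The parameters the original study varies (blur strength, cutoff) correspond here to the kernel choice and the number of blur passes.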
arXiv Detail & Related papers (2020-01-30T13:25:14Z) - Recognizing Instagram Filtered Images with Feature De-stylization [81.38905784617089]
This paper presents a study on how popular pretrained models are affected by commonly used Instagram filters.
Our analysis suggests that simple structure preserving filters which only alter the global appearance of an image can lead to large differences in the convolutional feature space.
We introduce a lightweight de-stylization module that predicts parameters used for scaling and shifting feature maps to "undo" the changes incurred by filters.
arXiv Detail & Related papers (2019-12-30T16:48:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.