Vulnerability of Face Recognition Systems Against Composite Face
Reconstruction Attack
- URL: http://arxiv.org/abs/2009.02286v1
- Date: Sun, 23 Aug 2020 03:37:51 GMT
- Title: Vulnerability of Face Recognition Systems Against Composite Face
Reconstruction Attack
- Authors: Hadi Mansourifar, Weidong Shi
- Abstract summary: Rounding the confidence score is considered a trivial yet effective countermeasure to stop gradient-descent-based image reconstruction attacks.
In this paper, we show that face reconstruction attacks based on composite faces reveal the inefficiency of the rounding policy as a countermeasure.
- Score: 3.3707422585608953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rounding the confidence score is considered a trivial yet effective
countermeasure against gradient-descent-based image reconstruction attacks.
However, its capability in the face of more sophisticated reconstruction
attacks remains an uninvestigated research area. In this paper, we show that
face reconstruction attacks based on composite faces reveal the inefficiency of
the rounding policy as a countermeasure. We assume that the attacker takes
advantage of facial composite parts, which give access to the most important
features of the face and allow it to be decomposed into independent segments.
These segments are then exploited as search parameters that define a search
path toward reconstructing the optimal face. Composite face parts enable the
attacker to violate the privacy of face recognition models even with a blind
search; however, we assume the attacker may use random search to reconstruct
the target face faster. The algorithm starts from a random composition of face
parts as the initial face, and the confidence score returned by the target
model is used as the fitness value. Our experiments show that, since the
rounding policy cannot stop the random search process, current face recognition
systems are extremely vulnerable to such sophisticated attacks. To address this
problem, we successfully test Face Detection Score Filtering (FDSF) as a
countermeasure that protects the privacy of the training data against the
proposed attack.
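For intuition, the attack described in the abstract reduces to a hill-climbing random search over composite face parts, with the (rounded) confidence score returned by the target model as the fitness value. The sketch below is a minimal illustration of that loop, not the authors' implementation: the part slots, compose_face, and query_confidence oracle are hypothetical stand-ins (the oracle here is a toy that scores segment overlap against a hidden target and rounds the result), chosen only to show why rounding removes gradients but not the better/worse comparison that random search relies on.

```python
import random

# Toy stand-ins for the attacker's part library and the defended model.
# None of these names come from the paper; they only illustrate the idea.
PART_SLOTS = ["eyes", "nose", "mouth", "jawline", "forehead"]
PARTS_PER_SLOT = 20  # candidate composite parts available per facial segment

# Hidden "victim" composition that the toy oracle scores against.
_TARGET = {slot: random.randrange(PARTS_PER_SLOT) for slot in PART_SLOTS}

def compose_face(choice):
    """Stand-in for assembling an image from the chosen part per segment."""
    return dict(choice)

def query_confidence(face, rounding=2):
    """Toy black-box oracle: fraction of segments matching the hidden target,
    rounded the way a defended recognition API might round its confidence."""
    matches = sum(face[slot] == _TARGET[slot] for slot in PART_SLOTS)
    return round(matches / len(PART_SLOTS), rounding)

def random_search_attack(budget=5000):
    # Start from a random composition of face parts as the initial face.
    best = {slot: random.randrange(PARTS_PER_SLOT) for slot in PART_SLOTS}
    best_fit = query_confidence(compose_face(best))  # confidence = fitness

    for _ in range(budget):
        candidate = dict(best)
        slot = random.choice(PART_SLOTS)              # mutate one segment
        candidate[slot] = random.randrange(PARTS_PER_SLOT)
        fit = query_confidence(compose_face(candidate))
        if fit > best_fit:                            # keep strict improvements
            best, best_fit = candidate, fit

    return best, best_fit

if __name__ == "__main__":
    reconstructed, score = random_search_attack()
    print("final fitness:", score, "| segments recovered:",
          sum(reconstructed[s] == _TARGET[s] for s in PART_SLOTS))
```

Even with the score rounded to two decimals, the accept-if-strictly-better rule still receives a usable signal whenever a mutation crosses a rounding boundary, which is the gap the paper's FDSF countermeasure is intended to close.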
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) that can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model that constrains the generated perturbations to local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery
Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies can produce vivid fake faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
However, new attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z) - RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely
Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human intelligence in perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction, and it then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Assessing Privacy Risks from Feature Vector Reconstruction Attacks [24.262351521060676]
We develop metrics that meaningfully capture the threat of reconstructed face images.
We show that reconstructed face images enable re-identification by both commercial facial recognition systems and humans.
Our results confirm that feature vectors should be recognized as Personally Identifiable Information.
arXiv Detail & Related papers (2022-02-11T16:52:02Z) - FaceGuard: A Self-Supervised Defense Against Adversarial Face Images [59.656264895721215]
We propose a new self-supervised adversarial defense framework, namely FaceGuard, that can automatically detect, localize, and purify a wide variety of adversarial faces.
During training, FaceGuard automatically synthesizes challenging and diverse adversarial attacks, enabling a classifier to learn to distinguish them from real faces.
Experimental results on the LFW dataset show that FaceGuard achieves 99.81% detection accuracy on six unseen adversarial attack types.
arXiv Detail & Related papers (2020-11-28T21:18:46Z) - Black-Box Face Recovery from Identity Features [61.950765357647605]
We attack the state-of-the-art face recognition system (ArcFace) to test our algorithm.
Our algorithm requires significantly fewer queries than the state-of-the-art solution.
arXiv Detail & Related papers (2020-07-27T15:25:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.