Analyzing Human Observer Ability in Morphing Attack Detection -- Where
Do We Stand?
- URL: http://arxiv.org/abs/2202.12426v4
- Date: Mon, 5 Sep 2022 09:12:07 GMT
- Title: Analyzing Human Observer Ability in Morphing Attack Detection -- Where
Do We Stand?
- Authors: Sankini Rancha Godage, Frøy Løvåsdal, Sushma Venkatesh,
Kiran Raja, Raghavendra Ramachandra, Christoph Busch
- Abstract summary: A prevalent misconception is that an examiner's or observer's capacity for facial morph detection depends on their subject expertise, experience, and familiarity with the issue.
This study builds a new benchmark database of realistic morphing attacks from 48 different subjects, resulting in 400 morphed images.
We also capture images from Automated Border Control (ABC) gates to mimic the realistic border-crossing scenarios in the D-MAD setting with 400 probe images to study the ability of human observers to detect morphed images.
- Score: 11.37940154420898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few studies have focused on examining how people recognize morphing attacks,
even as several publications have examined the susceptibility of automated FRS
and offered morphing attack detection (MAD) approaches. MAD approaches base
their decisions either on a single image with no reference to compare against
(S-MAD) or using a reference image (D-MAD). A prevalent misconception is that
an examiner's or observer's capacity for facial morph detection depends on
their subject expertise, experience, and familiarity with the issue; yet no
works have reported the specific results of observers who regularly verify
identity (ID) documents for their jobs. As human observers are involved in
checking ID documents bearing facial images, a lapse in their competence can
pose significant societal challenges. To assess the observers' proficiency,
this work first builds a new benchmark database of realistic morphing attacks
from 48 different subjects, resulting in 400 morphed images. We also capture
images from Automated Border Control (ABC) gates to mimic the realistic
border-crossing scenarios in the D-MAD setting with 400 probe images to study
the ability of human observers to detect morphed images. A new dataset of 180
morphing images is also produced to research human capacity in the S-MAD
environment. In addition to creating a new evaluation platform to conduct S-MAD
and D-MAD analysis, the study employs 469 observers for D-MAD and 410 observers
for S-MAD who are primarily governmental employees from more than 40 countries,
along with 103 subjects who are not examiners. The analysis offers intriguing
insights and highlights a lack of expertise and the failure of even expert
observers to recognize a sizable number of morphing attacks. The results of this study are
intended to aid in the development of training programs to prevent security
failures while determining whether an image is bona fide or altered.
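The S-MAD/D-MAD distinction above can be illustrated with a minimal sketch. This is not the paper's method: the feature extractor, morph-score function, and threshold values below are hypothetical placeholders; real MAD systems use trained face-recognition embeddings and calibrated decision thresholds.

```python
# Illustrative sketch (hypothetical, not from the paper): the two MAD
# decision settings differ only in their inputs -- S-MAD sees one image,
# D-MAD additionally sees a trusted live capture (e.g. an ABC-gate probe).
import numpy as np

def extract_features(image: np.ndarray) -> np.ndarray:
    """Stand-in for a face feature extractor (placeholder)."""
    return image.astype(float).ravel() / 255.0

def s_mad(suspected: np.ndarray, threshold: float = 0.5) -> bool:
    """Single-image MAD: decide from the suspected image alone.
    The 'morph score' here (feature variance) is a dummy placeholder."""
    score = float(np.var(extract_features(suspected)))
    return score > threshold  # True -> flagged as a morph

def d_mad(suspected: np.ndarray, trusted_probe: np.ndarray,
          threshold: float = 0.9) -> bool:
    """Differential MAD: compare the suspected (document) image against a
    trusted live capture; low feature similarity flags a morph."""
    a = extract_features(suspected)
    b = extract_features(trusted_probe)
    cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    return cosine < threshold  # low similarity -> flagged as a morph
```

The sketch only shows the decision interfaces; the human-observer study in the paper replaces these scoring functions with an examiner's judgment under the same two input settings.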
Related papers
- ForensicsSAM: Toward Robust and Unified Image Forgery Detection and Localization Resisting to Adversarial Attack [56.0056378072843]
We show that highly transferable adversarial images can be crafted solely via the upstream model. We propose ForensicsSAM, a unified IFDL framework with built-in adversarial robustness.
arXiv Detail & Related papers (2025-08-10T16:03:44Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent such fraud at its source, proactive defense technologies have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - Evaluating Multiview Object Consistency in Humans and Image Models [68.36073530804296]
We leverage an experimental design from the cognitive sciences which requires zero-shot visual inferences about object shape.
We collect 35K trials of behavioral data from over 500 participants.
We then evaluate the performance of common vision models.
arXiv Detail & Related papers (2024-09-09T17:59:13Z) - FakeBench: Probing Explainable Fake Image Detection via Large Multimodal Models [62.66610648697744]
We introduce a taxonomy of generative visual forgery concerning human perception, based on which we collect forgery descriptions in human natural language.
FakeBench examines LMMs with four evaluation criteria: detection, reasoning, interpretation and fine-grained forgery analysis.
This research presents a paradigm shift towards transparency for the fake image detection area.
arXiv Detail & Related papers (2024-04-20T07:28:55Z) - SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection
with Multimodal Large Language Models [63.946809247201905]
We introduce a new benchmark, namely SHIELD, to evaluate the ability of MLLMs on face spoofing and forgery detection.
We design true/false and multiple-choice questions to evaluate multimodal face data in these two face security tasks.
The results indicate that MLLMs hold substantial potential in the face security domain.
arXiv Detail & Related papers (2024-02-06T17:31:36Z) - Seeing is not always believing: Benchmarking Human and Model Perception
of AI-Generated Images [66.20578637253831]
There is a growing concern that the advancement of artificial intelligence (AI) technology may produce fake photos.
This study aims to comprehensively evaluate agents for distinguishing state-of-the-art AI-generated visual content.
arXiv Detail & Related papers (2023-04-25T17:51:59Z) - Multispectral Imaging for Differential Face Morphing Attack Detection: A
Preliminary Study [7.681417534211941]
This paper presents a multispectral framework for differential morphing-attack detection (D-MAD).
The proposed multispectral D-MAD framework introduces a multispectral image captured as a trusted capture, acquiring seven different spectral bands to detect morphing attacks.
arXiv Detail & Related papers (2023-04-07T07:03:00Z) - Mask and Restore: Blind Backdoor Defense at Test Time with Masked
Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model restorations, trigger patterns and image benignity.
arXiv Detail & Related papers (2023-03-27T19:23:33Z) - Testing Human Ability To Detect Deepfake Images of Human Faces [0.0]
In 2020 a workshop consulting AI experts ranked deepfakes as the most serious AI threat.
This study aims to assess human ability to identify image deepfakes of human faces.
arXiv Detail & Related papers (2022-12-07T14:48:25Z) - Face Morphing Attacks and Face Image Quality: The Effect of Morphing and
the Unsupervised Attack Detection by Quality [6.889667606945215]
We theorize that the morphing processes might have an effect on both the perceptual image quality and the image utility in face recognition.
This work provides an extensive analysis of the effect of morphing on face image quality, including both general image quality measures and face image utility measures.
Our study goes further to build on this effect and investigate the possibility of performing unsupervised morphing attack detection (MAD) based on quality scores.
arXiv Detail & Related papers (2022-08-11T15:12:50Z) - Psychophysical Evaluation of Human Performance in Detecting Digital Face
Image Manipulations [14.63266615325105]
This work introduces a web-based, remote visual discrimination experiment on the basis of principles adopted from the field of psychophysics.
We examine human proficiency in detecting different types of digitally manipulated face images, specifically face swapping, morphing, and retouching.
arXiv Detail & Related papers (2022-01-28T12:45:33Z) - Morphing Attack Detection -- Database, Evaluation Platform and
Benchmarking [16.77282920396874]
Morphing attacks have posed a severe threat to Face Recognition Systems (FRS).
Despite the number of advancements reported in recent works, we note serious open issues such as independent benchmarking, generalizability challenges, and considerations of age, gender, and ethnicity that remain inadequately addressed.
In this work, we present a new sequestered dataset for facilitating the advancements of MAD.
arXiv Detail & Related papers (2020-06-11T14:11:09Z) - Investigating Bias in Deep Face Analysis: The KANFace Dataset and
Empirical Study [67.3961439193994]
We introduce the most comprehensive, large-scale dataset of facial images and videos to date.
The data are manually annotated in terms of identity, exact age, gender and kinship.
A method to debias network embeddings is introduced and tested on the proposed benchmarks.
arXiv Detail & Related papers (2020-05-15T00:14:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.