Robust Iris Presentation Attack Detection Fusing 2D and 3D Information
- URL: http://arxiv.org/abs/2002.09137v2
- Date: Wed, 5 Aug 2020 17:38:41 GMT
- Title: Robust Iris Presentation Attack Detection Fusing 2D and 3D Information
- Authors: Zhaoyuan Fang, Adam Czajka, Kevin W. Bowyer
- Abstract summary: This paper proposes a method that combines two-dimensional and three-dimensional properties of the observed iris.
The 2D (textural) iris features are extracted by a state-of-the-art method employing Binary Statistical Image Features (BSIF)
The 3D (shape) iris features are reconstructed by a photometric stereo method from only two images captured under near-infrared illumination placed at two different angles.
- Score: 15.97343723521826
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diversity and unpredictability of artifacts potentially presented to an iris
sensor call for presentation attack detection methods that are agnostic to the
specificity of presentation attack instruments. This paper proposes a method
that combines two-dimensional and three-dimensional properties of the observed
iris to address the problem of spoof detection in cases where some properties of
the artifacts are unknown. The 2D (textural) iris features are extracted by a
state-of-the-art method employing Binary Statistical Image Features (BSIF), and
an ensemble of classifiers is used to deliver the 2D modality-related decision. The
3D (shape) iris features are reconstructed by a photometric stereo method from
only two images captured under near-infrared illumination placed at two
different angles, as in many current commercial iris recognition sensors. The
map of normal vectors is used to assess the convexity of the observed iris
surface. The combination of these two approaches has been applied to detect
whether a subject is wearing a textured contact lens to disguise their
identity. Extensive experiments with NDCLD'15 dataset, and a newly collected
NDIris3D dataset show that the proposed method is highly robust under various
open-set testing scenarios, and that it outperforms all available open-source
iris PAD methods tested in identical scenarios. The source code and the newly
prepared benchmark are made available along with this paper.
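The 3D branch described above estimates per-pixel surface normals from two near-infrared images and then judges whether the observed surface is convex (a live eyeball) or not (e.g., a flat printout). A minimal sketch of that idea follows, assuming a Lambertian reflectance model and two known light directions; the function names, the minimum-norm pseudoinverse solution for the underdetermined two-light system, and the centroid-based convexity score are all illustrative simplifications, not the paper's actual implementation.

```python
import numpy as np

def normals_two_light_ps(img_a, img_b, light_a, light_b):
    """Estimate per-pixel surface normals from two images captured under
    two known illumination directions (Lambertian model: I = rho * n.L).

    With only two images the 3-unknown per-pixel system is underdetermined,
    so we take the minimum-norm least-squares solution via the pseudoinverse
    of the 2x3 light matrix, then normalize to unit length.
    """
    L = np.stack([light_a, light_b])              # (2, 3) light matrix
    I = np.stack([img_a.ravel(), img_b.ravel()])  # (2, P) pixel intensities
    G = np.linalg.pinv(L) @ I                     # (3, P) scaled normals
    norms = np.linalg.norm(G, axis=0)
    n = G / np.maximum(norms, 1e-8)               # unit normal per pixel
    return n.T.reshape(*img_a.shape, 3)

def convexity_score(normals, mask):
    """Toy convexity cue: fraction of masked normals whose in-plane (x, y)
    components point away from the region centroid. A convex dome's normals
    tilt outward from the center; a flat surface's normals do not."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    outward = (normals[ys, xs, 0] * (xs - cx) +
               normals[ys, xs, 1] * (ys - cy)) > 0
    return outward.mean()
```

On a synthetic Lambertian hemisphere this score approaches 1, while a uniformly lit flat patch scores near 0, which is the qualitative separation the paper's normal-map analysis relies on.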
Related papers
- RAD: A Comprehensive Dataset for Benchmarking the Robustness of Image Anomaly Detection [4.231702796492545]
This study introduces a Robust Anomaly Detection dataset with free views, uneven illuminations, and blurry collections.
RAD aims to identify foreign objects on working platforms as anomalies.
We assess and analyze 11 state-of-the-art unsupervised and zero-shot methods on RAD.
arXiv Detail & Related papers (2024-06-11T11:39:44Z)
- OV-Uni3DETR: Towards Unified Open-Vocabulary 3D Object Detection via Cycle-Modality Propagation [67.56268991234371]
OV-Uni3DETR achieves the state-of-the-art performance on various scenarios, surpassing existing methods by more than 6% on average.
Code and pre-trained models will be released later.
arXiv Detail & Related papers (2024-03-28T17:05:04Z)
- Towards Unified 3D Object Detection via Algorithm and Data Unification [70.27631528933482]
We build the first unified multi-modal 3D object detection benchmark MM-Omni3D and extend the aforementioned monocular detector to its multi-modal version.
We name the designed monocular and multi-modal detectors as UniMODE and MM-UniMODE, respectively.
arXiv Detail & Related papers (2024-02-28T18:59:31Z)
- Geometric-aware Pretraining for Vision-centric 3D Object Detection [77.7979088689944]
We propose a novel geometric-aware pretraining framework called GAPretrain.
GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors.
We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively.
arXiv Detail & Related papers (2023-04-06T14:33:05Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection [65.30079184700755]
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space either by iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Viability of Optical Coherence Tomography for Iris Presentation Attack Detection [13.367903535457364]
OCT imaging provides a cross-sectional view of an eye, whereas traditional imaging provides 2D iris textural information.
We observe promising results demonstrating OCT as a viable solution for iris presentation attack detection.
arXiv Detail & Related papers (2020-10-22T18:00:51Z)
- Cross-Modality 3D Object Detection [63.29935886648709]
We present a novel two-stage multi-modal fusion network for 3D object detection.
The whole architecture facilitates two-stage fusion.
Our experiments on the KITTI dataset show that the proposed multi-stage fusion helps the network to learn better representations.
arXiv Detail & Related papers (2020-08-16T11:01:20Z)
- Single-Shot 3D Detection of Vehicles from Monocular RGB Images via Geometry Constrained Keypoints in Real-Time [6.82446891805815]
We propose a novel 3D single-shot object detection method for detecting vehicles in monocular RGB images.
Our approach lifts 2D detections to 3D space by predicting additional regression and classification parameters.
We test our approach on different datasets for autonomous driving and evaluate it using the challenging KITTI 3D Object Detection and the novel nuScenes Object Detection benchmarks.
arXiv Detail & Related papers (2020-06-23T15:10:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.