Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone
Periocular Recognition
- URL: http://arxiv.org/abs/2311.01237v1
- Date: Thu, 2 Nov 2023 13:43:44 GMT
- Authors: Fernando Alonso-Fernandez, Kiran B. Raja, Christoph Busch, Josef Bigun
- Abstract summary: We employ fusion of several comparators to improve periocular performance when images from different smartphones are compared.
We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios.
Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of cameras and personal devices results in a wide
variability of imaging conditions, producing large intra-class variations and a
significant performance drop when images from heterogeneous environments are
compared. However, many applications must regularly deal with data from
different sources, and thus need to overcome these interoperability problems.
Here, we employ fusion of several comparators to improve periocular performance
when images from different smartphones are compared. We use a probabilistic
fusion framework based on linear logistic regression, in which fused scores
tend to be log-likelihood ratios; this fusion reduces the cross-sensor EER by
up to 40%. Our framework also provides an elegant and simple
solution to handle signals from different devices, since same-sensor and
cross-sensor score distributions are aligned and mapped to a common
probabilistic domain. This allows the use of Bayes thresholds for optimal
decision-making, eliminating the need for sensor-specific thresholds, which is
essential in operational conditions because the threshold setting critically
determines the accuracy of the authentication process in many applications.
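To make the framework concrete, the sketch below illustrates the general recipe the abstract describes: scores from several comparators are fused by linear logistic regression so that the fused score behaves like a log-likelihood ratio (LLR), and a single Bayes threshold, computed from priors and decision costs, replaces sensor-specific thresholds. This is a minimal illustration, not the authors' code; the synthetic scores, function names, and parameter choices are all assumptions.

```python
# Minimal sketch (not the paper's code): linear logistic regression fusion
# of biometric comparator scores, with a Bayes threshold for the decision.
# All data and names here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for K=2 comparators: genuine trials score higher on
# average than impostor trials.
n = 1000
genuine = np.column_stack([rng.normal(2.0, 1.0, n), rng.normal(1.5, 1.2, n)])
impostor = np.column_stack([rng.normal(0.0, 1.0, n), rng.normal(0.0, 1.2, n)])
scores = np.vstack([genuine, impostor])              # shape (2n, K)
labels = np.concatenate([np.ones(n), np.zeros(n)])   # 1 = genuine, 0 = impostor

# Linear logistic regression learns a bias a0 and weights a_k so that the
# fused score f(s) = a0 + sum_k a_k * s_k approximates the posterior
# log-odds; removing the training-prior log-odds leaves an LLR-like score.
clf = LogisticRegression()
clf.fit(scores, labels)

def fused_llr(s, train_prior_genuine=0.5):
    """Fuse raw comparator scores into a calibrated LLR-like score."""
    posterior_log_odds = clf.decision_function(np.atleast_2d(s))
    prior_log_odds = np.log(train_prior_genuine / (1.0 - train_prior_genuine))
    return posterior_log_odds - prior_log_odds  # adjustment is zero here (balanced set)

def bayes_threshold(c_fa=1.0, c_fr=1.0, pi_genuine=0.5):
    """Accept when C_fa * P(imp|s) < C_fr * P(gen|s), i.e. when
    LLR > log((C_fa * pi_imp) / (C_fr * pi_gen))."""
    return np.log((c_fa * (1.0 - pi_genuine)) / (c_fr * pi_genuine))

llr = fused_llr([1.8, 1.2])[0]
print(f"fused LLR: {llr:.3f}  accept: {llr > bayes_threshold()}")
```

Because the fused scores live in a common log-likelihood-ratio domain, the same Bayes threshold applies no matter which sensor pair produced the raw scores, which is exactly the property that removes the need for sensor-specific thresholds.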
Related papers
- DemosaicFormer: Coarse-to-Fine Demosaicing Network for HybridEVS Camera
Hybrid Event-Based Vision Sensor (HybridEVS) is a novel sensor integrating traditional frame-based and event-based sensors.
Despite its potential, the lack of an image signal processing (ISP) pipeline specifically designed for HybridEVS poses a significant challenge.
We propose a coarse-to-fine framework named DemosaicFormer which comprises coarse demosaicing and pixel correction.
arXiv Detail & Related papers (2024-06-12T07:20:46Z)
- Sensitivity-Informed Augmentation for Robust Segmentation
Internal noises such as variations in camera quality or lens distortion can affect the performance of segmentation models.
We present an efficient, adaptable, and gradient-free method to enhance the robustness of learning-based segmentation models across training.
arXiv Detail & Related papers (2024-06-03T15:25:45Z)
- Efficient Multi-Resolution Fusion for Remote Sensing Data with Label Uncertainty
This paper presents a new method for fusing multi-modal and multi-resolution remote sensor data without requiring pixel-level training labels.
We propose a new method based on binary fuzzy measures, which reduces the search space and significantly improves the efficiency of the MIMRF framework.
arXiv Detail & Related papers (2024-02-07T17:34:32Z)
- LCPR: A Multi-Scale Attention-Based LiDAR-Camera Fusion Network for Place Recognition
We present a novel neural network named LCPR for robust multimodal place recognition.
Our method can effectively utilize multi-view camera and LiDAR data to improve place recognition performance.
arXiv Detail & Related papers (2023-11-06T15:39:48Z)
- Breaking Modality Disparity: Harmonized Representation for Infrared and Visible Image Registration
We propose a scene-adaptive infrared and visible image registration method.
We employ homography to simulate the deformation between different planes.
We propose the first misaligned infrared and visible image dataset with available ground truth.
arXiv Detail & Related papers (2023-04-12T06:49:56Z)
- Quality-Based Conditional Processing in Multi-Biometrics: Application to Sensor Interoperability
We describe and evaluate the ATVS-UAM fusion approach submitted to the quality-based evaluation of the 2007 BioSecure Multimodal Evaluation Campaign.
Our approach is based on linear logistic regression, in which fused scores tend to be log-likelihood ratios.
Results show that the proposed approach outperforms all the rule-based fusion schemes.
arXiv Detail & Related papers (2022-11-24T12:11:22Z)
- Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection
This study addresses the issue of fusing infrared and visible images that appear differently for object detection.
Previous approaches discover commonalities underlying the two modalities and fuse in the common space by either iterative optimization or deep networks.
This paper proposes a bilevel optimization formulation for the joint problem of fusion and detection, and then unrolls it to a target-aware Dual Adversarial Learning (TarDAL) network for fusion and a commonly used detection network.
arXiv Detail & Related papers (2022-03-30T11:44:56Z)
- Learning Selective Sensor Fusion for States Estimation
We propose SelectFusion, an end-to-end selective sensor fusion module.
During prediction, the network is able to assess the reliability of the latent features from different sensor modalities.
We extensively evaluate all fusion strategies on both public datasets and progressively degraded datasets.
arXiv Detail & Related papers (2019-12-30T20:25:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.