LitMAS: A Lightweight and Generalized Multi-Modal Anti-Spoofing Framework for Biometric Security
- URL: http://arxiv.org/abs/2506.06759v1
- Date: Sat, 07 Jun 2025 11:04:08 GMT
- Title: LitMAS: A Lightweight and Generalized Multi-Modal Anti-Spoofing Framework for Biometric Security
- Authors: Nidheesh Gorthi, Kartik Thakral, Rishabh Ranjan, Richa Singh, Mayank Vatsa
- Abstract summary: We propose LitMAS, a framework to detect spoofing attacks in speech, face, iris, and fingerprint-based biometric systems. At the core of LitMAS is a Modality-Aligned Concentration Loss, which enhances inter-class separability. With just 6M parameters, LitMAS surpasses state-of-the-art methods by $1.36\%$ in average EER across seven datasets.
- Score: 45.216049137040336
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Biometric authentication systems are increasingly being deployed in critical applications, but they remain susceptible to spoofing. Since most research efforts focus on modality-specific anti-spoofing techniques, building a unified, resource-efficient solution across multiple biometric modalities remains a challenge. To address this, we propose LitMAS, a $\textbf{Li}$gh$\textbf{t}$weight and generalizable $\textbf{M}$ulti-modal $\textbf{A}$nti-$\textbf{S}$poofing framework designed to detect spoofing attacks in speech, face, iris, and fingerprint-based biometric systems. At the core of LitMAS is a Modality-Aligned Concentration Loss, which enhances inter-class separability while preserving cross-modal consistency and enabling robust spoof detection across diverse biometric traits. With just 6M parameters, LitMAS surpasses state-of-the-art methods by $1.36\%$ in average EER across seven datasets, demonstrating high efficiency, strong generalizability, and suitability for edge deployment. Code and trained models are available at https://github.com/IAB-IITJ/LitMAS.
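The abstract names the Modality-Aligned Concentration Loss but does not spell out its formula. As a hedged illustration only, a concentration-style loss might pull each embedding toward a live/spoof class center shared across all four modalities (encouraging cross-modal consistency) while pushing the two centers apart (inter-class separability). The PyTorch sketch below follows those assumptions; the function name, the `margin` hyperparameter, and the exact pull/push terms are hypothetical, and the true LitMAS formulation should be taken from the linked repository.

```python
import torch
import torch.nn.functional as F

def concentration_loss(embeddings, labels, centers, margin=1.0):
    # embeddings: (B, d) features from any modality branch
    # labels:     (B,) long tensor with 0 = live, 1 = spoof
    # centers:    (2, d) class centers shared across modalities
    emb = F.normalize(embeddings, dim=1)
    ctr = F.normalize(centers, dim=1)
    # Pull term: concentrate each sample around its own class center.
    pull = ((emb - ctr[labels]) ** 2).sum(dim=1).mean()
    # Push term: keep the live and spoof centers at least `margin` apart.
    push = F.relu(margin - torch.norm(ctr[0] - ctr[1]))
    return pull + push
```

In such a setup, `centers` could be an `nn.Parameter(torch.randn(2, d))` optimized jointly with the backbone; sharing one pair of centers across the speech, face, iris, and fingerprint branches is what would (under these assumptions) align the modalities.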
Related papers
- Are You Getting What You Pay For? Auditing Model Substitution in LLM APIs [60.881609323604685]
Large Language Models (LLMs) accessed via black-box APIs introduce a trust challenge. Users pay for services based on advertised model capabilities, yet providers may covertly substitute the specified model with a cheaper, lower-quality alternative to reduce operational costs. This lack of transparency undermines fairness, erodes trust, and complicates reliable benchmarking.
arXiv Detail & Related papers (2025-04-07T03:57:41Z) - $\textit{Agents Under Siege}$: Breaking Pragmatic Multi-Agent LLM Systems with Optimized Prompt Attacks [32.42704787246349]
Multi-agent Large Language Model (LLM) systems create novel adversarial risks because their behavior depends on communication between agents and decentralized reasoning. In this work, we focus on attacking pragmatic systems that have constraints such as limited token bandwidth, latency between message delivery, and defense mechanisms. We design a $\textit{permutation-invariant adversarial attack}$ that optimizes prompt distribution across latency- and bandwidth-constrained network topologies to bypass distributed safety mechanisms.
arXiv Detail & Related papers (2025-03-31T20:43:56Z) - AMB-FHE: Adaptive Multi-biometric Fusion with Fully Homomorphic Encryption [3.092212810857262]
We present an adaptive multi-biometric fusion method with fully homomorphic encryption (AMB-FHE). AMB-FHE is benchmarked on a bimodal biometric database consisting of the CASIA iris and MCYT fingerprint datasets.
arXiv Detail & Related papers (2025-03-31T11:00:08Z) - MOSAIC: Multiple Observers Spotting AI Content, a Robust Approach to Machine-Generated Text Detection [35.67613230687864]
Large Language Models (LLMs) are trained at scale and endowed with powerful text-generating abilities. Various proposals have been made to automatically discriminate artificially generated from human-written texts. We derive a new, theoretically grounded approach to combine their respective strengths. Our experiments, using a variety of generator LLMs, suggest that our method effectively leads to robust detection performance.
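The summary does not state the combination rule, so the snippet below is only a naive baseline sketch of the multi-observer idea: average perplexity-style scores from several scorer LLMs into one detection score. MOSAIC's actual combination is theoretically derived and presumably weights observers rather than averaging them uniformly; the function name and inputs here are hypothetical.

```python
def multi_observer_score(token_logprobs_per_model):
    """Naive fusion of per-token log-probabilities from several scorer
    LLMs: mean negative log-likelihood per model, averaged across
    observers. A stand-in illustration, not MOSAIC's derived rule."""
    per_model_nll = [-sum(lps) / len(lps) for lps in token_logprobs_per_model]
    return sum(per_model_nll) / len(per_model_nll)
```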
arXiv Detail & Related papers (2024-09-11T20:55:12Z) - Impact of Financial Literacy on Investment Decisions and Stock Market Participation using Extreme Learning Machines [0.0]
This study aims to investigate how financial literacy influences financial decision-making and stock market participation.
Our research is qualitative, utilizing data collected from social media platforms to analyze real-time investor behavior and attitudes.
The findings indicate that financial literacy plays a critical role in stock market participation and financial decision-making.
arXiv Detail & Related papers (2024-07-03T20:55:15Z) - Hyperbolic Face Anti-Spoofing [21.981129022417306]
We propose to learn richer hierarchical and discriminative spoofing cues in hyperbolic space.
For unimodal FAS learning, the feature embeddings are projected into the Poincaré ball, and a hyperbolic binary logistic regression layer is then cascaded for classification.
To alleviate the vanishing gradient problem in hyperbolic space, a new feature clipping method is proposed to enhance the training stability of hyperbolic models.
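As an illustrative sketch (not the paper's exact code), the two ingredients in this summary can be combined as follows: clip the Euclidean feature norm for training stability, then apply the exponential map at the origin to project into the Poincaré ball, keeping points away from the boundary where gradients vanish. The clipping radius `clip_r` is an assumed hyperparameter.

```python
import torch

def clip_and_project(x, c=1.0, clip_r=2.3):
    # Feature clipping: cap the Euclidean norm at clip_r so projected
    # points stay away from the ball boundary (vanishing-gradient zone).
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    x = x * torch.clamp(clip_r / norm, max=1.0)
    # Exponential map at the origin of the Poincare ball (curvature -c):
    # exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||)
    norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    sqrt_c = c ** 0.5
    return torch.tanh(sqrt_c * norm) * x / (sqrt_c * norm)
```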
arXiv Detail & Related papers (2023-08-17T17:18:21Z) - Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms [58.156733807470395]
This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign.
The campaign targeted the application of physical access control in a medium-size establishment with some 500 persons.
To the best of our knowledge, this is the first attempt to benchmark quality-based multimodal fusion algorithms.
arXiv Detail & Related papers (2021-11-17T13:39:48Z) - Asymmetric Modality Translation For Face Presentation Attack Detection [55.09300842243827]
Face presentation attack detection (PAD) is an essential measure to protect face recognition systems from being spoofed by malicious users.
We propose a novel framework based on asymmetric modality translation for PAD in bi-modality scenarios.
Our method achieves state-of-the-art performance under different evaluation protocols.
arXiv Detail & Related papers (2021-10-18T08:59:09Z) - Aurora Guard: Reliable Face Anti-Spoofing via Mobile Lighting System [103.5604680001633]
Anti-spoofing against high-resolution rendering replay of paper photos or digital videos remains an open problem.
We propose a simple yet effective face anti-spoofing system, termed Aurora Guard (AG).
arXiv Detail & Related papers (2021-02-01T09:17:18Z) - Deep Hashing for Secure Multimodal Biometrics [1.7188280334580195]
We present a framework for feature-level fusion that generates a secure multimodal template from each user's face and iris biometrics.
We employ a hybrid secure architecture by combining cancelable biometrics with secure sketch techniques.
The proposed approach also provides cancelability and unlinkability of the templates along with improved privacy of the biometric data.
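As a minimal, hypothetical sketch of the cancelable feature-level fusion idea (the actual framework uses learned deep hashing plus secure-sketch error correction, both omitted here): concatenate face and iris features, apply a user-specific random projection, and binarize. Revoking a compromised template then amounts to issuing a new projection matrix, which is what gives cancelability and unlinkability in this simplified picture.

```python
import numpy as np

def fuse_and_hash(face_feat, iris_feat, proj):
    # proj: user-specific random matrix (k, d_face + d_iris); reissuing
    # it yields a new, unlinkable binary template (cancelability).
    fused = np.concatenate([face_feat, iris_feat])
    return (proj @ fused > 0).astype(np.uint8)

# Example: 64-bit template from 128-d face and 64-d iris features.
rng = np.random.default_rng(seed=7)
template = fuse_and_hash(rng.standard_normal(128),
                         rng.standard_normal(64),
                         rng.standard_normal((64, 192)))
```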
arXiv Detail & Related papers (2020-12-29T14:15:05Z) - MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network, termed $\textit{MixNet}$, to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z) - Face Anti-Spoofing with Human Material Perception [76.4844593082362]
Face anti-spoofing (FAS) plays a vital role in securing the face recognition systems from presentation attacks.
We rephrase face anti-spoofing as a material recognition problem and combine it with classical human material perception.
We propose the Bilateral Convolutional Networks (BCN), which is able to capture intrinsic material-based patterns.
arXiv Detail & Related papers (2020-07-04T18:25:53Z) - Are L2 adversarial examples intrinsically different? [14.77179227968466]
We unravel the properties that can intrinsically differentiate adversarial examples and normal inputs through theoretical analysis.
We achieve a recovered classification accuracy of up to 99% on MNIST, 89% on CIFAR, and 87% on ImageNet subsets against $L_2$ attacks.
arXiv Detail & Related papers (2020-02-28T03:42:52Z)