A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis
- URL: http://arxiv.org/abs/2409.11224v1
- Date: Tue, 17 Sep 2024 14:18:21 GMT
- Title: A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis
- Authors: Tetsushi Ohki, Narishige Abe, Hidetsugu Uchida, Shigefumi Yamada,
- Abstract summary: This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation.
Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases.
- Score: 0.6199770411242359
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biometric recognition systems, known for their convenience, are widely adopted across various fields. However, their security faces risks depending on the authentication algorithm and deployment environment. Current risk assessment methods face significant challenges in incorporating the crucial factor of an attacker's motivation, leading to incomplete evaluations. This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation. Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases. A survey of 600 Japanese participants demonstrates our method's effectiveness, showing how security measures influence an attacker's motivation. This approach helps decision-makers customize biometric systems to enhance security while maintaining usability.
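The abstract says risk values incorporate the False Acceptance Rate (FAR) and attack probability but does not give the exact formula. Below is a minimal illustrative sketch assuming a simple product model, where risk is the chance that an attack is attempted times the chance that the system falsely accepts it; the function name and numbers are hypothetical, not the authors' method.

```python
def risk_value(far: float, attack_probability: float) -> float:
    """Illustrative risk score under an assumed product model:
    probability an attack occurs times probability it is falsely accepted."""
    if not (0.0 <= far <= 1.0 and 0.0 <= attack_probability <= 1.0):
        raise ValueError("FAR and attack probability must lie in [0, 1]")
    return far * attack_probability

# Hypothetical comparison of two deployment environments: the paper argues
# that measures such as surveillance cameras lower attacker motivation,
# which here shows up as a lower attack probability.
baseline = risk_value(far=1e-4, attack_probability=0.30)      # no camera
with_camera = risk_value(far=1e-4, attack_probability=0.10)   # camera present
```

Under this assumed model, the same authentication algorithm (same FAR) yields a lower risk value in the monitored environment, which is the kind of cross-use-case comparison the framework aims to support.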
Related papers
- EAIRiskBench: Towards Evaluating Physical Risk Awareness for Task Planning of Foundation Model-based Embodied AI Agents [47.69642609574771]
Embodied artificial intelligence (EAI) integrates advanced AI models into physical entities for real-world interaction.
Foundation models as the "brain" of EAI agents for high-level task planning have shown promising results.
However, the deployment of these agents in physical environments presents significant safety challenges.
This study introduces EAIRiskBench, a novel framework for automated physical risk assessment in EAI scenarios.
arXiv Detail & Related papers (2024-08-08T13:19:37Z)
- Mapping LLM Security Landscapes: A Comprehensive Stakeholder Risk Assessment Proposal [0.0]
We propose a risk assessment process using tools like the risk rating methodology, which is used for traditional systems.
We conduct scenario analysis to identify potential threat agents and map the dependent system components against vulnerability factors.
We also map threats against three key stakeholder groups.
arXiv Detail & Related papers (2024-03-20T05:17:22Z)
- RiskBench: A Scenario-based Benchmark for Risk Identification [4.263035319815899]
This work focuses on risk identification, the process of identifying and analyzing risks stemming from dynamic traffic participants and unexpected events.
We introduce RiskBench, a large-scale scenario-based benchmark for risk identification.
We assess the ability of ten algorithms to (1) detect and locate risks, (2) anticipate risks, and (3) facilitate decision-making.
arXiv Detail & Related papers (2023-12-04T06:21:22Z)
- ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models [65.79770974145983]
ASSERT, Automated Safety Scenario Red Teaming, consists of three methods -- semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection.
We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance.
We find statistically significant performance differences of up to 11% in absolute classification accuracy among semantically related scenarios and error rates of up to 19% absolute error in zero-shot adversarial settings.
arXiv Detail & Related papers (2023-10-14T17:10:28Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, with an absolute improvement of 5.9% in safety classification accuracy.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
- A Survey of Risk-Aware Multi-Armed Bandits [84.67376599822569]
We review various risk measures of interest, and comment on their properties.
We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests.
We conclude by commenting on persisting challenges and fertile areas for future research.
arXiv Detail & Related papers (2022-05-12T02:20:34Z)
- Reliability of Decision Support in Cross-spectral Biometric-enabled Systems [2.278720757613755]
This paper addresses the evaluation of the performance of the decision support system that utilizes face and facial expression biometrics.
The relevant applications include human behavior monitoring and stress detection in individuals and teams, and in situational awareness systems.
arXiv Detail & Related papers (2020-08-13T07:43:14Z)
- Assessing Risks of Biases in Cognitive Decision Support Systems [5.480546613836199]
This paper addresses a challenging research question: how to manage an ensemble of biases.
We provide performance projections of the cognitive Decision Support System operational landscape in terms of biases.
We also provide a motivational experiment using the face biometric component of a checkpoint system, which highlights the discovery of an ensemble of biases.
arXiv Detail & Related papers (2020-07-28T16:53:45Z)
- Watchlist Risk Assessment using Multiparametric Cost and Relative Entropy [0.0]
We propose multiparametric cost assessment and relative entropy measures as risk detectors.
We experimentally demonstrate the effects of mis-identification and impersonation under various watchlist screening scenarios and constraints.
arXiv Detail & Related papers (2020-07-22T10:27:53Z)
- SAMBA: Safe Model-Based & Active Reinforcement Learning [59.01424351231993]
SAMBA is a framework for safe reinforcement learning that combines aspects from probabilistic modelling, information theory, and statistics.
We evaluate our algorithm on a variety of safe dynamical system benchmarks involving both low and high-dimensional state representations.
We provide intuition as to the effectiveness of the framework by a detailed analysis of our active metrics and safety constraints.
arXiv Detail & Related papers (2020-06-12T10:40:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.