Stating Comparison Score Uncertainty and Verification Decision
Confidence Towards Transparent Face Recognition
- URL: http://arxiv.org/abs/2210.10354v1
- Date: Wed, 19 Oct 2022 07:43:48 GMT
- Title: Stating Comparison Score Uncertainty and Verification Decision
Confidence Towards Transparent Face Recognition
- Authors: Marco Huber, Philipp Terhörst, Florian Kirchbuchner, Naser Damer,
Arjan Kuijper
- Abstract summary: We propose an approach to estimate the uncertainty of face comparison scores.
Second, we introduce a confidence measure of the system's decision to provide insights into the verification decision.
The suitability of the comparison score uncertainties and the verification decision confidences has been experimentally demonstrated on three face recognition models.
- Score: 13.555831336280407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face Recognition (FR) is increasingly used in critical verification decisions
and thus, there is a need for assessing the trustworthiness of such decisions.
The confidence of a decision is often based on the overall performance of the
model or on the image quality. We propose to propagate model uncertainties to
scores and decisions in an effort to increase the transparency of verification
decisions. This work presents two contributions. First, we propose an approach
to estimate the uncertainty of face comparison scores. Second, we introduce a
confidence measure of the system's decision to provide insights into the
verification decision. The suitability of the comparison scores uncertainties
and the verification decision confidences have been experimentally proven on
three face recognition models on two datasets.
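The idea of propagating model uncertainty to the comparison score, and from there to a decision confidence, can be sketched as follows. This is an illustrative example under assumed conditions, not the authors' exact method: it assumes stochastic face embeddings are available (e.g. from multiple MC-dropout forward passes), scores mated pairs with cosine similarity, and reports the spread of the sampled scores plus the fraction of samples agreeing with the majority accept/reject decision.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def score_uncertainty(samples_a, samples_b):
    """Mean and std of comparison scores over stochastic embedding pairs."""
    scores = np.array([cosine(a, b) for a, b in zip(samples_a, samples_b)])
    return scores.mean(), scores.std(), scores

def decision_confidence(scores, threshold):
    """Fraction of sampled scores that agree with the majority decision."""
    accepts = scores >= threshold
    frac = accepts.mean()
    return max(frac, 1.0 - frac)

# Toy data: 32 stochastic 128-d embeddings per face of a mated pair,
# simulated as a shared identity vector plus small per-sample noise.
base = rng.normal(size=128)
samples_a = base + 0.05 * rng.normal(size=(32, 128))
samples_b = base + 0.05 * rng.normal(size=(32, 128))

mean, std, scores = score_uncertainty(samples_a, samples_b)
conf = decision_confidence(scores, threshold=0.5)
```

A tight score distribution (small `std`) and a confidence near 1.0 indicate that the accept/reject decision is stable under the model's uncertainty; scores straddling the threshold would instead flag the decision as untrustworthy.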
Related papers
- Calibration Is Not Enough: Evaluating Confidence Estimation Under Language Variations [49.84786015324238]
Confidence estimation (CE) indicates how reliable the answers of large language models (LLMs) are, and can impact user trust and decision-making. We present a comprehensive evaluation framework for CE that measures confidence quality on three new aspects: robustness of confidence against prompt perturbations, stability across semantically equivalent answers, and sensitivity to semantically different answers.
arXiv Detail & Related papers (2026-01-12T23:16:50Z) - Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework.
We adopt Bayes risk as the measurement of uncertainty and use it to separately estimate data and model uncertainty.
arXiv Detail & Related papers (2024-12-24T23:28:20Z) - Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z) - (Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers [0.0]
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains.
We quantify the uncertainty of the disparity to enhance discrimination assessments.
We define preferences over decision-makers and use a brute-force search to choose the optimal decision-maker.
arXiv Detail & Related papers (2024-09-19T11:44:03Z) - Explaining by Imitating: Understanding Decisions by Interpretable Policy
Learning [72.80902932543474]
Understanding human behavior from observed data is critical for transparency and accountability in decision-making.
Consider real-world settings such as healthcare, in which modeling a decision-maker's policy is challenging.
We propose a data-driven representation of decision-making behavior that inheres transparency by design, accommodates partial observability, and operates completely offline.
arXiv Detail & Related papers (2023-10-28T13:06:14Z) - Human-Aligned Calibration for AI-Assisted Decision Making [19.767213234234855]
We show that, if the confidence values satisfy a natural alignment property with respect to the decision maker's confidence on her own predictions, there always exists an optimal decision policy.
We show that multicalibration with respect to the decision maker's confidence on her own predictions is a sufficient condition for alignment.
arXiv Detail & Related papers (2023-05-31T18:00:14Z) - Measuring Classification Decision Certainty and Doubt [61.13511467941388]
We propose intuitive scores, which we call certainty and doubt, to assess and compare the quality and uncertainty of predictions in (multi-)classification decision machine learning problems.
arXiv Detail & Related papers (2023-03-25T21:31:41Z) - Confidence-Calibrated Face and Kinship Verification [8.570969129199467]
We introduce an effective confidence measure that allows verification models to convert a similarity score into a confidence score for any given face pair.
We also propose a confidence-calibrated approach, termed Angular Scaling (ASC), which is easy to implement and can be readily applied to existing verification models.
To the best of our knowledge, our work presents the first comprehensive confidence-calibrated solution for modern face and kinship verification tasks.
arXiv Detail & Related papers (2022-10-25T10:43:46Z) - Stability of Weighted Majority Voting under Estimated Weights [16.804588631149393]
A (machine learning) algorithm that computes trust is called unbiased when it does not systematically overestimate or underestimate the trustworthiness.
We introduce and analyse two important properties of such unbiased trust values: stability of correctness and stability of optimality.
arXiv Detail & Related papers (2022-07-13T10:55:41Z) - Learning Pareto-Efficient Decisions with Confidence [21.915057426589748]
The paper considers the problem of multi-objective decision support when outcomes are uncertain.
This enables quantifying trade-offs between decisions in terms of tail outcomes that are relevant in safety-critical applications.
arXiv Detail & Related papers (2021-10-19T11:32:17Z) - Uncertainty as a Form of Transparency: Measuring, Communicating, and
Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z) - Inducing Predictive Uncertainty Estimation for Face Recognition [102.58180557181643]
We propose a method for generating image quality training data automatically from 'mated pairs' of face images.
We use the generated data to train a lightweight Predictive Confidence Network, termed as PCNet, for estimating the confidence score of a face image.
arXiv Detail & Related papers (2020-09-01T17:52:00Z) - Inverse Active Sensing: Modeling and Understanding Timely
Decision-Making [111.07204912245841]
We develop a framework for the general setting of evidence-based decision-making under endogenous, context-dependent time pressure.
We demonstrate how it enables modeling intuitive notions of surprise, suspense, and optimality in decision strategies.
arXiv Detail & Related papers (2020-06-25T02:30:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.