Assessing Risks of Biases in Cognitive Decision Support Systems
- URL: http://arxiv.org/abs/2007.14361v1
- Date: Tue, 28 Jul 2020 16:53:45 GMT
- Title: Assessing Risks of Biases in Cognitive Decision Support Systems
- Authors: Kenneth Lai, Helder C. R. Oliveira, Ming Hou, Svetlana N.
Yanushkevich, and Vlad Shmerko
- Abstract summary: This paper addresses a challenging research question: how can an ensemble of biases be managed?
We provide performance projections of the cognitive Decision Support System operational landscape in terms of biases.
We also provide a motivational experiment using the face biometric component of the checkpoint system, which highlights the discovery of an ensemble of biases.
- Score: 5.480546613836199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing, assessing, countering, and mitigating the biases of different
nature from heterogeneous sources is a critical problem in designing a
cognitive Decision Support System (DSS). An example of such a system is a
cognitive biometric-enabled security checkpoint. Biased algorithms affect the
decision-making process in an unpredictable way, e.g. face recognition for
different demographic groups may severely impact the risk assessment at a
checkpoint. This paper addresses a challenging research question: how can an
ensemble of biases be managed? We provide performance projections of the DSS
operational landscape in terms of biases. A probabilistic reasoning technique
is used to assess the risk of such biases. We also provide a motivational
experiment using the face biometric component of the checkpoint system, which
highlights the discovery of an ensemble of biases and the techniques to
assess their risks.
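As a minimal, hypothetical illustration of the kind of probabilistic risk assessment described above (the bias names, probabilities, and the independence assumption are ours, not the paper's):

```python
# Hypothetical sketch: assessing the combined risk of an ensemble of
# biases via simple probabilistic reasoning. The bias names, prior
# probabilities, and impact probabilities below are illustrative
# assumptions, not values from the paper.

def ensemble_bias_risk(biases):
    """Probability that at least one bias corrupts a decision,
    assuming the biases act independently."""
    p_clean = 1.0
    for name, p_occur, p_impact in biases:
        # p_occur: chance the bias is present; p_impact: chance it
        # actually changes the decision when present.
        p_clean *= 1.0 - p_occur * p_impact
    return 1.0 - p_clean

# Illustrative ensemble for a face-biometric checkpoint component.
biases = [
    ("demographic", 0.10, 0.50),
    ("image-quality", 0.20, 0.30),
    ("algorithmic", 0.05, 0.80),
]
print(round(ensemble_bias_risk(biases), 4))
```

The independence assumption keeps the sketch short; a real DSS would model dependencies between biases, e.g. with a Bayesian network.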
Related papers
- A Human-Centered Risk Evaluation of Biometric Systems Using Conjoint Analysis [0.6199770411242359]
This paper presents a novel human-centered risk evaluation framework using conjoint analysis to quantify the impact of risk factors, such as surveillance cameras, on an attacker's motivation.
Our framework calculates risk values incorporating the False Acceptance Rate (FAR) and attack probability, allowing comprehensive comparisons across use cases.
arXiv Detail & Related papers (2024-09-17T14:18:21Z) - ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z) - Auditing Fairness under Unobserved Confounding [56.61738581796362]
We show that we can still give meaningful bounds on treatment rates to high-risk individuals, even when entirely eliminating or relaxing the assumption that all relevant risk factors are observed.
This result is of immediate practical interest: we can audit unfair outcomes of existing decision-making systems in a principled manner.
arXiv Detail & Related papers (2024-03-18T21:09:06Z) - Information-Theoretic Bias Reduction via Causal View of Spurious
Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z) - Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z) - Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose an Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by the dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z) - Feedback Effects in Repeat-Use Criminal Risk Assessments [0.0]
We show that risk can propagate over sequential decisions in ways that are not captured by one-shot tests.
Risk assessment tools operate in a highly complex and path-dependent process, fraught with historical inequity.
arXiv Detail & Related papers (2020-11-28T06:40:05Z) - Reliability of Decision Support in Cross-spectral Biometric-enabled
Systems [2.278720757613755]
This paper addresses the evaluation of the performance of the decision support system that utilizes face and facial expression biometrics.
The relevant applications include human behavior monitoring and stress detection in individuals and teams, and situational awareness systems.
arXiv Detail & Related papers (2020-08-13T07:43:14Z) - Risk, Trust, and Bias: Causal Regulators of Biometric-Enabled Decision
Support [6.32220198667533]
Risk, trust, and bias (R-T-B) are emerging measures of the performance of biometric-enabled decision support systems.
This paper offers a complete taxonomy of the R-T-B causal performance regulators for the biometric-enabled DSS.
The proposed novel taxonomy links the R-T-B assessment to the causal inference mechanism for reasoning in decision making.
arXiv Detail & Related papers (2020-08-05T20:49:13Z) - Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
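The human-centered risk evaluation entry above combines the False Acceptance Rate (FAR) with attack probability; a minimal sketch of one plausible combination rule (the multiplicative form, the `impact` parameter, and the numbers are our assumptions, not taken from that paper):

```python
# Hypothetical sketch: a risk value combining the False Acceptance Rate
# (FAR) with attack probability. The multiplicative combination rule and
# the illustrative numbers are assumptions.

def risk_value(far: float, p_attack: float, impact: float = 1.0) -> float:
    """Expected loss: probability an attack occurs, times the probability
    the biometric system falsely accepts the attacker, scaled by impact."""
    return p_attack * far * impact

# Example: rare attacks (5%), FAR of 0.1%, loss of 100 units on success.
print(risk_value(far=0.001, p_attack=0.05, impact=100.0))
```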
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.