Contestable Black Boxes
- URL: http://arxiv.org/abs/2006.05133v2
- Date: Tue, 30 Jun 2020 14:49:12 GMT
- Title: Contestable Black Boxes
- Authors: Andrea Aler Tubella, Andreas Theodorou, Virginia Dignum, Loizos
Michael
- Abstract summary: This paper investigates the type of assurances that are needed in the contesting process when algorithmic black-boxes are involved.
We argue that specialised, complementary methodologies need to be developed to evaluate automated decision-making when a particular decision is contested.
- Score: 10.552465253379134
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The right to contest a decision with consequences on individuals or the
society is a well-established democratic right. Despite this right also being
explicitly included in the GDPR in reference to automated decision-making, its
study seems to have received much less attention in the AI literature than,
for example, the right to explanation. This paper investigates the type of
assurances that are needed in the contesting process when algorithmic
black-boxes are involved, opening new questions about the interplay of
contestability and explainability. We argue that specialised, complementary
methodologies need to be developed to evaluate automated decision-making when a
particular decision is contested. Further, we propose a
combination of well-established software engineering and rule-based approaches
as a possible socio-technical solution to the issue of contestability, one of
the new democratic challenges posed by the automation of decision making.
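The proposed combination of software engineering and rule-based approaches can be illustrated with a minimal sketch. This is not the paper's implementation; all names (`DecisionRecord`, `decide`, `contest`) and the toy rule set are illustrative assumptions. The idea shown: make each automated decision traceable to explicit rules, and support contestation by replaying the logged decision.

```python
# Hypothetical sketch (not from the paper): a rule-based decision layer
# that records the evidence needed to contest an individual outcome.
from dataclasses import dataclass, field


@dataclass
class DecisionRecord:
    """Audit-trail entry supporting later contestation of one decision."""
    inputs: dict
    outcome: str
    fired_rules: list = field(default_factory=list)


def decide(inputs, rules):
    """Apply explicit rules to an input case.

    `rules` is a list of (name, predicate, outcome) triples; the first
    matching rule determines the outcome and is logged, so the decision
    can later be audited rule by rule. Unmatched cases stay "deferred"
    (e.g. for human review or an opaque model).
    """
    record = DecisionRecord(inputs=dict(inputs), outcome="deferred")
    for name, predicate, outcome in rules:
        if predicate(inputs):
            record.fired_rules.append(name)
            record.outcome = outcome
            break
    return record


def contest(record, rules):
    """Re-evaluate a contested decision: report which rules fired and
    whether re-running the same rules reproduces the logged outcome."""
    replay = decide(record.inputs, rules)
    return {
        "reproducible": replay.outcome == record.outcome,
        "fired_rules": record.fired_rules,
    }


# Usage: a toy eligibility rule set (illustrative thresholds only).
rules = [
    ("min_income", lambda x: x["income"] < 20_000, "reject"),
    ("low_risk", lambda x: x["risk_score"] < 0.3, "accept"),
]
rec = decide({"income": 15_000, "risk_score": 0.1}, rules)
report = contest(rec, rules)
```

The point of the sketch is that contestation needs more than an explanation: it needs a reproducible record of which rule produced the outcome, so an assessor can check the decision against the rules in force at the time.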
Related papers
- ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges for decision-makers and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z)
- Bridging the gap: Towards an Expanded Toolkit for AI-driven Decision-Making in the Public Sector [6.693502127460251]
AI-driven decision-making systems are becoming instrumental in the public sector, with applications spanning areas like criminal justice, social welfare, financial fraud detection, and public health.
These systems face the challenge of aligning machine learning (ML) models with the complex realities of public sector decision-making.
We examine five key challenges where misalignment can occur: distribution shifts, label bias, and the influence of past decision-making on the data side, as well as competing objectives and human-in-the-loop feedback on the model output side.
arXiv Detail & Related papers (2023-10-29T17:44:48Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- The Conflict Between Explainable and Accountable Decision-Making Algorithms [10.64167691614925]
Decision-making algorithms are being used in important decisions, such as who should be enrolled in health care programs and be hired.
The explainable AI (XAI) initiative aims to make algorithms explainable in order to comply with legal requirements, promote trust, and maintain accountability.
This paper questions whether and to what extent explainability can help solve the responsibility issues posed by autonomous AI systems.
arXiv Detail & Related papers (2022-05-11T07:19:28Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- AI & Racial Equity: Understanding Sentiment Analysis Artificial Intelligence, Data Security, and Systemic Theory in Criminal Justice Systems [0.0]
Various forms of implications of artificial intelligence that either exacerbate or decrease racial systemic injustice have been explored.
It has been asserted, through analysis of historical systemic patterns, implicit biases, existing algorithmic risks, and legal implications, that AI based on natural language processing, such as risk assessment tools, has racially disparate outcomes.
It is concluded that more litigative policies are needed to regulate and restrict how government institutions and corporations use algorithms, to address privacy and security risks, and to set auditing requirements, in order to break from the racially unjust outcomes and practices of the past.
arXiv Detail & Related papers (2022-01-03T19:42:08Z)
- Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate the experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
- Conceptualising Contestability: Perspectives on Contesting Algorithmic Decisions [18.155121103400333]
We describe and analyse the perspectives of people and organisations who made submissions in response to Australia's proposed 'AI Ethics Framework'.
Our findings reveal that while the nature of contestability is disputed, it is seen as a way to protect individuals, and it resembles contestability in relation to human decision-making.
arXiv Detail & Related papers (2021-02-23T05:13:18Z)
- "A cold, technical decision-maker": Can AI provide explainability, negotiability, and humanity? [47.36687555570123]
We present results of a qualitative study of algorithmic decision-making, comprising five workshops conducted with a total of 60 participants.
We discuss participants' consideration of humanity in decision-making, and introduce the concept of 'negotiability,' the ability to go beyond formal criteria and work flexibly around the system.
arXiv Detail & Related papers (2020-12-01T22:36:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.