Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector
- URL: http://arxiv.org/abs/2404.00022v1
- Date: Wed, 20 Mar 2024 14:20:42 GMT
- Title: Analysing and Organising Human Communications for AI Fairness-Related Decisions: Use Cases from the Public Sector
- Authors: Mirthe Dankloff, Vanja Skoric, Giovanni Sileno, Sennay Ghebreab, Jacco Van Ossenbruggen, Emma Beauxis-Aussalet
- Abstract summary: Communication issues between diverse stakeholders can lead to misinterpretation and misuse of AI algorithms.
We conduct interviews with practitioners working on algorithmic systems in the public sector.
We identify key elements of communication processes that underlie fairness-related human decisions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI algorithms used in the public sector, e.g., for allocating social benefits or predicting fraud, often involve multiple public and private stakeholders at various phases of the algorithm's life-cycle. Communication issues between these diverse stakeholders can lead to misinterpretation and misuse of algorithms. We investigate the communication processes for AI fairness-related decisions by conducting interviews with practitioners working on algorithmic systems in the public sector. By applying qualitative coding analysis, we identify key elements of communication processes that underlie fairness-related human decisions. We analyze the division of roles, tasks, skills, and challenges perceived by stakeholders. We formalize the underlying communication issues within a conceptual framework that (i) represents the communication patterns, and (ii) outlines missing elements, such as actors who lack the skills for their tasks. The framework is used for describing and analyzing key organizational issues for fairness-related decisions. Three general patterns emerge from the analysis: (1) Policy-makers, civil servants, and domain experts are less involved than developers throughout a system's life-cycle. This leads to developers taking on extra roles, such as advisor, while they potentially lack the required skills and guidance from domain experts. (2) End-users and policy-makers often lack the technical skills to interpret a system's limitations, and rely on developer roles for making decisions concerning fairness issues. (3) Citizens are structurally absent throughout a system's life-cycle, which may lead to decisions that do not include relevant considerations from impacted stakeholders.
Related papers
- Assistive AI for Augmenting Human Decision-making [3.379906135388703]
The paper shows how AI can assist in the complex process of decision-making while maintaining human oversight.
Central to our framework are the principles of privacy, accountability, and credibility.
arXiv Detail & Related papers (2024-10-18T10:16:07Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the importance of addressing biases as part of developing a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z) - Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML [5.433040083728602]
The need for accountability of the people behind AI systems can be addressed by leveraging processes in three fields of study: ethics, law, and computer science.
We first contrast notions of compliance in the ethical, legal, and technical fields.
We then focus on the role of values in articulating the synergies between the fields.
arXiv Detail & Related papers (2023-05-09T15:35:31Z) - ACROCPoLis: A Descriptive Framework for Making Sense of Fairness [6.4686347616068005]
We propose the ACROCPoLis framework to represent allocation processes with a modeling emphasis on fairness aspects.
The framework provides a shared vocabulary in which the factors relevant to fairness assessments for different situations and procedures are made explicit.
arXiv Detail & Related papers (2023-04-19T21:14:57Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Wer ist schuld, wenn Algorithmen irren? Entscheidungsautomatisierung, Organisationen und Verantwortung [Who is to blame when algorithms err? Decision automation, organisations, and responsibility] [0.0]
Algorithmic decision support (ADS) is increasingly used in a whole array of different contexts and structures.
Our article aims to give a brief overview of the central issues connected to ADS, responsibility and decision-making in organisational contexts.
We describe a set of guidelines and a complementary digital tool to assist practitioners in mapping responsibility when introducing ADS.
arXiv Detail & Related papers (2022-07-21T13:45:10Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps propagate experts' knowledge to the AI model.
arXiv Detail & Related papers (2021-02-23T08:07:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.