Fiduciary Responsibility: Facilitating Public Trust in Automated
Decision Making
- URL: http://arxiv.org/abs/2301.10001v1
- Date: Fri, 6 Jan 2023 18:19:01 GMT
- Title: Fiduciary Responsibility: Facilitating Public Trust in Automated
Decision Making
- Authors: Shannon B. Harper and Eric S. Weber
- Abstract summary: Research and real-world experience indicate that the public lacks trust in automated decision-making systems.
The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility.
This position paper defines and explains the role of fiduciary responsibility within an automated decision-making system.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated decision-making systems are being increasingly deployed and affect
the public in a multitude of positive and negative ways. Governmental and
private institutions use these systems to process information according to
certain human-devised rules in order to address social problems or
organizational challenges. Both research and real-world experience indicate
that the public lacks trust in automated decision-making systems and the
institutions that deploy them. The recreancy theorem argues that the public is
more likely to trust and support decisions made or influenced by automated
decision-making systems if the institutions that administer them meet their
fiduciary responsibility. However, the public is often never informed of how
these systems operate and how the resultant institutional decisions are made. A
"black box" effect of automated decision-making systems reduces the public's
perceptions of integrity and trustworthiness. The result is that the public
loses the capacity to identify, challenge, and rectify unfairness or the costs
associated with the loss of public goods or benefits.
The current position paper defines and explains the role of fiduciary
responsibility within an automated decision-making system. We formulate an
automated decision-making system as a data science lifecycle (DSL) and examine
the implications of fiduciary responsibility within the context of the DSL.
Fiduciary responsibility within DSLs provides a methodology for addressing the
public's lack of trust in automated decision-making systems and the
institutions that employ them to make decisions affecting the public. We posit
that fiduciary responsibility manifests in several contexts of a DSL, each of
which requires its own mitigation of sources of mistrust. To instantiate
fiduciary responsibility, a Los Angeles Police Department (LAPD) predictive
policing case study is examined.
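To make the DSL framing concrete, here is a minimal, hypothetical sketch of a lifecycle with a fiduciary check at each stage. The stage names and checks are illustrative assumptions, not taken from the paper:
```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    checks: list = field(default_factory=list)

    def run_checks(self, context: dict) -> list:
        # Each check returns True when the corresponding duty is documented.
        return [(check.__name__, check(context)) for check in self.checks]

def provenance_documented(ctx: dict) -> bool:
    # Duty of care (assumed example): the institution can state where its data came from.
    return bool(ctx.get("data_sources"))

def public_notice_given(ctx: dict) -> bool:
    # Duty of loyalty (assumed example): affected parties were told the system is in use.
    return bool(ctx.get("public_notice"))

LIFECYCLE = [
    Stage("data collection", [provenance_documented]),
    Stage("modeling"),
    Stage("deployment", [public_notice_given]),
]

ctx = {"data_sources": ["incident reports"], "public_notice": False}
for stage in LIFECYCLE:
    for name, passed in stage.run_checks(ctx):
        print(f"{stage.name}: {name} -> {'ok' if passed else 'FAILED'}")
```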
Related papers
- Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Equal Confusion Fairness: Measuring Group-Based Disparities in Automated
Decision Systems [5.076419064097733]
This paper proposes a new equal confusion fairness test to check an automated decision system for fairness and a new confusion parity error to quantify the extent of any unfairness.
Overall, the methods and metrics provided here may assess automated decision systems' fairness as part of a more extensive accountability assessment.
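As a hedged illustration of the idea (not the paper's exact definitions), the sketch below compares each group's normalized confusion matrix against the overall one and reports the summed absolute deviation per group:
```python
import numpy as np

def normalized_confusion(y_true, y_pred, n_classes):
    # Build a confusion matrix and normalize it to a probability table.
    m = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        m[t, p] += 1
    return m / max(m.sum(), 1)

def confusion_parity_error(y_true, y_pred, groups, n_classes=2):
    # Assumed metric: per-group L1 distance from the overall confusion matrix.
    overall = normalized_confusion(y_true, y_pred, n_classes)
    errors = {}
    for g in np.unique(groups):
        mask = groups == g
        group_m = normalized_confusion(y_true[mask], y_pred[mask], n_classes)
        errors[g] = np.abs(group_m - overall).sum()
    return errors

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = np.array([0, 1, 0, 0, 1, 1, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(confusion_parity_error(y_true, y_pred, groups))
```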
arXiv Detail & Related papers (2023-07-02T04:44:19Z) - A Trust Framework for Government Use of Artificial Intelligence and
Automated Decision Making [0.1527458325979785]
This paper identifies the challenges of the mechanisation, digitisation and automation of public sector systems and processes.
It proposes a modern and practical framework to ensure and assure ethical and high veracity Artificial Intelligence (AI) or Automated Decision Making (ADM) systems in public institutions.
arXiv Detail & Related papers (2022-08-22T06:51:15Z) - Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
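For orientation only: the observed disparity that such an analysis starts from can be measured as a total variation over the data. Decomposing it into direct, indirect, and spurious causal pathways, which is the paper's actual contribution, requires a causal model and is not attempted in this minimal sketch:
```python
import numpy as np

def total_variation(y, x):
    """Observed outcome disparity between the x=1 and x=0 groups."""
    y, x = np.asarray(y), np.asarray(x)
    return y[x == 1].mean() - y[x == 0].mean()

y = [1, 0, 1, 1, 0, 0, 1, 0]  # observed decisions
x = [1, 1, 1, 1, 0, 0, 0, 0]  # protected attribute
print(total_variation(y, x))  # 0.75 - 0.25 = 0.5
```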
arXiv Detail & Related papers (2022-07-23T01:06:34Z) - Designing for Responsible Trust in AI Systems: A Communication
Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Inverse Online Learning: Understanding Non-Stationary and Reactionary
Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
To understand the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z) - Adaptive Autonomy in Human-on-the-Loop Vision-Based Robotics Systems [16.609594839630883]
Computer vision approaches are widely used by autonomous robotic systems to guide their decision making.
High accuracy is critical, particularly for Human-on-the-loop (HoTL) systems where humans play only a supervisory role.
We propose a solution based upon adaptive autonomy levels, whereby the system adjusts its level of autonomy when it detects a loss of reliability in these models.
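A minimal, hypothetical sketch of that pattern follows; the thresholds, window size, and level names are assumptions, not the paper's design:
```python
from collections import deque

class AdaptiveAutonomy:
    """Toy controller: degrade autonomy as recent model confidence drops."""
    def __init__(self, window=20, high=0.9, low=0.7):
        self.scores = deque(maxlen=window)  # recent vision-model confidences
        self.high, self.low = high, low     # assumed reliability cut-offs

    def update(self, confidence: float) -> str:
        self.scores.append(confidence)
        avg = sum(self.scores) / len(self.scores)
        if avg >= self.high:
            return "full_auto"      # model looks reliable
        if avg >= self.low:
            return "human_confirm"  # ask the supervisor to confirm actions
        return "manual"             # hand control back to the human

controller = AdaptiveAutonomy(window=3)
for c in [0.95, 0.92, 0.60, 0.55, 0.50]:
    print(controller.update(c))
```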
arXiv Detail & Related papers (2021-03-28T05:43:10Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Accuracy-Efficiency Trade-Offs and Accountability in Distributed ML
Systems [32.79201607581628]
Trade-offs between accuracy and efficiency pervade law, public health, and other non-computing domains.
We argue that since examining these trade-offs has been useful for guiding governance in other domains, we need to similarly reckon with these trade-offs in governing computer systems.
arXiv Detail & Related papers (2020-07-04T23:00:52Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z) - Modeling Perception Errors towards Robust Decision Making in Autonomous
Vehicles [11.503090828741191]
We propose a simulation-based methodology towards answering the question: is a perception subsystem sufficient for the decision making subsystem to make robust, safe decisions?
We show how to analyze the impact of different kinds of sensing and perception errors on the behavior of the autonomous system.
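In the same spirit, here is a hedged toy version of such a simulation; the error models, thresholds, and decision rule are illustrative assumptions rather than the paper's methodology:
```python
import random

def perceive(true_distance, noise_std=2.0, miss_rate=0.05):
    """Simulated perception: Gaussian range noise plus missed detections."""
    if random.random() < miss_rate:
        return None  # obstacle not detected at all
    return true_distance + random.gauss(0.0, noise_std)

def decide(perceived_distance, brake_threshold=10.0):
    """Brake if a detected obstacle is within the threshold."""
    return perceived_distance is not None and perceived_distance < brake_threshold

def robustness(true_distance=8.0, trials=10_000):
    # Estimate how often perception errors still lead to the safe decision.
    braked = sum(decide(perceive(true_distance)) for _ in range(trials))
    return braked / trials

random.seed(0)
print(f"safe-decision rate: {robustness():.3f}")
```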
arXiv Detail & Related papers (2020-01-31T08:02:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.