Challenging the Machine: Contestability in Government AI Systems
- URL: http://arxiv.org/abs/2406.10430v1
- Date: Fri, 14 Jun 2024 22:22:17 GMT
- Title: Challenging the Machine: Contestability in Government AI Systems
- Authors: Susan Landau, James X. Dempsey, Ece Kamar, Steven M. Bellovin, Robert Pool
- Abstract summary: The January 24-25, 2024 workshop aimed to transform aspirations regarding artificial intelligence into actionable guidance.
The requirements for contestability of advanced decision-making systems are not yet fully defined or implemented.
This document is a report of that workshop, along with recommendations and explanatory material.
- Score: 13.157925465480405
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In an October 2023 executive order (EO), President Biden issued a detailed but largely aspirational road map for the safe and responsible development and use of artificial intelligence (AI). The challenge for the January 24-25, 2024 workshop was to transform those aspirations regarding one specific but crucial issue -- the ability of individuals to challenge government decisions made about themselves -- into actionable guidance enabling agencies to develop, procure, and use genuinely contestable advanced automated decision-making systems. While the Administration has taken important steps since the October 2023 EO, the insights garnered from our workshop remain highly relevant, as the requirements for contestability of advanced decision-making systems are not yet fully defined or implemented. The workshop brought together technologists, members of government agencies and civil society organizations, litigators, and researchers in an intensive two-day meeting that examined the challenges that users, developers, and agencies faced in enabling contestability in light of advanced automated decision-making systems. To ensure a free and open flow of discussion, the meeting was held under a modified version of the Chatham House rule. Participants were free to use any information or details that they learned, but they could not attribute any remarks made at the meeting to the identity or the affiliation of the speaker. Thus, the workshop summary that follows anonymizes speakers and their affiliations. Where an agency, company, or organization is identified, the identification is drawn from a public, named resource and does not necessarily reflect statements made by participants at the workshop. This document is a report of that workshop, along with recommendations and explanatory material.
Related papers
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Development of Autonomous Artificial Intelligence Systems for Corporate Management [0.0]
The function of a corporate director remains one of the few that legislation requires to be performed by a "natural" rather than an "artificial" person.
The main prerequisites for development of systems for full automation of management decisions made at the level of a board of directors are formed in the field of corporate law.
There are two main options for automating management decisions at the level of top management and a board of directors: digital command centers or automation of separate functions.
arXiv Detail & Related papers (2024-07-19T08:02:58Z)
- Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges [0.0]
In May 2023, the Georgia Tech Ethics, Technology, and Human Interaction Center organized the Conference on Ethical and Responsible Design in the National AI Institutes.
The conference focused on three questions: What are the main challenges that the National AI Institutes are facing with regard to the responsible design of AI systems?
This document summarizes the challenges that representatives from the Institutes in attendance highlighted.
arXiv Detail & Related papers (2024-07-18T22:30:08Z)
- Recommendations for Government Development and Use of Advanced Automated Systems to Make Decisions about Individuals [14.957989495850935]
Contestability is often constitutionally required as an element of due process.
We convened a workshop on advanced automated decision making, contestability, and the law.
arXiv Detail & Related papers (2024-03-04T00:03:00Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical need to address bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw).
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- ChoiceMates: Supporting Unfamiliar Online Decision-Making with Multi-Agent Conversational Interactions [58.71970923420007]
We present ChoiceMates, a system that enables conversations with a dynamic set of LLM-powered agents.
Agents, as opinionated personas, flexibly join the conversation, not only providing responses but also conversing among themselves to elicit each agent's preferences.
Our study (n=36) comparing ChoiceMates to conventional web search and a single-agent baseline showed that ChoiceMates was more helpful for discovering, diving deeper into, and managing information than web search, and with higher confidence.
arXiv Detail & Related papers (2023-10-02T16:49:39Z)
- Issues and Challenges in Applications of Artificial Intelligence to Nuclear Medicine -- The Bethesda Report (AI Summit 2022) [6.810499400672468]
The SNMMI Artificial Intelligence (SNMMI-AI) Summit took place in Bethesda, MD on March 21-22, 2022.
It brought together various community members and stakeholders from academia, healthcare, industry, patient representatives, and government (NIH, FDA).
It considered various key themes to envision and facilitate a bright future for routine, trustworthy use of AI in nuclear medicine.
arXiv Detail & Related papers (2022-11-07T18:57:52Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse to this online learning problem.
We introduce a practical algorithm for retrospectively estimating such perceived effects, alongside the process through which agents update them.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
- Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence [0.0]
The Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to provide comments.
The present document includes MAIEI comments and recommendations in writing.
arXiv Detail & Related papers (2020-06-12T09:20:04Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)