Challenging the Machine: Contestability in Government AI Systems
- URL: http://arxiv.org/abs/2406.10430v1
- Date: Fri, 14 Jun 2024 22:22:17 GMT
- Title: Challenging the Machine: Contestability in Government AI Systems
- Authors: Susan Landau, James X. Dempsey, Ece Kamar, Steven M. Bellovin, Robert Pool
- Abstract summary: The January 24-25, 2024 workshop aimed to transform aspirations regarding artificial intelligence into actionable guidance.
The requirements for contestability of advanced decision-making systems are not yet fully defined or implemented.
This document is a report of that workshop, along with recommendations and explanatory material.
- Score: 13.157925465480405
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In an October 2023 executive order (EO), President Biden issued a detailed but largely aspirational road map for the safe and responsible development and use of artificial intelligence (AI). The challenge for the January 24-25, 2024 workshop was to transform those aspirations regarding one specific but crucial issue -- the ability of individuals to challenge government decisions made about themselves -- into actionable guidance enabling agencies to develop, procure, and use genuinely contestable advanced automated decision-making systems. While the Administration has taken important steps since the October 2023 EO, the insights garnered from our workshop remain highly relevant, as the requirements for contestability of advanced decision-making systems are not yet fully defined or implemented. The workshop brought together technologists, members of government agencies and civil society organizations, litigators, and researchers in an intensive two-day meeting that examined the challenges that users, developers, and agencies faced in enabling contestability in light of advanced automated decision-making systems. To ensure a free and open flow of discussion, the meeting was held under a modified version of the Chatham House rule: participants were free to use any information or details they learned, but they could not attribute any remarks made at the meeting to the identity or affiliation of the speaker. Thus, the workshop summary that follows anonymizes speakers and their affiliations. Where an agency, company, or organization is identified, the identification is drawn from a public, identified resource and does not necessarily reflect statements made by participants at the workshop. This document is a report of that workshop, along with recommendations and explanatory material.
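To make the notion of a "genuinely contestable" decision system concrete, the sketch below imagines minimal contestability hooks for a hypothetical agency decision service: each decision is recorded with the inputs, model version, and rationale behind it, can be disclosed to the affected individual, and can be challenged into human review. Every name and structure here is an illustrative assumption, not anything prescribed by the workshop report.

```python
# Illustrative sketch only: the workshop report does not prescribe an API.
# All names (DecisionRecord, ContestableDecisionService, ...) are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any
import uuid

@dataclass
class DecisionRecord:
    """What a person would need in order to understand and challenge a decision."""
    subject_id: str
    inputs: dict[str, Any]   # the data the system actually used
    model_version: str       # pins the decision to an auditable artifact
    outcome: str
    rationale: str           # plain-language explanation of the key factors
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))

class ContestableDecisionService:
    def __init__(self) -> None:
        self._decisions: dict[str, DecisionRecord] = {}
        self._appeals: list[dict[str, str]] = []

    def record(self, rec: DecisionRecord) -> str:
        self._decisions[rec.decision_id] = rec
        return rec.decision_id

    def disclose(self, decision_id: str) -> DecisionRecord:
        # The affected individual can inspect what was decided, from what, and why.
        return self._decisions[decision_id]

    def contest(self, decision_id: str, grounds: str) -> dict[str, str]:
        # Filing a challenge routes the full decision record to human review.
        appeal = {"decision_id": decision_id, "grounds": grounds,
                  "status": "pending_human_review"}
        self._appeals.append(appeal)
        return appeal
```

The design point the sketch tries to capture is that a challenge is only meaningful if the disclosed record is complete enough to argue against: the inputs, the model version, and the rationale all have to travel with the decision.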
Related papers
- Human Decision-making is Susceptible to AI-driven Manipulation [71.20729309185124]
AI systems may exploit users' cognitive biases and emotional vulnerabilities to steer them toward harmful outcomes.
This study examined human susceptibility to such manipulation in financial and emotional decision-making contexts.
arXiv Detail & Related papers (2025-02-11T15:56:22Z)
- Autonomy and Safety Assurance in the Early Development of Robotics and Autonomous Systems [0.8999666725996975]
CRADLE aims to make assurance an integral part of engineering reliable, transparent, and trustworthy autonomous systems.
The workshop brought together representatives from six regulatory and assurance bodies across diverse sectors.
Key discussions revolved around three research questions: (i) challenges in assuring safety for AIR; (ii) evidence for safety assurance; and (iii) how assurance cases need to differ for autonomous systems.
arXiv Detail & Related papers (2025-01-30T16:00:26Z)
- Future of Information Retrieval Research in the Age of Generative AI [61.56371468069577]
In the fast-evolving field of information retrieval (IR), the integration of generative AI technologies such as large language models (LLMs) is transforming how users search for and interact with information.
Recognizing this paradigm shift, a visioning workshop was held in July 2024 to discuss the future of IR in the age of generative AI.
This report summarizes the discussions as a set of potentially important research topics and lists recommendations for academics, industry practitioners, institutions, evaluation campaigns, and funding agencies.
arXiv Detail & Related papers (2024-12-03T00:01:48Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize interviewees' beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Report on the Conference on Ethical and Responsible Design in the National AI Institutes: A Summary of Challenges [0.0]
In May 2023, the Georgia Tech Ethics, Technology, and Human Interaction Center organized the Conference on Ethical and Responsible Design in the National AI Institutes.
The conference focused on three questions, among them: What are the main challenges that the National AI Institutes are facing with regard to the responsible design of AI systems?
This document summarizes the challenges that representatives from the Institutes in attendance highlighted.
arXiv Detail & Related papers (2024-07-18T22:30:08Z)
- Recommendations for Government Development and Use of Advanced Automated Systems to Make Decisions about Individuals [14.957989495850935]
Contestability is often constitutionally required as an element of due process.
We convened a workshop on advanced automated decision making, contestability, and the law.
arXiv Detail & Related papers (2024-03-04T00:03:00Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw).
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges at the intersection of law and Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Issues and Challenges in Applications of Artificial Intelligence to Nuclear Medicine -- The Bethesda Report (AI Summit 2022) [6.810499400672468]
The SNMMI Artificial Intelligence (SNMMI-AI) Summit took place in Bethesda, MD on March 21-22, 2022.
It brought together community members and stakeholders from academia, healthcare, and industry, along with patient representatives and government agencies (NIH, FDA).
It considered key themes for envisioning and facilitating a bright future for the routine, trustworthy use of AI in nuclear medicine.
arXiv Detail & Related papers (2022-11-07T18:57:52Z)
- Inverse Online Learning: Understanding Non-Stationary and Reactionary Policies [79.60322329952453]
We show how to develop interpretable representations of how agents make decisions.
By understanding the decision-making processes underlying a set of observed trajectories, we cast the policy inference problem as the inverse of an online learning problem.
We introduce a practical algorithm for retrospectively estimating the effects agents perceive their actions to have, alongside the process through which agents update those estimates.
Through application to the analysis of UNOS organ donation acceptance decisions, we demonstrate that our approach can bring valuable insights into the factors that govern decision processes and how they change over time.
arXiv Detail & Related papers (2022-03-14T17:40:42Z)
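The entry above describes an algorithmic idea: infer a policy by treating it as the inverse of an online learning process. The sketch below is not the paper's algorithm; it is a generic toy illustration of that framing, in which a simulated agent updates a decision weight online and we recover the agent's (hypothetical) initial weight and learning rate purely from its observed decisions.

```python
# Generic toy illustration of "policy inference as inverse online learning".
# This is NOT the paper's algorithm: a simulated agent updates a decision
# weight online, and we recover its (hypothetical) initial weight and
# learning rate from the decisions it was observed to make.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def simulate_agent(w0, lr, xs, outcomes):
    """Agent accepts x with probability sigmoid(w * x), then nudges w toward
    the observed outcome (an online logistic-gradient step)."""
    w, actions = w0, []
    for x, y in zip(xs, outcomes):
        p = sigmoid(w * x)
        actions.append(int(rng.random() < p))
        w += lr * (y - p) * x
    return np.array(actions)

def log_likelihood(w0, lr, xs, outcomes, actions):
    """Likelihood of the observed actions under candidate update dynamics."""
    w, ll = w0, 0.0
    for x, y, a in zip(xs, outcomes, actions):
        p = sigmoid(w * x)
        ll += np.log((p if a == 1 else 1.0 - p) + 1e-12)
        w += lr * (y - p) * x   # replay the same online update rule
    return ll

# Hypothetical trajectory: contexts, outcomes, and the agent's decisions.
xs = rng.normal(size=500)
outcomes = (xs + 0.3 * rng.normal(size=500) > 0).astype(float)
actions = simulate_agent(w0=-1.0, lr=0.2, xs=xs, outcomes=outcomes)

# Inverse step: grid-search the dynamics that best explain the decisions.
grid = [(w0, lr) for w0 in np.linspace(-2, 2, 21)
                 for lr in np.linspace(0.0, 0.5, 26)]
best = max(grid, key=lambda g: log_likelihood(g[0], g[1], xs, outcomes, actions))
print(f"recovered w0={best[0]:.2f}, lr={best[1]:.2f} (true: -1.00, 0.20)")
```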
- Response to Office of the Privacy Commissioner of Canada Consultation Proposals pertaining to amendments to PIPEDA relative to Artificial Intelligence [0.0]
The Montreal AI Ethics Institute (MAIEI) was invited by the Office of the Privacy Commissioner of Canada (OPCC) to comment on its proposals for amending PIPEDA with respect to artificial intelligence.
This document presents MAIEI's written comments and recommendations.
arXiv Detail & Related papers (2020-06-12T09:20:04Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
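Among the software mechanisms the report analyzes are audit trails. As a minimal, purely illustrative sketch (assuming nothing about the report's own specification), a hash-chained log shows how an audit trail can make tampering with recorded AI-system events detectable, which is what lets third parties verify claims about what a system actually did.

```python
# Minimal sketch of one software mechanism the report discusses: an audit
# trail. A hash chain makes after-the-fact tampering with logged AI-system
# events detectable. The design below is illustrative, not from the report.
import hashlib
import json

GENESIS = "0" * 64

def append_event(log: list[dict], event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["entry_hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "prev_hash": prev_hash, "entry_hash": entry_hash})

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited or deleted entry breaks it."""
    prev_hash = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256((prev_hash + body).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

log: list[dict] = []
append_event(log, {"model": "model-v1", "decision_id": "d-001", "outcome": "deny"})
append_event(log, {"model": "model-v1", "decision_id": "d-002", "outcome": "approve"})
assert verify(log)
log[0]["event"]["outcome"] = "approve"   # tampering with a logged decision...
assert not verify(log)                   # ...is detected on verification
```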