(Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court
        - URL: http://arxiv.org/abs/2403.13004v1
 - Date: Wed, 13 Mar 2024 23:19:46 GMT
 - Title: (Beyond) Reasonable Doubt: Challenges that Public Defenders Face in Scrutinizing AI in Court
 - Authors: Angela Jin, Niloufar Salehi
 - Abstract summary: We study efforts to contest AI systems in practice by studying how public defenders scrutinize AI in court.
We present findings from interviews with 17 people in the U.S. public defense community.
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract:   Accountable use of AI systems in high-stakes settings relies on making systems contestable. In this paper we study efforts to contest AI systems in practice by studying how public defenders scrutinize AI in court. We present findings from interviews with 17 people in the U.S. public defense community to understand their perceptions of and experiences scrutinizing computational forensic software (CFS) -- automated decision systems that the government uses to convict and incarcerate, such as facial recognition, gunshot detection, and probabilistic genotyping tools. We find that our participants faced challenges assessing and contesting CFS reliability due to difficulties (a) navigating how CFS is developed and used, (b) overcoming judges and jurors' non-critical perceptions of CFS, and (c) gathering CFS expertise. To conclude, we provide recommendations that center the technical, social, and institutional context to better position interventions such as performance evaluations to support contestability in practice. 
        Related papers
        - The AI Imperative: Scaling High-Quality Peer Review in Machine Learning [49.87236114682497]
We argue that AI-assisted peer review must become an urgent research and infrastructure priority.
We propose specific roles for AI in enhancing factual verification, guiding reviewer performance, assisting authors in quality improvement, and supporting ACs in decision-making.
arXiv  Detail & Related papers  (2025-06-09T18:37:14Z) - In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv  Detail & Related papers  (2025-03-21T05:09:46Z) - From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We cast auditing as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and argue that this framing provides clear and interpretable guidance on audit implementation.
arXiv  Detail & Related papers  (2024-10-07T06:15:46Z) - Ensuring Fairness with Transparent Auditing of Quantitative Bias in AI Systems [0.30693357740321775]
AI systems may exhibit biases that lead decision-makers to draw unfair conclusions.
We present a framework for auditing AI fairness involving third-party auditors and AI system providers.
We have created a tool to facilitate systematic examination of AI systems.
arXiv  Detail & Related papers  (2024-08-24T17:16:50Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv  Detail & Related papers  (2024-07-03T15:38:57Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv  Detail & Related papers  (2024-03-21T19:12:37Z) - Recommendations for Government Development and Use of Advanced Automated Systems to Make Decisions about Individuals [14.957989495850935]
Contestability is often constitutionally required as an element of due process.
We convened a workshop on advanced automated decision making, contestability, and the law.
arXiv  Detail & Related papers  (2024-03-04T00:03:00Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv  Detail & Related papers  (2024-02-21T08:29:42Z) - Perspectives on Large Language Models for Relevance Judgment [56.935731584323996]
It is claimed that large language models (LLMs) can assist with relevance judgments.
It is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
arXiv  Detail & Related papers  (2023-04-13T13:08:38Z) - Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv  Detail & Related papers  (2022-06-20T16:27:06Z) - Adversarial Scrutiny of Evidentiary Statistical Software [32.962815960406196]
The U.S. criminal legal system increasingly relies on software output to convict and incarcerate people.
We propose robust adversarial testing as an audit framework to examine the validity of evidentiary statistical software.
arXiv  Detail & Related papers  (2022-06-19T02:08:42Z) - Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey [50.90773979394264]
It reviews the conditions under which privacy and fairness may have aligned or contrasting goals.
It analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks.
arXiv  Detail & Related papers  (2022-02-16T16:50:23Z) - Performance in the Courtroom: Automated Processing and Visualization of Appeal Court Decisions in France [20.745220428708457]
We use NLP methods to extract interesting entities/data from judgments to construct networks of lawyers and judgments.
We propose metrics to rank lawyers based on their experience, win/loss ratio, and their importance in the network of lawyers.
arXiv  Detail & Related papers  (2020-06-11T08:22:59Z) 
        This list is automatically generated from the titles and abstracts of the papers on this site.
           This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.