"Even explanations will not help in trusting [this] fundamentally biased system": A Predictive Policing Case-Study
- URL: http://arxiv.org/abs/2504.11020v1
- Date: Tue, 15 Apr 2025 09:43:48 GMT
- Title: "Even explanations will not help in trusting [this] fundamentally biased system": A Predictive Policing Case-Study
- Authors: Siddharth Mehrotra, Ujwal Gadiraju, Eva Bittner, Folkert van Delden, Catholijn M. Jonker, Myrthe L. Tielman
- Abstract summary: The use of AI systems in high-risk domains has often led users either to under-trust them, potentially causing inadequate reliance, or to over-trust them, resulting in over-compliance. Past research has indicated that explanations provided by AI systems can enhance user understanding of when to trust or not trust the system. This study explores the impact of different explanation types and user expertise on establishing appropriate trust in AI-based predictive policing.
- Score: 8.240854389254222
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In today's society, where Artificial Intelligence (AI) has gained a vital role, concerns regarding users' trust have garnered significant attention. The use of AI systems in high-risk domains has often led users either to under-trust them, potentially causing inadequate reliance, or to over-trust them, resulting in over-compliance. Users must therefore maintain an appropriate level of trust. Past research has indicated that explanations provided by AI systems can enhance user understanding of when to trust or not trust the system. However, the utility of presenting explanations in different forms remains to be explored, especially in high-risk domains. This study therefore explores the impact of different explanation types (text, visual, and hybrid) and user expertise (retired police officers and lay users) on establishing appropriate trust in AI-based predictive policing. While we observed that the hybrid form of explanations increased expert users' subjective trust in AI, it did not lead to better decision-making. Furthermore, no form of explanation helped build appropriate trust. The findings of our study emphasize the importance of re-evaluating the use of explanations to build [appropriate] trust in AI-based systems, especially when the system's use is questionable. Finally, we synthesize potential challenges and policy recommendations based on our results to design for appropriate trust in high-risk AI-based systems.
Related papers
- Is Trust Correlated With Explainability in AI? A Meta-Analysis [0.0]
We conduct a comprehensive examination of the existing literature to explore the relationship between AI explainability and trust.
Our analysis, incorporating data from 90 studies, reveals a statistically significant but moderate positive correlation between the explainability of AI systems and the trust they engender.
This research highlights its broader socio-technical implications, particularly in promoting accountability and fostering user trust in critical domains such as healthcare and justice.
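The meta-analysis above reports a pooled correlation, but its exact aggregation method is not given here. As a rough illustration of how such a figure can be obtained, the sketch below pools per-study correlations with a fixed-effect Fisher-z average; all study values are invented for demonstration.

```python
# Illustrative only: generic fixed-effect pooling of correlation coefficients
# via Fisher's z-transform. The (r, n) pairs below are invented study values,
# not data from the meta-analysis summarized above.
import math

studies = [(0.35, 120), (0.22, 80), (0.41, 200), (0.18, 60)]  # (correlation, sample size)

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(z):
    """Back-transform a Fisher z value to a correlation."""
    return math.tanh(z)

# Weight each study by the inverse variance of its z value, which is 1 / (n - 3).
weights = [n - 3 for _, n in studies]
pooled_z = sum(w * fisher_z(r) for (r, _), w in zip(studies, weights)) / sum(weights)
pooled_r = inverse_fisher_z(pooled_z)

print(f"Pooled correlation: r = {pooled_r:.2f}")  # a "moderate" positive value
```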
arXiv Detail & Related papers (2025-04-16T23:30:55Z) - Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
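As context for the ERM framing above, the sketch below shows the basic objective being minimized: an average loss over training examples plus a regularization term, one of the components to which such design choices can attach. The logistic-regression setup, L2 penalty, and synthetic data are illustrative assumptions, not the guide's own recipe.

```python
# A minimal sketch of empirical risk minimization (ERM): logistic regression
# trained by gradient descent on the regularized empirical risk
#     R(w) = (1/n) * sum_i loss(w; x_i, y_i) + lambda * ||w||^2.
# The L2 penalty merely illustrates how one design choice (regularization)
# enters the objective. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                             # synthetic features
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)    # synthetic binary labels

def empirical_risk(w, lam=0.01):
    """Average logistic loss plus an L2 regularization term."""
    logits = X @ w
    loss = np.mean(np.log1p(np.exp(-(2 * y - 1) * logits)))
    return loss + lam * np.dot(w, w)

def gradient(w, lam=0.01):
    """Gradient of the regularized empirical risk."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                    # predicted probabilities
    return X.T @ (p - y) / len(y) + 2 * lam * w

w = np.zeros(3)
for _ in range(500):                                      # plain gradient descent
    w -= 0.5 * gradient(w)

print("learned weights:", np.round(w, 2))
print("final empirical risk:", round(empirical_risk(w), 4))
```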
arXiv Detail & Related papers (2024-10-25T07:53:32Z) - A Diachronic Perspective on User Trust in AI under Uncertainty [52.44939679369428]
Modern NLP systems are often uncalibrated, resulting in confidently incorrect predictions that undermine user trust.
We study the evolution of user trust in response to trust-eroding events using a betting game.
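One standard way to quantify the miscalibration mentioned above is expected calibration error (ECE), the gap between stated confidence and observed accuracy averaged over confidence bins. The sketch below is a minimal illustration with synthetic, deliberately overconfident predictions; the 10-bin scheme is an assumption and not the paper's betting-game setup.

```python
# A minimal sketch of expected calibration error (ECE). Confidences and labels
# are invented; the simulated model is overconfident, so accuracy lags behind
# its stated confidence.
import numpy as np

rng = np.random.default_rng(1)
confidences = rng.uniform(0.5, 1.0, size=1000)           # confidence in each prediction
correct = rng.uniform(size=1000) < (confidences - 0.15)  # accuracy ~15 points lower

def expected_calibration_error(conf, correct, n_bins=10):
    """Weighted average gap between confidence and accuracy over equal-width bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            gap = abs(conf[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap                    # weight by bin frequency
    return ece

print(f"ECE = {expected_calibration_error(confidences, correct):.3f}")
```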
arXiv Detail & Related papers (2023-10-20T14:41:46Z) - Who to Trust, How and Why: Untangling AI Ethics Principles, Trustworthiness and Trust [0.0]
We argue for the need to distinguish these concepts more clearly.
We discuss that trust in AI involves not only reliance on the system itself, but also trust in the developers of the AI system.
arXiv Detail & Related papers (2023-09-19T05:00:34Z) - A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective [0.0]
User trust in Artificial Intelligence (AI)-enabled systems has been increasingly recognized and shown to be a key element in fostering adoption.
This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies.
arXiv Detail & Related papers (2023-04-18T07:58:09Z) - Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Knowledge-intensive Language Understanding for Explainable AI [9.541228711585886]
Understanding how AI-led decisions are made and which determining factors were included is crucial.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
arXiv Detail & Related papers (2021-08-02T21:12:30Z) - Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems [0.8701566919381223]
We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user-interface for our XAI framework and evaluated its efficacy via a human-user experiment.
arXiv Detail & Related papers (2021-06-07T16:38:43Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are among the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Where Does Trust Break Down? A Quantitative Trust Analysis of Deep Neural Networks via Trust Matrix and Conditional Trust Densities [94.65749466106664]
We introduce the concept of trust matrix, a novel trust quantification strategy.
A trust matrix defines the expected question-answer trust for a given actor-oracle answer scenario.
We further extend the concept of trust densities with the notion of conditional trust densities.
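The exact question-answer trust formulation is not reproduced in the summary above, so the sketch below only illustrates the general shape of a trust matrix: a table indexed by (oracle answer, model answer) holding an average trust score. The scoring rule used here (confidence when correct, one minus confidence when wrong) and all predictions are simplified stand-ins, not the paper's definition.

```python
# A simplified illustration of tabulating a "trust matrix" over
# (oracle answer, model answer) pairs. The scoring rule below is a stand-in
# for the paper's question-answer trust, and all predictions are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_classes, n_samples = 3, 500
true_labels = rng.integers(n_classes, size=n_samples)
pred_labels = np.where(rng.uniform(size=n_samples) < 0.8,
                       true_labels, rng.integers(n_classes, size=n_samples))
confidence = rng.uniform(0.5, 1.0, size=n_samples)

# trust_matrix[i, j] = mean trust score when the oracle says i and the model says j
trust_matrix = np.zeros((n_classes, n_classes))
counts = np.zeros((n_classes, n_classes))
for t, p, c in zip(true_labels, pred_labels, confidence):
    score = c if t == p else 1.0 - c      # reward confident correct answers,
    trust_matrix[t, p] += score           # penalize confident wrong ones
    counts[t, p] += 1
trust_matrix = np.divide(trust_matrix, counts,
                         out=np.zeros_like(trust_matrix), where=counts > 0)

print(np.round(trust_matrix, 2))
```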
arXiv Detail & Related papers (2020-09-30T14:33:43Z)