Towards User-Centred Design of AI-Assisted Decision-Making in Law Enforcement
- URL: http://arxiv.org/abs/2504.17393v1
- Date: Thu, 24 Apr 2025 09:25:29 GMT
- Title: Towards User-Centred Design of AI-Assisted Decision-Making in Law Enforcement
- Authors: Vesna Nowack, Dalal Alrajeh, Carolina Gutierrez Muñoz, Katie Thomas, William Hobson, Catherine Hamilton-Giachritsis, Patrick Benjamin, Tim Grant, Juliane A. Kloess, Jessica Woodhams
- Abstract summary: User requirements for AI-assisted systems in law enforcement remain unclear. Participants in our study highlighted the need for a system capable of processing and analysing large volumes of data efficiently. We argue that it is very unlikely that the system will ever achieve full automation due to the dynamic and complex nature of the law enforcement domain.
- Score: 1.1890528509539204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) has become an important part of our everyday lives, yet user requirements for designing AI-assisted systems in law enforcement remain unclear. To address this gap, we conducted qualitative research on decision-making within a law enforcement agency. Our study aimed to identify limitations of existing practices, explore user requirements and understand the responsibilities that humans expect to undertake in these systems. Participants in our study highlighted the need for a system capable of processing and analysing large volumes of data efficiently to help in crime detection and prevention. Additionally, the system should satisfy requirements for scalability, accuracy, justification, trustworthiness and adaptability to be adopted in this domain. Participants also emphasised the importance of having end users review input data that might be challenging for AI to interpret, and validate the generated output to ensure the system's accuracy. To keep up with the evolving nature of the law enforcement domain, end users need to help the system adapt to changes in criminal behaviour and government guidance, and technical experts need to regularly oversee and monitor the system. Furthermore, user-friendly human interaction with the system is essential for its adoption, and some participants confirmed they would be happy to be in the loop and provide feedback that the system can learn from. Finally, we argue that the system is very unlikely ever to achieve full automation, given the dynamic and complex nature of the law enforcement domain.
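The requirements above (large-scale automated analysis, with end users validating outputs the AI is unsure about and feeding corrections back into the system) map naturally onto a human-in-the-loop review pipeline. The following is a minimal, illustrative Python sketch of such a pipeline; every name in it (`Case`, `analyse`, the confidence threshold) is an assumption made for this example, not part of the paper or any deployed system.

```python
from dataclasses import dataclass
from typing import Iterable, List

@dataclass
class Case:
    """One item of case data routed through the assistive system."""
    case_id: str
    raw_text: str            # input data the end user may need to review
    ai_output: str = ""      # system-generated analysis
    confidence: float = 0.0  # model's self-reported confidence
    validated: bool = False  # set once a human has checked the output

def analyse(case: Case) -> Case:
    """Stand-in for the AI analysis step (hypothetical placeholder)."""
    case.ai_output = f"auto-summary of {case.case_id}"
    case.confidence = 0.55 if "ambiguous" in case.raw_text else 0.9
    return case

def review_loop(cases: Iterable[Case], threshold: float = 0.8) -> List[Case]:
    """Auto-accept high-confidence outputs; queue the rest for human
    validation, mirroring the participants' requirement that end users
    check data the AI finds hard to interpret."""
    needs_review: List[Case] = []
    for case in map(analyse, cases):
        if case.confidence >= threshold:
            case.validated = True      # accepted without human review
        else:
            needs_review.append(case)  # surfaced to a human reviewer;
                                       # corrections become feedback
    return needs_review

if __name__ == "__main__":
    queue = [Case("c-001", "routine report"),
             Case("c-002", "ambiguous witness statement")]
    pending = review_loop(queue)
    print(f"{len(pending)} case(s) routed to a human reviewer")
```

The design point this illustrates is the paper's closing argument: the threshold keeps humans in the loop exactly where the model is least reliable, rather than pursuing full automation.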
Related papers
- Compliance of AI Systems [0.0]
This paper systematically examines the compliance of AI systems with relevant legislation, focusing on the EU's AI Act. The analysis highlights many challenges associated with edge devices, which are increasingly used to deploy AI applications ever closer to the data sources. Data set compliance is highlighted as a cornerstone for ensuring the trustworthiness, transparency, and explainability of AI systems.
arXiv Detail & Related papers (2025-03-07T16:53:36Z) - Human-centred test and evaluation of military AI [0.0]
The REAIM 2024 Blueprint for Action states that AI applications in the military domain should be ethical and human-centric. TEVV in the development and deployment of AI systems needs to involve human users throughout the lifecycle. Traditional human-centred test and evaluation methods from human factors need to be adapted for deployed AI systems.
arXiv Detail & Related papers (2024-12-02T21:14:55Z) - A Blueprint for Auditing Generative AI [0.9999629695552196]
Generative AI systems display emergent capabilities and are adaptable to a wide range of downstream tasks.
Existing auditing procedures fail to address the governance challenges posed by generative AI systems.
We propose a three-layered approach: governance audits of technology providers that design and disseminate generative AI systems, model audits of generative AI systems after pre-training but prior to their release, and application audits of applications built on top of generative AI systems.
arXiv Detail & Related papers (2024-07-07T11:56:54Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Governing Through the Cloud: The Intermediary Role of Compute Providers in AI Regulation [14.704747149179047]
We argue that compute providers should have legal obligations and ethical responsibilities associated with AI development and deployment.
Compute providers can play an essential role in a regulatory ecosystem via four key capacities.
arXiv Detail & Related papers (2024-03-13T13:08:16Z) - Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z) - Identifying Roles, Requirements and Responsibilities in Trustworthy AI Systems [2.28438857884398]
We consider an AI system from the domain practitioner's perspective and identify key roles that are involved in system deployment.
We consider the differing requirements and responsibilities of each role, and identify a tension between transparency and privacy that needs to be addressed.
arXiv Detail & Related papers (2021-06-15T16:05:10Z) - Decision Rule Elicitation for Domain Adaptation [93.02675868486932]
Human-in-the-loop machine learning is widely used in artificial intelligence (AI) to elicit labels from experts.
In this work, we allow experts to additionally produce decision rules describing their decision-making.
We show that decision rule elicitation improves domain adaptation of the algorithm and helps to propagate experts' knowledge to the AI model (a minimal illustrative sketch follows this list).
arXiv Detail & Related papers (2021-02-23T08:07:22Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are among the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
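The Decision Rule Elicitation entry above describes experts supplying decision rules alongside labels so that their knowledge transfers to the model. A minimal, illustrative sketch of one way to encode such rules (as extra boolean features fed to a standard classifier) follows; the rule predicates, feature names, and the use of scikit-learn are assumptions for this example, not the paper's actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical expert-elicited rules: each maps a raw record to a boolean.
RULES = [
    lambda r: r["amount"] > 10_000,  # "large transactions look suspicious"
    lambda r: r["hour"] < 6,         # "activity at unusual hours"
]

def featurise(records):
    """Concatenate raw features with rule outputs so the model can reuse
    expert knowledge even when the raw feature distribution shifts."""
    raw = np.array([[r["amount"], r["hour"]] for r in records], dtype=float)
    rule_feats = np.array([[float(rule(r)) for rule in RULES] for r in records])
    return np.hstack([raw, rule_feats])

# Toy training data (invented for the example).
records = [
    {"amount": 12_500, "hour": 3},
    {"amount": 40, "hour": 14},
    {"amount": 9_000, "hour": 2},
    {"amount": 300, "hour": 11},
]
labels = [1, 0, 1, 0]

clf = LogisticRegression(max_iter=1000).fit(featurise(records), labels)
print(clf.predict(featurise([{"amount": 15_000, "hour": 4}])))  # -> [1]
```

Because the rules are plain predicates, they can be reapplied unchanged to records from a new domain, which is the intuition behind using rule elicitation to aid domain adaptation.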