Citizen Perspectives on Necessary Safeguards to the Use of AI by Law Enforcement Agencies
- URL: http://arxiv.org/abs/2306.01786v1
- Date: Wed, 31 May 2023 12:26:34 GMT
- Title: Citizen Perspectives on Necessary Safeguards to the Use of AI by Law Enforcement Agencies
- Authors: Yasmine Ezzeddine, Petra Saskia Bayerl and Helen Gibson
- Abstract summary: This study explores the views of 111 citizens towards AI use by police through interviews.
It integrates societal concerns with proposed safeguards against the negative effects of AI use by LEAs in the context of cybercrime and terrorism.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In light of modern technological advances, Artificial Intelligence (AI)
is relied upon to enhance performance, increase efficiency, and maximize gains.
For Law Enforcement Agencies (LEAs), it can prove valuable in optimizing
evidence analysis and establishing proactive prevention measures. Nevertheless,
citizens raise legitimate concerns about privacy invasions, biases,
inequalities, and inaccurate decisions. This study explores the views of 111
citizens towards AI use by police through interviews, and integrates societal
concerns with proposed safeguards against the negative effects of AI use by
LEAs in the context of cybercrime and terrorism.
Related papers
- Strategies to Counter Artificial Intelligence in Law Enforcement: Cross-Country Comparison of Citizens in Greece, Italy and Spain [0.0]
This paper investigates citizens' counter-strategies to the use of Artificial Intelligence by law enforcement agencies (LEAs).
Based on information from three countries (Greece, Italy, and Spain), we demonstrate disparities in the likelihood of ten specific counter-strategies.
arXiv Detail & Related papers (2024-05-30T11:55:10Z)
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have these efforts entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate on and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent AI evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- A Case for AI Safety via Law [0.0]
How to make artificial intelligence (AI) systems safe and aligned with human values is an open research question.
Proposed solutions tend toward relying on human intervention in uncertain situations.
This paper makes a case that effective legal systems are the best way to address AI safety.
arXiv Detail & Related papers (2023-07-31T19:55:27Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers [0.0]
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z)
- AI and Legal Argumentation: Aligning the Autonomous Levels of AI Legal Reasoning [0.0]
Legal argumentation is a vital cornerstone of justice, underpinning an adversarial form of law.
Extensive research has attempted to augment or undertake legal argumentation via computer-based automation, including Artificial Intelligence (AI).
An innovative meta-approach is proposed that applies the Levels of Autonomy (LoA) of AI Legal Reasoning to the maturation of AI and Legal Argumentation (AILA).
arXiv Detail & Related papers (2020-09-11T22:05:40Z)
- How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence [81.04070052740596]
Legal Artificial Intelligence (LegalAI) focuses on applying the technology of artificial intelligence, especially natural language processing, to benefit tasks in the legal domain.
This paper introduces the history, the current state, and the future directions of research in LegalAI.
arXiv Detail & Related papers (2020-04-25T14:45:15Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.