Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations
- URL: http://arxiv.org/abs/2401.13605v3
- Date: Mon, 6 May 2024 09:11:37 GMT
- Title: Regulating AI-Based Remote Biometric Identification. Investigating the Public Demand for Bans, Audits, and Public Database Registrations
- Authors: Kimon Kieslich, Marco Lünich
- Abstract summary: The study focuses on the role of trust in AI as well as trust in law enforcement as potential factors that may lead to demands for regulation of AI technology.
We show that perceptions of discrimination lead to a demand for stronger regulation, while trust in AI and trust in law enforcement have the opposite effect, reducing the demand for a ban on RBI systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI is increasingly being used in the public sector, including public security. In this context, the use of AI-powered remote biometric identification (RBI) systems is a much-discussed technology. RBI systems are used to identify criminal activity in public spaces, but are criticised for inheriting biases and violating fundamental human rights. It is therefore important to ensure that such systems are developed in the public interest, which means that any technology deployed for public use needs to be scrutinised. While there is a consensus among business leaders, policymakers and scientists that AI must be developed in an ethical and trustworthy manner, scholars have argued that ethical guidelines do not guarantee ethical AI, but rather prevent stronger regulation of AI. As a possible counterweight, public opinion can have a decisive influence on policymakers in establishing the boundaries and conditions under which AI systems should be used -- if at all. However, we know little about the conditions that lead to regulatory demand for AI systems. In this study, we focus on the role of trust in AI as well as trust in law enforcement as potential factors that may lead to demands for regulation of AI technology. In addition, we explore the mediating effects of discrimination perceptions regarding RBI. We test the effects on four different use cases of RBI, varying the temporal aspect (real-time vs. post hoc analysis) and the purpose of use (prosecution of criminals vs. safeguarding public events), in a survey among German citizens. We found that German citizens do not differentiate between the different modes of application in terms of their demand for RBI regulation. Furthermore, we show that perceptions of discrimination lead to a demand for stronger regulation, while trust in AI and trust in law enforcement have the opposite effect, reducing the demand for a ban on RBI systems.
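The abstract does not specify the estimation procedure, but the design it describes (trust as predictor, perceived discrimination as mediator, regulatory demand as outcome) is a standard mediation model. The following Python sketch illustrates the classic two-regression decomposition on synthetic data; it is not the authors' code, and all variable names, coefficients, and the simulated data are assumptions for demonstration only.

```python
# Illustrative mediation sketch (hypothetical, not from the paper):
# X = trust in AI, M = perceived discrimination, Y = demand for RBI regulation.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Synthetic stand-ins for z-scored survey scales.
trust_ai = rng.normal(size=n)
discrimination = -0.4 * trust_ai + rng.normal(size=n)
regulation_demand = 0.5 * discrimination - 0.2 * trust_ai + rng.normal(size=n)

df = pd.DataFrame({"trust_ai": trust_ai,
                   "discrimination": discrimination,
                   "regulation_demand": regulation_demand})

# Path a: effect of X on the mediator M.
a = smf.ols("discrimination ~ trust_ai", df).fit().params["trust_ai"]

# Path b (M -> Y controlling for X) and direct effect c' (X -> Y).
fit_y = smf.ols("regulation_demand ~ trust_ai + discrimination", df).fit()
b, c_prime = fit_y.params["discrimination"], fit_y.params["trust_ai"]

print(f"indirect effect (a*b): {a * b:.3f}")  # effect of X on Y via M
print(f"direct effect (c'):    {c_prime:.3f}")
```

In practice the indirect effect would be tested with bootstrapped confidence intervals rather than read off point estimates; this sketch only illustrates the decomposition into direct and mediated paths.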
Related papers
- It's complicated. The relationship of algorithmic fairness and non-discrimination regulations in the EU AI Act
The EU has recently passed the AI Act, which mandates specific rules for AI models.
This paper introduces both legal non-discrimination regulations and machine learning-based algorithmic fairness concepts.
arXiv Detail & Related papers (2025-01-22T15:38:09Z) - Responsible Artificial Intelligence (RAI) in U.S. Federal Government : Principles, Policies, and Practices [0.0]
Artificial intelligence (AI) and machine learning (ML) have made tremendous advancements in the past decades.
The rapid growth of AI/ML and its proliferation in numerous private and public sector applications, while successful, has opened new challenges and obstacles for regulators.
With little to no human involvement required for some of the new decision-making AI/ML systems, there is now a pressing need to ensure the responsible use of these systems.
arXiv Detail & Related papers (2025-01-12T16:06:37Z) - Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act) using insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Ensuring Fairness with Transparent Auditing of Quantitative Bias in AI Systems [0.30693357740321775]
AI systems may exhibit biases that lead decision-makers to draw unfair conclusions.
We present a framework for auditing AI fairness involving third-party auditors and AI system providers.
We have created a tool to facilitate systematic examination of AI systems.
arXiv Detail & Related papers (2024-08-24T17:16:50Z) - Human Oversight of Artificial Intelligence and Technical Standardisation [0.0]
Within the global governance of AI, the requirement for human oversight is embodied in several regulatory formats.
The EU legislator is therefore going much further than in the past in "spelling out" the legal requirement for human oversight.
The question of the place of humans in the AI decision-making process should be given particular attention.
arXiv Detail & Related papers (2024-07-02T07:43:46Z) - AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have efforts to regulate it entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z) - Towards Responsible AI in Banking: Addressing Bias for Fair
Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seems to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - Fairness in Agreement With European Values: An Interdisciplinary
Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)