Artificial intelligence in government: Concepts, standards, and a unified framework
- URL: http://arxiv.org/abs/2210.17218v2
- Date: Wed, 25 Oct 2023 18:35:20 GMT
- Title: Artificial intelligence in government: Concepts, standards, and a unified framework
- Authors: Vincent J. Straub, Deborah Morgan, Jonathan Bright and Helen Margetts
- Abstract summary: Recent advances in artificial intelligence (AI) hold the promise of transforming government.
It is critical that new AI systems behave in alignment with the normative expectations of society.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in artificial intelligence (AI), especially in generative
language modelling, hold the promise of transforming government. Given the
advanced capabilities of new AI systems, it is critical that they are embedded
using standard operational procedures and clear epistemic criteria, and that
they behave in alignment with the normative expectations of society. Scholars
in multiple
domains have subsequently begun to conceptualize the different forms that AI
applications may take, highlighting both their potential benefits and pitfalls.
However, the literature remains fragmented, with researchers in social science
disciplines like public administration and political science, and the
fast-moving fields of AI, ML, and robotics, all developing concepts in relative
isolation. Although there are calls to formalize the emerging study of AI in
government, a balanced account that captures the full depth of theoretical
perspectives needed to understand the consequences of embedding AI into a
public sector context is lacking. Here, we unify efforts across social and
technical disciplines by first conducting an integrative literature review to
identify and cluster 69 key terms that frequently co-occur in the
multidisciplinary study of AI. We then build on the results of this
bibliometric analysis to propose three new multifaceted concepts for
understanding and analysing AI-based systems for government (AI-GOV) in a more
unified way: (1) operational fitness, (2) epistemic alignment, and (3)
normative divergence. Finally, we put these concepts to work by using them as
dimensions in a conceptual typology of AI-GOV and connecting each with emerging
AI technical measurement standards to encourage operationalization, foster
cross-disciplinary dialogue, and stimulate debate among those aiming to rethink
government with AI.
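
To make the bibliometric step concrete, the following is a minimal sketch (in Python, assuming NumPy and SciPy are available) of how frequently co-occurring key terms can be clustered from a corpus of abstracts. It is illustrative only, not the authors' pipeline: the abstracts, the term list, and the number of clusters are placeholder assumptions.

    # Illustrative sketch only (not the authors' code): clustering key terms by
    # co-occurrence across abstracts, in the spirit of the paper's bibliometric step.
    # Abstracts, key terms, and the cluster count are placeholder assumptions.
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    abstracts = [
        "machine learning for decision support in public administration",
        "algorithmic accountability and transparency in government services",
        "automation and accountability in public sector service delivery",
    ]
    key_terms = ["machine learning", "public administration", "accountability",
                 "transparency", "automation", "public sector"]

    # Binary term-document incidence matrix via simple substring matching.
    X = np.array([[term in doc.lower() for term in key_terms] for doc in abstracts],
                 dtype=int)

    # Term-term co-occurrence counts: how often two terms appear in the same abstract.
    cooc = X.T @ X

    # Turn counts into distances (more co-occurrence -> closer) and cluster the terms.
    dist = 1.0 / (1.0 + cooc)
    np.fill_diagonal(dist, 0.0)
    labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                      t=3, criterion="maxclust")

    for term, label in sorted(zip(key_terms, labels), key=lambda p: p[1]):
        print(f"cluster {label}: {term}")

In the paper itself this step operates over 69 key terms drawn from the multidisciplinary literature; the sketch only shows the mechanics of moving from co-occurrence counts to term clusters.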
Related papers
- AI Thinking: A framework for rethinking artificial intelligence in practice [2.9805831933488127]
A growing range of disciplines are now involved in studying, developing, and assessing the use of AI in practice.
New, interdisciplinary approaches are needed to bridge competing conceptualisations of AI in practice.
I propose a novel conceptual framework called AI Thinking, which models key decisions and considerations involved in AI use across disciplinary perspectives.
arXiv Detail & Related papers (2024-08-26T04:41:21Z)
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We introduce a systematic review of over 400 papers published between 2019 and January 2024, spanning multiple domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- AI for social science and social science of AI: A Survey [47.5235291525383]
Recent advancements in artificial intelligence have sparked a rethinking of artificial general intelligence possibilities.
The increasing human-like capabilities of AI are also attracting attention in social science research.
arXiv Detail & Related papers (2024-01-22T10:57:09Z)
- Is AI Changing the Rules of Academic Misconduct? An In-depth Look at Students' Perceptions of 'AI-giarism' [0.0]
This study explores students' perceptions of AI-giarism, an emergent form of academic dishonesty involving AI and plagiarism.
The findings portray a complex landscape of understanding, with clear disapproval for direct AI content generation.
The study provides pivotal insights for academia, policy-making, and the broader integration of AI technology in education.
arXiv Detail & Related papers (2023-06-06T02:22:08Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)