Advancing Artificial Intelligence and Machine Learning in the U.S.
Government Through Improved Public Competitions
- URL: http://arxiv.org/abs/2112.01275v1
- Date: Mon, 29 Nov 2021 16:35:38 GMT
- Title: Advancing Artificial Intelligence and Machine Learning in the U.S.
Government Through Improved Public Competitions
- Authors: Ezekiel J. Maier
- Abstract summary: In the last two years, the U.S. government has emphasized the importance of accelerating artificial intelligence (AI) and machine learning (ML) within the government and across the nation.
The U.S. government can benefit from public artificial intelligence and machine learning challenges through the development of novel algorithms and participation in experiential training.
Herein we identify common issues and recommend approaches to increase the effectiveness of challenges.
- Score: 2.741266294612776
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the last two years, the U.S. government has emphasized the importance of
accelerating artificial intelligence (AI) and machine learning (ML) within the
government and across the nation. In particular, the National Artificial
Intelligence Initiative Act of 2020, which became law on January 1, 2021,
provides for a coordinated program across the entire federal government to
accelerate AI research and application. The U.S. government can benefit from
public artificial intelligence and machine learning challenges through the
development of novel algorithms and participation in experiential training.
Although the public, private, and non-profit sectors have a history of
leveraging crowdsourcing initiatives to generate novel solutions to difficult
problems and engage stakeholders, interest in public competitions has waned in
recent years as a result of at least three major factors: (1) a lack of
high-quality, high-impact data; (2) a narrow engagement focus on specialized
groups; and (3) insufficient operationalization of challenge results. Herein we
identify common issues and recommend approaches to increase the effectiveness
of challenges. To address these barriers, enabling the use of public
competitions for accelerating AI and ML practice, the U.S. government must
leverage methods that protect sensitive data while enabling modelling, enable
easier participation, empower deployment of validated models, and incentivize
engagement from broad sections of the population.
Related papers
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have efforts to regulate it entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z) - Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
Particip-AI is a framework to gather current and future AI use cases and their harms and benefits from the non-expert public.
We gather responses from 295 demographically diverse participants.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Computing Power and the Governance of Artificial Intelligence [51.967584623262674]
Governments and companies have started to leverage compute as a means to govern AI.
Compute-based policies and technologies have the potential to assist in these areas, but there is significant variation in their readiness for implementation.
Naive or poorly scoped approaches to compute governance carry significant risks in areas like privacy, economic impacts, and centralization of power.
arXiv Detail & Related papers (2024-02-13T21:10:21Z) - Ten Hard Problems in Artificial Intelligence We Must Get Right [72.99597122935903]
We explore the AI2050 "hard problems" that block the promise of AI and cause AI risks.
For each problem, we outline the area, identify significant recent work, and suggest ways forward.
arXiv Detail & Related papers (2024-02-06T23:16:41Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) in AI are being implemented.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Empowering Local Communities Using Artificial Intelligence [70.17085406202368]
Exploring the impact of AI on society from a people-centered perspective has become an important topic.
Previous works in citizen science have identified methods of using AI to engage the public in research.
This article discusses the challenges of applying AI in Community Citizen Science.
arXiv Detail & Related papers (2021-10-05T12:51:11Z) - Ethics and Governance of Artificial Intelligence: Evidence from a Survey
of Machine Learning Researchers [0.0]
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z) - Bias in Data-driven AI Systems -- An Introductory Survey [37.34717604783343]
This survey focuses on data-driven AI, as a large part of AI is powered nowadays by (big) data and powerful Machine Learning (ML) algorithms.
Unless otherwise specified, we use the general term bias to describe problems related to the gathering or processing of data that might result in prejudiced decisions on the basis of demographic features like race, sex, etc.
arXiv Detail & Related papers (2020-01-14T09:39:09Z) - U.S. Public Opinion on the Governance of Artificial Intelligence [0.0]
Existing studies find that the public's trust in institutions can play a major role in shaping the regulation of emerging technologies.
We examined Americans' perceptions of 13 AI governance challenges and their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI.
arXiv Detail & Related papers (2019-12-30T07:38:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.