Developing a Series of AI Challenges for the United States Department of
the Air Force
- URL: http://arxiv.org/abs/2207.07033v1
- Date: Thu, 14 Jul 2022 16:13:40 GMT
- Title: Developing a Series of AI Challenges for the United States Department of
the Air Force
- Authors: Vijay Gadepally, Gregory Angelides, Andrei Barbu, Andrew Bowne, Laura
J. Brattain, Tamara Broderick, Armando Cabrera, Glenn Carl, Ronisha Carter,
Miriam Cha, Emilie Cowen, Jesse Cummings, Bill Freeman, James Glass, Sam
Goldberg, Mark Hamilton, Thomas Heldt, Kuan Wei Huang, Phillip Isola, Boris
Katz, Jamie Koerner, Yen-Chen Lin, David Mayo, Kyle McAlpin, Taylor Perron,
Jean Piou, Hrishikesh M. Rao, Hayley Reynolds, Kaira Samuel, Siddharth Samsi,
Morgan Schmidt, Leslie Shing, Olga Simek, Brandon Swenson, Vivienne Sze,
Jonathan Taylor, Paul Tylkin, Mark Veillette, Matthew L Weiss, Allan
Wollaber, Sophia Yuditskaya, and Jeremy Kepner
- Abstract summary: The DAF-MIT AI Accelerator is an initiative between the DAF and MIT to bridge the gap between AI researchers and DAF mission requirements.
Several projects supported by the DAF-MIT AI Accelerator are developing public challenge problems that address numerous Federal AI research priorities.
These challenges target priorities by making large, AI-ready datasets publicly available, incentivizing open-source solutions, and creating a demand signal for dual use technologies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Through a series of federal initiatives and orders, the U.S. Government has
been making a concerted effort to ensure American leadership in AI. These broad
strategy documents have influenced organizations such as the United States
Department of the Air Force (DAF). The DAF-MIT AI Accelerator is an initiative
between the DAF and MIT to bridge the gap between AI researchers and DAF
mission requirements. Several projects supported by the DAF-MIT AI Accelerator
are developing public challenge problems that address numerous Federal AI
research priorities. These challenges target priorities by making large,
AI-ready datasets publicly available, incentivizing open-source solutions, and
creating a demand signal for dual use technologies that can stimulate further
research. In this article, we describe these public challenges being developed
and how their application contributes to scientific advances.
Related papers
- Advancing AI Challenges for the United States Department of the Air Force [91.02589169578908]
The DAF-MIT AI Accelerator is a collaboration between the United States Department of the Air Force (DAF) and the Massachusetts Institute of Technology (MIT). This article supplements our previous publication, which introduced AI Accelerator challenges. We provide an update on how ongoing and new challenges have successfully contributed to AI research and applications of AI technologies.
arXiv Detail & Related papers (2025-10-31T21:34:57Z)
- Domestic frontier AI regulation, an IAEA for AI, an NPT for AI, and a US-led Allied Public-Private Partnership for AI: Four institutions for governing and developing frontier AI [0.0]
I explore four institutions for governing and developing frontier AI. Domestic regimes could be harmonized and monitored through an IAEA for AI. This could be backed up by a Secure Chips Agreement - a Non-Proliferation Treaty (NPT) for AI. Frontier training runs could be carried out by a megaproject between the USA and its allies.
arXiv Detail & Related papers (2025-07-08T20:32:28Z)
- The Singapore Consensus on Global AI Safety Research Priorities [128.58674892183657]
"2025 Singapore Conference on AI (SCAI): International Scientific Exchange on AI Safety" aimed to support research in this space. The report builds on the International AI Safety Report chaired by Yoshua Bengio and backed by 33 governments. It organises AI safety research domains into three types: challenges with creating trustworthy AI systems (Development), challenges with evaluating their risks (Assessment), and challenges with monitoring and intervening after deployment (Control).
arXiv Detail & Related papers (2025-06-25T17:59:50Z)
- The California Report on Frontier AI Policy [110.35302787349856]
Continued progress in frontier AI carries the potential for profound advances in scientific discovery, economic productivity, and broader social well-being. As the epicenter of global AI innovation, California has a unique opportunity to continue supporting developments in frontier AI. The report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI.
arXiv Detail & Related papers (2025-06-17T23:33:21Z)
- One Bad NOFO? AI Governance in Federal Grantmaking [0.2179228399562846]
U.S. agencies have an overlooked AI governance role when directing billions of dollars in federal financial assistance. As discretionary grantmakers, agencies guide and restrict what grant winners do -- a hidden lever for AI governance. We use a novel dataset of over 40,000 non-defense federal grant notices of funding opportunity (NOFOs) posted to the U.S. federal grants website between 2009 and 2024.
arXiv Detail & Related papers (2025-05-13T00:08:22Z)
- Responsible Development of Offensive AI [0.0]
This study aims to establish priorities that balance societal benefits against risks.
The two forms of offensive AI evaluated in this study are vulnerability detection agents, which solve Capture-The-Flag challenges, and AI-powered malware.
arXiv Detail & Related papers (2025-04-03T15:37:38Z)
- Responsible Artificial Intelligence (RAI) in U.S. Federal Government: Principles, Policies, and Practices [0.0]
Artificial intelligence (AI) and machine learning (ML) have made tremendous advancements in the past decades.
The rapid growth of AI/ML and its proliferation in numerous private and public sector applications, while successful, has opened new challenges and obstacles for regulators.
With little to no human involvement required for some of the new decision-making AI/ML systems, there is now a pressing need to ensure the responsible use of these systems.
arXiv Detail & Related papers (2025-01-12T16:06:37Z)
- Strategic AI Governance: Insights from Leading Nations [0.0]
Artificial Intelligence (AI) has the potential to revolutionize various sectors, yet its adoption is often hindered by concerns about data privacy, security, and the understanding of AI capabilities.
This paper synthesizes AI governance approaches, strategic themes, and enablers and challenges for AI adoption by reviewing national AI strategies from leading nations.
arXiv Detail & Related papers (2024-09-16T06:00:42Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Problem Solving Through Human-AI Preference-Based Cooperation [74.39233146428492]
We propose HAICo2, a novel human-AI co-construction framework.
We take first steps towards a formalization of HAICo2 and discuss the difficult open research problems that it faces.
arXiv Detail & Related papers (2024-08-14T11:06:57Z)
- AI Research is not Magic, it has to be Reproducible and Responsible: Challenges in the AI field from the Perspective of its PhD Students [1.1922075410173798]
We surveyed 28 AI doctoral candidates from 13 European countries.
The identified challenges concern the findability and quality of AI resources such as datasets, models, and experiments.
There is a need for immediate adoption of responsible and reproducible AI research practices.
arXiv Detail & Related papers (2024-08-13T12:19:02Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- AI Procurement Checklists: Revisiting Implementation in the Age of AI Governance [18.290959557311552]
Public sector use of AI has been on the rise for the past decade, but only recently have these efforts entered the cultural zeitgeist.
While simple to articulate, promoting ethical and effective rollouts of AI systems in government is a notoriously elusive task.
arXiv Detail & Related papers (2024-04-23T01:45:38Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- Advancing Artificial Intelligence and Machine Learning in the U.S. Government Through Improved Public Competitions [2.741266294612776]
In the last two years, the U.S. government has emphasized the importance of accelerating artificial intelligence (AI) and machine learning (ML).
The U.S. government can benefit from public artificial intelligence and machine learning challenges through the development of novel algorithms and participation in experiential training.
Herein we identify common issues and recommend approaches to increase the effectiveness of challenges.
arXiv Detail & Related papers (2021-11-29T16:35:38Z)
- Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers [0.0]
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z)
- Learnings from Frontier Development Lab and SpaceML -- AI Accelerators for NASA and ESA [57.06643156253045]
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
- Artificial Intelligence for UAV-enabled Wireless Networks: A Survey [72.10851256475742]
Unmanned aerial vehicles (UAVs) are considered one of the promising technologies for next-generation wireless communication networks.
Artificial intelligence (AI) is advancing rapidly and has seen great success across many domains.
We provide a comprehensive overview of some potential applications of AI in UAV-based networks.
arXiv Detail & Related papers (2020-09-24T07:11:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.