Real-World Gaps in AI Governance Research
- URL: http://arxiv.org/abs/2505.00174v1
- Date: Wed, 30 Apr 2025 20:44:42 GMT
- Title: Real-World Gaps in AI Governance Research
- Authors: Ilan Strauss, Isobel Moure, Tim O'Reilly, Sruly Rosenblat
- Abstract summary: Drawing on 1,178 safety and reliability papers from 9,439 generative AI papers (January 2020 - March 2025), we compare research outputs of leading AI companies and universities. We find that corporate AI research increasingly concentrates on pre-deployment areas -- model alignment and testing & evaluation. Significant research gaps exist in high-risk deployment domains, including healthcare, finance, misinformation, persuasive and addictive features, hallucinations, and copyright.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Drawing on 1,178 safety and reliability papers from 9,439 generative AI papers (January 2020 - March 2025), we compare research outputs of leading AI companies (Anthropic, Google DeepMind, Meta, Microsoft, and OpenAI) and AI universities (CMU, MIT, NYU, Stanford, UC Berkeley, and University of Washington). We find that corporate AI research increasingly concentrates on pre-deployment areas -- model alignment and testing & evaluation -- while attention to deployment-stage issues such as model bias has waned. Significant research gaps exist in high-risk deployment domains, including healthcare, finance, misinformation, persuasive and addictive features, hallucinations, and copyright. Without improved observability into deployed AI, growing corporate concentration could deepen knowledge deficits. We recommend expanding external researcher access to deployment data and systematic observability of in-market AI behaviors.
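The comparison described in the abstract reduces to a simple bibliometric computation: label each paper with a safety topic, split the papers by institution type, and track topic shares over time. Below is a minimal sketch of that computation; the records, topic labels, and the corporate/academic split are invented for illustration and are not the authors' actual pipeline or data.

```python
# Minimal sketch of the paper's style of bibliometric comparison.
# All records and topic labels below are hypothetical illustrations.
from collections import Counter

CORPORATE = {"Anthropic", "Google DeepMind", "Meta", "Microsoft", "OpenAI"}

# Hypothetical records: (year, affiliation, safety topic) for each paper.
papers = [
    (2021, "OpenAI", "alignment"),
    (2021, "MIT", "bias"),
    (2024, "Anthropic", "testing & evaluation"),
    (2024, "Stanford", "healthcare"),
    (2024, "Google DeepMind", "alignment"),
]

def topic_shares(records):
    """Share of each safety topic within a set of paper records."""
    counts = Counter(topic for _, _, topic in records)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

corporate = [p for p in papers if p[1] in CORPORATE]
academic = [p for p in papers if p[1] not in CORPORATE]
print("corporate:", topic_shares(corporate))
print("academic:", topic_shares(academic))
```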
Related papers
- Artificial Intelligence Index Report 2025
New in this year's report are in-depth analyses of the evolving landscape of AI hardware and novel estimates of inference costs. We also introduce fresh data on corporate adoption of responsible AI practices. The AI Index has been cited in major media outlets such as The New York Times, Bloomberg, and The Guardian.
arXiv Detail & Related papers (2025-04-08T02:01:37Z)
- Mapping Technical Safety Research at AI Companies: A literature review and incentives analysis
The report analyzes the technical research into safe AI development being conducted by three leading AI companies: Anthropic, Google DeepMind, and OpenAI.
We define safe AI development as developing AI systems that are unlikely to pose large-scale misuse or accident risks.
arXiv Detail & Related papers (2024-09-12T09:34:55Z)
- The Ethics of Advanced AI Assistants
This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants.
We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user.
We consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants.
arXiv Detail & Related papers (2024-04-24T23:18:46Z)
- Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z)
- Fairness in AI and Its Long-Term Implications on Society
We take a closer look at AI fairness and analyze how a lack of AI fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- Artificial Intelligence and Life in 2030: The One Hundred Year Study on Artificial Intelligence
The report examines eight domains of typical urban settings on which AI is likely to have impact over the coming years.
It aims to provide the general public with a scientifically and technologically accurate portrayal of the current state of AI.
The charge for this report was given to the panel by the AI100 Standing Committee, chaired by Barbara Grosz of Harvard University.
arXiv Detail & Related papers (2022-10-31T18:35:36Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Trustworthy AI: A Computational Perspective
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Ethics and Governance of Artificial Intelligence: Evidence from a Survey of Machine Learning Researchers
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI.
We conducted a survey of those who published in the top AI/ML conferences.
We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations.
arXiv Detail & Related papers (2021-05-05T15:23:12Z)
- The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research
Large technology firms and elite universities have increased participation in major AI conferences since deep learning's unanticipated rise in 2012.
The effect is concentrated among elite universities, defined as those ranked 1-50 in the QS World University Rankings.
This increased presence of firms and elite universities in AI research has crowded out mid-tier (QS ranked 201-300) and lower-tier (QS ranked 301-500) universities.
arXiv Detail & Related papers (2020-10-22T15:11:14Z)
- A narrowing of AI research?
We study the evolution of the thematic diversity of AI research in academia and the private sector.
We measure the influence of private companies in AI research through the citations they receive and their collaborations with other institutions.
arXiv Detail & Related papers (2020-09-22T08:23:56Z)
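The last entry does not say how thematic diversity is measured; bibliometric studies commonly use entropy-style indices, so the following is a minimal sketch of one plausible metric (Shannon entropy over topic counts) with made-up counts -- the metric choice and the numbers are assumptions, not the paper's method.

```python
# One common way to quantify thematic diversity: Shannon entropy over the
# distribution of papers across research themes. The metric choice and the
# sample counts here are illustrative assumptions, not the paper's method.
import math

def shannon_entropy(counts):
    """Entropy (in bits) of a theme distribution; higher = more diverse."""
    total = sum(counts.values())
    probs = [n / total for n in counts.values() if n > 0]
    return -sum(p * math.log2(p) for p in probs)

themes_2012 = {"vision": 30, "nlp": 25, "robotics": 20, "planning": 15, "other": 10}
themes_2020 = {"deep learning": 70, "nlp": 15, "vision": 10, "other": 5}

print(f"2012 diversity: {shannon_entropy(themes_2012):.2f} bits")
print(f"2020 diversity: {shannon_entropy(themes_2020):.2f} bits")  # lower: a narrower field
```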