The Privatization of AI Research(-ers): Causes and Potential
Consequences -- From university-industry interaction to public research
brain-drain?
- URL: http://arxiv.org/abs/2102.01648v2
- Date: Mon, 15 Feb 2021 21:30:23 GMT
- Title: The Privatization of AI Research(-ers): Causes and Potential
Consequences -- From university-industry interaction to public research
brain-drain?
- Authors: Roman Jurowetzki, Daniel Hain, Juan Mateos-Garcia, Konstantinos
Stathoulopoulos
- Abstract summary: The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D.
This phenomenon is reflected in the perception of a brain drain of researchers from academia to industry.
We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The private sector is playing an increasingly important role in basic
Artificial Intelligence (AI) R&D. This phenomenon, which is reflected in the
perception of a brain drain of researchers from academia to industry, is
raising concerns about a privatisation of AI research which could constrain its
societal benefits. We contribute to the evidence base by quantifying transition
flows between industry and academia and studying their drivers and potential
consequences. We find a growing net flow of researchers from academia to
industry, particularly from elite institutions into technology companies such
as Google, Microsoft and Facebook. Our survival regression analysis reveals
that researchers working in the field of deep learning as well as those with
higher average impact are more likely to transition into industry. A
difference-in-differences analysis of the effect of switching into industry on
a researcher's influence proxied by citations indicates that an initial
increase in impact declines as researchers spend more time in industry. This
points at a privatisation of AI knowledge compared to a counterfactual where
those high-impact researchers had remained in academia. Our findings highlight
the importance of strengthening the public AI research sphere in order to
ensure that the future of this powerful technology is not dominated by private
interests.
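The two methods named above, a survival regression for the propensity to move into industry and a difference-in-differences comparison of citation impact before and after switching, are standard econometric tools. The following is a minimal sketch, not the authors' pipeline: it assumes hypothetical column names and toy data, uses a Cox proportional hazards model from lifelines as the survival regression, and a simple interaction OLS from statsmodels as the difference-in-differences estimate.

```python
# Hedged sketch, not the paper's code: column names and data are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter          # Cox proportional hazards (survival regression)
import statsmodels.formula.api as smf      # OLS with interaction term (difference-in-differences)

# --- Survival regression: time until a researcher switches to industry ---
# years_in_academia   : time from first publication to the switch, or to censoring
# switched            : 1 if the researcher moved to industry, 0 if still in academia
# deep_learning       : 1 if the researcher mainly publishes in deep learning
# avg_citation_impact : mean citations per paper (impact proxy)
careers = pd.DataFrame({
    "years_in_academia":   [3, 7, 5, 10, 2, 8],
    "switched":            [1, 0, 1, 0, 1, 0],
    "deep_learning":       [1, 0, 1, 0, 1, 1],
    "avg_citation_impact": [12.0, 9.5, 8.3, 3.1, 15.2, 11.7],
})
cox = CoxPHFitter(penalizer=0.1)  # small penalty stabilises the fit on this tiny toy sample
cox.fit(careers, duration_col="years_in_academia", event_col="switched")
cox.print_summary()  # hazard ratios > 1 indicate a higher propensity to switch

# --- Difference-in-differences: citation impact around the switch ---
# citations : yearly citations to a researcher's work
# treated   : 1 for researchers who switched to industry, 0 for matched stayers
# post      : 1 for observation years after the (matched) switching year
panel = pd.DataFrame({
    "citations": [20, 35, 18, 22, 25, 33, 19, 21],
    "treated":   [1, 1, 0, 0, 1, 1, 0, 0],
    "post":      [0, 1, 0, 1, 0, 1, 0, 1],
})
did = smf.ols("citations ~ treated * post", data=panel).fit()
print(did.params["treated:post"])  # the difference-in-differences estimate of the switching effect
```

In this hypothetical setup, hazard ratios above 1 for deep_learning and avg_citation_impact would correspond to the paper's finding that deep-learning and high-impact researchers are more likely to switch, and a treated:post coefficient that shrinks or turns negative in later post-switch years would correspond to the reported decline in impact.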
Related papers
- Generative artificial intelligence usage by researchers at work: Effects of gender, career stage, type of workplace, and perceived barriers [0.0]
The integration of generative artificial intelligence technology into research environments has become increasingly common in recent years.
This paper seeks to explore the factors underlying the frequency of use of generative AI amongst researchers in their professional environments.
arXiv Detail & Related papers (2024-08-31T22:00:21Z)
- The Narrow Depth and Breadth of Corporate Responsible AI Research [3.364518262921329]
We show that the majority of AI firms have limited or no engagement in this critical subfield of AI.
Leading AI firms exhibit significantly lower output in responsible AI research compared to their conventional AI research.
Our results highlight the urgent need for industry to publicly engage in responsible AI research.
arXiv Detail & Related papers (2024-05-20T17:26:43Z)
- On the Opportunities of Green Computing: A Survey [80.21955522431168]
Artificial Intelligence (AI) has achieved significant advancements in technology and research over several decades of development.
The need for high computing power brings higher carbon emissions and undermines research fairness.
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
arXiv Detail & Related papers (2023-11-01T11:16:41Z)
- Analyzing the Impact of Companies on AI Research Based on Publications [1.450405446885067]
We compare academic- and company-authored AI publications published in the last decade.
We find that the citation count an individual publication receives is significantly higher when it is (co-)authored by a company.
arXiv Detail & Related papers (2023-10-31T13:27:04Z)
- Human-Centered Responsible Artificial Intelligence: Current & Future Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work aims to develop AI that benefits humanity, is grounded in human rights and ethics, and reduces the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z)
- Characterising Research Areas in the field of AI [68.8204255655161]
We identified the main conceptual themes by performing clustering analysis on the co-occurrence network of topics.
The results highlight the growing academic interest in research themes like deep learning, machine learning, and internet of things.
arXiv Detail & Related papers (2022-05-26T16:30:30Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Artificial Intelligence for IT Operations (AIOPS) Workshop White Paper [50.25428141435537]
Artificial Intelligence for IT Operations (AIOps) is an emerging interdisciplinary field arising in the intersection between machine learning, big data, streaming analytics, and the management of IT operations.
The main aim of the AIOps workshop is to bring together researchers from both academia and industry to present their experiences, results, and work in progress in this field.
arXiv Detail & Related papers (2021-01-15T10:43:10Z)
- Nose to Glass: Looking In to Get Beyond [0.0]
An increasing amount of research has been conducted under the banner of enhancing responsible artificial intelligence.
This research aims to address, alleviate, and eventually mitigate the harms brought on by the rollout of algorithmic systems.
However, implementation of such tools remains low.
arXiv Detail & Related papers (2020-11-26T06:51:45Z)
- Learnings from Frontier Development Lab and SpaceML -- AI Accelerators for NASA and ESA [57.06643156253045]
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
- A narrowing of AI research? [0.0]
We study the evolution of the thematic diversity of AI research in academia and the private sector.
We measure the influence of private companies in AI research through the citations they receive and their collaborations with other institutions.
arXiv Detail & Related papers (2020-09-22T08:23:56Z)