Big Tech influence over AI research revisited: memetic analysis of
attribution of ideas to affiliation
- URL: http://arxiv.org/abs/2312.12881v1
- Date: Wed, 20 Dec 2023 09:45:44 GMT
- Title: Big Tech influence over AI research revisited: memetic analysis of
attribution of ideas to affiliation
- Authors: Stanisław Giziński, Paulina Kaczyńska, Hubert Ruczyński,
  Emilia Wiśnios, Bartosz Pieliński, Przemysław Biecek, Julian
  Sienkiewicz
- Abstract summary: This paper aims to broaden and deepen our understanding of Big Tech's reach and power within AI research.
By employing network and memetic analysis on AI-oriented paper abstracts and their citation network, we are able to grasp a deeper insight into this phenomenon.
Our findings suggest that while Big Tech-affiliated papers are disproportionately more cited in some areas, the most cited papers are those affiliated with both Big Tech and Academia.
- Score: 3.958317527488534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There exists a growing discourse around the domination of Big Tech on the
landscape of artificial intelligence (AI) research, yet our comprehension of
this phenomenon remains cursory. This paper aims to broaden and deepen our
understanding of Big Tech's reach and power within AI research. It highlights
the dominance not merely in terms of sheer publication volume but rather in the
propagation of new ideas, or "memes". Current studies often reduce influence to
the share of affiliations in academic papers, typically drawn from limited
databases such as arXiv or specific academic conferences.
The main goal of this paper is to unravel the specific nuances of such
influence, determining which AI ideas are predominantly driven by Big Tech
entities. By employing network and memetic analysis on AI-oriented paper
abstracts and their citation network, we gain a deeper insight into this
phenomenon. Drawing on two databases, OpenAlex and S2ORC, we perform this
analysis on a much larger scale than previous attempts.
Our findings suggest that while Big Tech-affiliated papers are
disproportionately more cited in some areas, the most cited papers are those
affiliated with both Big Tech and Academia. Among the most contagious memes,
attribution to specific affiliation groups (Big Tech, Academia, mixed
affiliation) appears to be roughly evenly distributed across the three groups.
This suggests that the notion of Big Tech domination over AI research is
oversimplified in the discourse.
Ultimately, this more nuanced understanding of Big Tech's and Academia's
influence could inform a more symbiotic alliance between these stakeholders,
one that better serves the dual goals of societal welfare and the scientific
integrity of AI research.
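As a rough sketch of the kind of attribution analysis the abstract describes (not the authors' actual pipeline), the Python example below groups a handful of hypothetical paper records by affiliation (Big Tech, Academia, mixed), compares mean citation counts per group, and measures how often a candidate meme (here a fixed n-gram) appears in each group's abstracts. All records, field names, and the example meme are illustrative assumptions; real data would come from OpenAlex or S2ORC dumps.

```python
from collections import Counter, defaultdict
from statistics import mean

# Hypothetical paper records; real data would come from OpenAlex / S2ORC.
papers = [
    {"id": "p1", "group": "big_tech", "citations": 120,
     "abstract": "attention mechanism for machine translation"},
    {"id": "p2", "group": "academia", "citations": 35,
     "abstract": "graph attention mechanism for molecular property prediction"},
    {"id": "p3", "group": "mixed", "citations": 540,
     "abstract": "large language model built around an attention mechanism"},
    {"id": "p4", "group": "academia", "citations": 12,
     "abstract": "bayesian optimisation of hyperparameters"},
]

def citation_stats_by_group(papers):
    """Mean citation count per affiliation group."""
    by_group = defaultdict(list)
    for p in papers:
        by_group[p["group"]].append(p["citations"])
    return {g: mean(c) for g, c in by_group.items()}

def meme_prevalence(papers, meme):
    """Share of each group's papers whose abstract contains the meme (an n-gram)."""
    totals, hits = Counter(), Counter()
    for p in papers:
        totals[p["group"]] += 1
        if meme in p["abstract"]:
            hits[p["group"]] += 1
    return {g: hits[g] / totals[g] for g in totals}

if __name__ == "__main__":
    print(citation_stats_by_group(papers))
    print(meme_prevalence(papers, "attention mechanism"))
```

On a real corpus, the memes would be extracted automatically (for example, frequent phrases whose spread is tracked along the citation network) rather than fixed by hand as in this toy sketch.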
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Hype, Sustainability, and the Price of the Bigger-is-Better Paradigm in AI [67.58673784790375]
We argue that the 'bigger is better' AI paradigm is not only fragile scientifically, but comes with undesirable consequences.
First, it is not sustainable, as its compute demands increase faster than model performance, leading to unreasonable economic requirements and a disproportionate environmental footprint.
Second, it implies focusing on certain problems at the expense of others, leaving aside important applications, e.g. health, education, or the climate.
arXiv Detail & Related papers (2024-09-21T14:43:54Z) - Analyzing the Impact of Companies on AI Research Based on Publications [1.450405446885067]
We compare academic- and company-authored AI publications published in the last decade.
We find that the citation count an individual publication receives is significantly higher when it is (co-)authored by a company.
arXiv Detail & Related papers (2023-10-31T13:27:04Z) - Identifying and Mitigating the Security Risks of Generative AI [179.2384121957896]
This paper reports the findings of a workshop held at Google on the dual-use dilemma posed by GenAI.
GenAI can be used just as well by attackers to generate new attacks and increase the velocity and efficacy of existing attacks.
We discuss short-term and long-term goals for the community on this topic.
arXiv Detail & Related papers (2023-08-28T18:51:09Z) - Selected Trends in Artificial Intelligence for Space Applications [69.3474006357492]
This chapter focuses on differentiable intelligence and on-board machine learning.
We discuss a few selected projects originating from the European Space Agency's (ESA) Advanced Concepts Team (ACT).
arXiv Detail & Related papers (2022-12-10T07:49:50Z) - The History of AI Rights Research [0.0]
This report documents the history of research on AI rights and other moral consideration of artificial entities.
It highlights key intellectual influences on this literature as well as research and academic discussion addressing the topic more directly.
arXiv Detail & Related papers (2022-07-06T17:52:27Z) - Characterising Research Areas in the field of AI [68.8204255655161]
We identified the main conceptual themes by performing clustering analysis on the co-occurrence network of topics.
The results highlight the growing academic interest in research themes like deep learning, machine learning, and internet of things.
arXiv Detail & Related papers (2022-05-26T16:30:30Z) - Threat of Adversarial Attacks on Deep Learning in Computer Vision:
Survey II [86.51135909513047]
Deep Learning is vulnerable to adversarial attacks that can manipulate its predictions.
This article reviews the contributions made by the computer vision community in adversarial attacks on deep learning.
It provides definitions of technical terminologies for non-experts in this domain.
arXiv Detail & Related papers (2021-08-01T08:54:47Z) - The De-democratization of AI: Deep Learning and the Compute Divide in
Artificial Intelligence Research [0.2855485723554975]
Large technology firms and elite universities have increased participation in major AI conferences since deep learning's unanticipated rise in 2012.
The effect is concentrated among elite universities, which are ranked 1-50 in the QS World University Rankings.
This increased presence of firms and elite universities in AI research has crowded out mid-tier (QS ranked 201-300) and lower-tier (QS ranked 301-500) universities.
arXiv Detail & Related papers (2020-10-22T15:11:14Z) - The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on
academic integrity [3.198144010381572]
We show how Big Tech can actively distort the academic landscape to suit its needs.
By comparing the well-studied actions of another industry (Big Tobacco) to the current actions of Big Tech we see similar strategies employed by both industries.
We examine the funding of academic research as a tool used by Big Tech to put forward a socially responsible public image.
arXiv Detail & Related papers (2020-09-28T23:00:49Z) - Machine Identification of High Impact Research through Text and Image
Analysis [0.4737991126491218]
We present a system to automatically separate papers with a high likelihood of gaining citations from those with a low likelihood.
Our system uses both a visual classifier, useful for surmising a document's overall appearance, and a text classifier, for making content-informed decisions.
arXiv Detail & Related papers (2020-05-20T19:12:24Z)