The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on
academic integrity
- URL: http://arxiv.org/abs/2009.13676v4
- Date: Tue, 27 Apr 2021 12:29:44 GMT
- Title: The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on
academic integrity
- Authors: Mohamed Abdalla and Moustafa Abdalla
- Abstract summary: We show how Big Tech can actively distort the academic landscape to suit its needs.
By comparing the well-studied actions of another industry (Big Tobacco) to the current actions of Big Tech we see similar strategies employed by both industries.
We examine the funding of academic research as a tool used by Big Tech to put forward a socially responsible public image.
- Score: 3.198144010381572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As governmental bodies rely on academics' expert advice to shape policy
regarding Artificial Intelligence, it is important that these academics not
have conflicts of interest that may cloud or bias their judgement. Our work
explores how Big Tech can actively distort the academic landscape to suit its
needs. By comparing the well-studied actions of another industry (Big Tobacco)
to the current actions of Big Tech we see similar strategies employed by both
industries. These strategies enable either industry to sway and influence
academic and public discourse. We examine the funding of academic research as a
tool used by Big Tech to put forward a socially responsible public image,
influence events hosted by and decisions made by funded universities, influence
the research questions and plans of individual scientists, and discover
receptive academics who can be leveraged. We demonstrate how Big Tech can
affect academia from the institutional level down to individual researchers.
Thus, we believe that it is vital, particularly for universities and other
institutions of higher learning, to discuss the appropriateness and the
tradeoffs of accepting funding from Big Tech, and what limitations or
conditions should be put in place.
Related papers
- Transforming Science with Large Language Models: A Survey on AI-assisted Scientific Discovery, Experimentation, Content Generation, and Evaluation [58.064940977804596]
A plethora of new AI models and tools has been proposed, promising to empower researchers and academics worldwide to conduct their research more effectively and efficiently.
Ethical concerns regarding shortcomings of these tools and potential for misuse take a particularly prominent place in our discussion.
arXiv Detail & Related papers (2025-02-07T18:26:45Z)
- Auto-assessment of assessment: A conceptual framework towards fulfilling the policy gaps in academic assessment practices [4.770873744131964]
We surveyed 117 academics from three countries (UK, UAE, and Iraq).
We identified that most academics retain positive opinions regarding AI in education.
For the first time, we propose a novel AI framework for autonomously evaluating students' work.
arXiv Detail & Related papers (2024-10-28T15:22:37Z)
- Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders [59.17981603969404]
The responsible AI (RAI) community has introduced numerous processes and artifacts to facilitate transparency and support the governance of AI systems.
We conduct semi-structured interviews with 19 government, legal, and civil society stakeholders who inform policy and advocacy around responsible AI efforts.
We organize these beliefs into four barriers that help explain how RAI artifacts may (inadvertently) reconfigure power relations across civil society, government, and industry.
arXiv Detail & Related papers (2024-08-22T00:14:37Z)
- Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project behind this work is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z)
- Big Tech influence over AI research revisited: memetic analysis of attribution of ideas to affiliation [0.0]
This paper aims to broaden and deepen our understanding of Big Tech's reach and power within AI research.
It highlights Big Tech's dominance not merely in sheer publication volume but in the propagation of new ideas, or memes.
arXiv Detail & Related papers (2023-12-20T09:45:44Z)
- Academic competitions [61.592427413342975]
This chapter provides a survey of academic challenges in the context of machine learning and related fields.
We review the most influential competitions in the last few years and analyze challenges per area of knowledge.
The aims of scientific challenges, their goals, major achievements and expectations for the next few years are reviewed.
arXiv Detail & Related papers (2023-12-01T01:01:04Z)
- On the Opportunities of Green Computing: A Survey [80.21955522431168]
Artificial Intelligence (AI) has achieved significant advancements in technology and research over several decades of development.
The need for high computing power brings higher carbon emissions and undermines research fairness.
To tackle the challenges of computing resources and environmental impact of AI, Green Computing has become a hot research topic.
arXiv Detail & Related papers (2023-11-01T11:16:41Z)
- Human-Centered Responsible Artificial Intelligence: Current & Future Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work is aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z)
- The History of AI Rights Research [0.0]
This report documents the history of research on AI rights and other forms of moral consideration of artificial entities.
It highlights key intellectual influences on this literature as well as research and academic discussion addressing the topic more directly.
arXiv Detail & Related papers (2022-07-06T17:52:27Z)
- Big Tech Companies Impact on Research at the Faculty of Information Technology and Electrical Engineering [15.068124449703435]
As technology grows, so do the ethical challenges associated with it.
Big tech companies have influence over AI ethics, as many influential ethical-AI researchers have roots in Big Tech or its associated labs.
arXiv Detail & Related papers (2022-04-10T12:28:08Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- The Privatization of AI Research(-ers): Causes and Potential Consequences -- From university-industry interaction to public research brain-drain? [0.0]
The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D.
This phenomenon is reflected in the perception of a brain drain of researchers from academia to industry.
We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook.
arXiv Detail & Related papers (2021-02-02T18:02:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.