Industrial Limitations on Academic Freedom in Computer Science
- URL: http://arxiv.org/abs/2206.08067v1
- Date: Thu, 16 Jun 2022 10:30:55 GMT
- Title: Industrial Limitations on Academic Freedom in Computer Science
- Authors: Reuben Kirkham
- Abstract summary: A field that limits academic freedom presents the risk that the results of the work conducted within it cannot always be relied upon.
This paper discusses the range of protections that could be provided.
- Score: 6.980076213134384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of computer science is perhaps uniquely connected with industry.
For example, our main publication outlets (i.e. conferences) are regularly
sponsored by large technology companies, and much of our research funding is
either directly or indirectly provided by industry. In turn, this places
potential limitations on academic freedom, which is a profound ethical concern,
yet curiously is not directly addressed within existing ethical codes. A field
that limits academic freedom presents the risk that the results of the work
conducted within it cannot always be relied upon. In the context of a field
that is perhaps unique in both its connection to industry and impact on
society, special measures are needed to address this problem. This paper
discusses the range of protections that could be provided.
Related papers
- Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z)
- The Paradox of Industrial Involvement in Engineering Higher Education [0.0]
We argue that the curriculum within engineering education often lacks a deep understanding of social realities.
We examine the unusually close connection with industry that has driven engineering higher education for several decades.
We highlight the need for engineering schools to hold a more critical viewpoint.
arXiv Detail & Related papers (2024-02-26T17:35:23Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Report of the 1st Workshop on Generative AI and Law [78.62063815165968]
This report presents the takeaways of the inaugural Workshop on Generative AI and Law (GenLaw)
A cross-disciplinary group of practitioners and scholars from computer science and law convened to discuss the technical, doctrinal, and policy challenges presented by law for Generative AI.
arXiv Detail & Related papers (2023-11-11T04:13:37Z)
- Regulation and NLP (RegNLP): Taming Large Language Models [51.41095330188972]
We argue that NLP research can benefit from proximity to regulatory studies and adjacent fields.
We advocate for the development of a new multidisciplinary research space on regulation and NLP.
arXiv Detail & Related papers (2023-10-09T09:22:40Z)
- Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z)
- Applying Standards to Advance Upstream & Downstream Ethics in Large Language Models [0.0]
This paper explores how AI-owners can develop safeguards for AI-generated content.
It draws from established codes of conduct and ethical standards in other content-creation industries.
arXiv Detail & Related papers (2023-06-06T08:47:42Z)
- The Technological Emergence of AutoML: A Survey of Performant Software and Applications in the Context of Industry [72.10607978091492]
Automated/Autonomous Machine Learning (AutoML/AutonoML) is a relatively young field.
This review makes two primary contributions to knowledge around this topic.
It provides the most up-to-date and comprehensive survey of existing AutoML tools, both open-source and commercial.
arXiv Detail & Related papers (2022-11-08T10:42:08Z)
- The Privatization of AI Research(-ers): Causes and Potential Consequences -- From university-industry interaction to public research brain-drain? [0.0]
The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D.
This phenomenon is reflected in the perception of a brain drain of researchers from academia to industry.
We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook.
arXiv Detail & Related papers (2021-02-02T18:02:41Z)
- Nose to Glass: Looking In to Get Beyond [0.0]
An increasing amount of research has been conducted under the banner of enhancing responsible artificial intelligence.
This research aims at addressing, alleviating, and eventually mitigating the harms brought on by the rollout of algorithmic systems.
However, implementation of such tools remains low.
arXiv Detail & Related papers (2020-11-26T06:51:45Z)
- The Grey Hoodie Project: Big Tobacco, Big Tech, and the threat on academic integrity [3.198144010381572]
We show how Big Tech can actively distort the academic landscape to suit its needs.
By comparing the well-studied actions of another industry (Big Tobacco) to the current actions of Big Tech we see similar strategies employed by both industries.
We examine the funding of academic research as a tool used by Big Tech to put forward a socially responsible public image.
arXiv Detail & Related papers (2020-09-28T23:00:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.