"That's important, but...": How Computer Science Researchers Anticipate
Unintended Consequences of Their Research Innovations
- URL: http://arxiv.org/abs/2303.15536v1
- Date: Mon, 27 Mar 2023 18:21:29 GMT
- Title: "That's important, but...": How Computer Science Researchers Anticipate
Unintended Consequences of Their Research Innovations
- Authors: Kimberly Do, Rock Yuren Pang, Jiachen Jiang, Katharina Reinecke
- Abstract summary: We show that considering unintended consequences is generally seen as important but rarely practiced.
Principal barriers are a lack of formal process and strategy as well as the academic practice that prioritizes fast progress and publications.
We intend for our work to pave the way for routine explorations of the societal implications of technological innovations before, during, and after the research process.
- Score: 12.947525301829835
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computer science research has led to many breakthrough innovations but has
also been scrutinized for enabling technology that has negative, unintended
consequences for society. Given the increasing discussions of ethics in the
news and among researchers, we interviewed 20 researchers in various CS
sub-disciplines to identify whether and how they consider potential unintended
consequences of their research innovations. We show that considering unintended
consequences is generally seen as important but rarely practiced. Principal
barriers are a lack of formal process and strategy as well as the academic
practice that prioritizes fast progress and publications. Drawing on these
findings, we discuss approaches to support researchers in routinely considering
unintended consequences, from bringing diverse perspectives through community
participation to increasing incentives to investigate potential consequences.
We intend for our work to pave the way for routine explorations of the societal
implications of technological innovations before, during, and after the
research process.
Related papers
- Now, Later, and Lasting: Ten Priorities for AI Research, Policy, and Practice [63.20307830884542]
The next several decades may well be a turning point for humanity, comparable to the industrial revolution.
Launched a decade ago, the project is committed to a perpetual series of studies by multidisciplinary experts.
We offer ten recommendations for action that collectively address both the short- and long-term potential impacts of AI technologies.
arXiv Detail & Related papers (2024-04-06T22:18:31Z) - A Disruptive Research Playbook for Studying Disruptive Innovations [11.619658523864686]
We propose a research playbook with the goal of providing a guide to formulate compelling and socially relevant research questions.
We show it can be used to question the impact of two current disruptive technologies: AI and AR/VR.
arXiv Detail & Related papers (2024-02-20T19:13:36Z) - Academic competitions [61.592427413342975]
This chapter provides a survey of academic challenges in the context of machine learning and related fields.
We review the most influential competitions in the last few years and analyze challenges per area of knowledge.
The aims and goals of scientific challenges, their major achievements, and expectations for the next few years are reviewed.
arXiv Detail & Related papers (2023-12-01T01:01:04Z) - Responsible AI Considerations in Text Summarization Research: A Review
of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Human-Centered Responsible Artificial Intelligence: Current & Future
Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work is aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z) - On the importance of AI research beyond disciplines [7.022779279820803]
It is crucial to embrace interdisciplinary knowledge to understand the impact of technology on society.
The goal is to foster a research environment beyond disciplines that values diversity and creates, critiques and develops new conceptual and theoretical frameworks.
arXiv Detail & Related papers (2023-02-13T19:39:37Z) - Fairness in Recommender Systems: Research Landscape and Future
Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - The Privatization of AI Research(-ers): Causes and Potential
Consequences -- From university-industry interaction to public research
brain-drain? [0.0]
The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D.
This phenomenon is reflected in the perception of a brain drain of researchers from academia to industry.
We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook.
arXiv Detail & Related papers (2021-02-02T18:02:41Z) - Nose to Glass: Looking In to Get Beyond [0.0]
An increasing amount of research has been conducted under the banner of enhancing responsible artificial intelligence.
This research aims to address, alleviate, and eventually mitigate the harms brought on by the rollout of algorithmic systems.
However, implementation of such tools remains low.
arXiv Detail & Related papers (2020-11-26T06:51:45Z) - Learnings from Frontier Development Lab and SpaceML -- AI Accelerators
for NASA and ESA [57.06643156253045]
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator run as a public-private partnership between NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z) - Evolving Methods for Evaluating and Disseminating Computing Research [4.0318506932466445]
Social and technical trends have significantly changed methods for evaluating and disseminating computing research.
Traditional venues for reviewing and publishing, such as conferences and journals, worked effectively in the past.
Many conferences have seen large increases in the number of submissions.
Dissemination of research ideas has changed dramatically through publication venues such as arXiv.org and social media networks.
arXiv Detail & Related papers (2020-07-02T16:50:28Z)