It's Time to Do Something: Mitigating the Negative Impacts of Computing
Through a Change to the Peer Review Process
- URL: http://arxiv.org/abs/2112.09544v1
- Date: Fri, 17 Dec 2021 14:51:57 GMT
- Title: It's Time to Do Something: Mitigating the Negative Impacts of Computing
Through a Change to the Peer Review Process
- Authors: Brent Hecht, Lauren Wilcox, Jeffrey P. Bigham, Johannes Schöning,
Ehsan Hoque, Jason Ernst, Yonatan Bisk, Luigi De Russis, Lana Yarosh, Bushra
Anjum, Danish Contractor, Cathy Wu
- Abstract summary: We believe we can achieve substantial progress with a straightforward step: making a small change to the peer review process.
As we explain below, we hypothesize that our recommended change will force computing researchers to more deeply consider the negative impacts of their work.
We also expect that this change will incentivize research and policy that alleviates computing's negative impacts.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The computing research community needs to work much harder to address the
downsides of our innovations. Between the erosion of privacy, threats to
democracy, and automation's effect on employment (among many other issues), we
can no longer simply assume that our research will have a net positive impact
on the world. While bending the arc of computing innovation towards societal
benefit may at first seem intractable, we believe we can achieve substantial
progress with a straightforward step: making a small change to the peer review
process. As we explain below, we hypothesize that our recommended change will
force computing researchers to more deeply consider the negative impacts of
their work. We also expect that this change will incentivize research and
policy that alleviates computing's negative impacts.
Related papers
- From Efficiency Gains to Rebound Effects: The Problem of Jevons' Paradox in AI's Polarized Environmental Debate [69.05573887799203]
Much of this debate has concentrated on direct impact without addressing the significant indirect effects.
This paper examines how the problem of Jevons' Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption.
We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socio-economic analyses.
arXiv Detail & Related papers (2025-01-27T22:45:06Z)
- Understanding the Impact of Negative Prompts: When and How Do They Take Effect? [92.53724347718173]
This paper presents the first comprehensive study to uncover how and when negative prompts take effect.
Our empirical analysis identifies two primary behaviors of negative prompts.
Negative prompts can facilitate object inpainting with minimal alterations to the background via a simple adaptive algorithm.
arXiv Detail & Related papers (2024-06-05T05:42:46Z)
- The Case for Anticipating Undesirable Consequences of Computing Innovations Early, Often, and Across Computer Science [24.13786694863084]
Our society increasingly bears the burden of negative, if unintended, consequences of computing innovations.
Our prior work showed that many of us recognize the value of thinking preemptively about the perils our research can pose, yet we tend to address them only in hindsight.
How can we change the culture in which considering undesirable consequences of digital technology is deemed as important, but is not commonly done?
arXiv Detail & Related papers (2023-09-08T17:32:22Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- "That's important, but...": How Computer Science Researchers Anticipate Unintended Consequences of Their Research Innovations [12.947525301829835]
We show that considering unintended consequences is generally seen as important but rarely practiced.
Principal barriers are a lack of formal process and strategy as well as the academic practice that prioritizes fast progress and publications.
We intend for our work to pave the way for routine explorations of the societal implications of technological innovations before, during, and after the research process.
arXiv Detail & Related papers (2023-03-27T18:21:29Z)
- Learning to Influence Human Behavior with Offline Reinforcement Learning [70.7884839812069]
We focus on influence in settings where there is a need to capture human suboptimality.
Experimenting with humans online is potentially unsafe, and creating a high-fidelity simulator of the environment is often impractical.
We show that offline reinforcement learning can learn to effectively influence suboptimal humans by extending and combining elements of observed human-human behavior.
arXiv Detail & Related papers (2023-03-03T23:41:55Z)
- What's the Harm? Sharp Bounds on the Fraction Negatively Affected by Treatment [58.442274475425144]
We develop a robust inference algorithm that is efficient almost regardless of how and how fast these functions are learned.
We demonstrate our method in simulation studies and in a case study of career counseling for the unemployed.
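The abstract snippet above does not define the quantity being bounded, so as a hedged sketch (notation assumed, not taken from the paper): for a binary outcome with potential outcomes $Y(0), Y(1) \in \{0,1\}$ where $Y=1$ is the good outcome, the fraction negatively affected by treatment is the joint probability $P(Y(0)=1,\, Y(1)=0)$. This joint probability is never identified from the marginals alone, but it admits the classical Fréchet–Hoeffding sharp bounds:

$$
\max\bigl(0,\; p_0 - p_1\bigr)\;\le\; P\bigl(Y(0)=1,\, Y(1)=0\bigr)\;\le\;\min\bigl(p_0,\; 1 - p_1\bigr),
\qquad p_t = P\bigl(Y(t)=1\bigr).
$$

For example, if 60% of the untreated would have a good outcome ($p_0 = 0.6$) and 80% of the treated would ($p_1 = 0.8$), the fraction harmed lies between $\max(0, 0.6-0.8) = 0$ and $\min(0.6, 0.2) = 0.2$: an average benefit of 20 points is consistent with anywhere from 0% to 20% of individuals being made worse off, which is why bounding (rather than point-identifying) this fraction is the natural target.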
arXiv Detail & Related papers (2022-05-20T17:36:33Z)
- The Privatization of AI Research(-ers): Causes and Potential Consequences -- From university-industry interaction to public research brain-drain? [0.0]
The private sector is playing an increasingly important role in basic Artificial Intelligence (AI) R&D.
This phenomenon is reflected in the perception of a brain drain of researchers from academia to industry.
We find a growing net flow of researchers from academia to industry, particularly from elite institutions into technology companies such as Google, Microsoft and Facebook.
arXiv Detail & Related papers (2021-02-02T18:02:41Z)
- What Can We Do to Improve Peer Review in NLP? [69.11622020605431]
We argue that a part of the problem is that the reviewers and area chairs face a poorly defined task forcing apples-to-oranges comparisons.
There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
arXiv Detail & Related papers (2020-10-08T09:32:21Z)
- Mediating Artificial Intelligence Developments through Negative and Positive Incentives [5.0066859598912945]
We investigate how positive (rewards) and negative (punishments) incentives may beneficially influence the outcomes.
We show that, in several scenarios, rewarding those that follow safety measures may increase the development speed while ensuring safe choices.
arXiv Detail & Related papers (2020-10-01T13:43:32Z)
- Evolving Methods for Evaluating and Disseminating Computing Research [4.0318506932466445]
Social and technical trends have significantly changed methods for evaluating and disseminating computing research.
Traditional venues for reviewing and publishing, such as conferences and journals, worked effectively in the past.
Many conferences have seen large increases in the number of submissions.
Dissemination of research ideas has accelerated dramatically through publication venues such as arXiv.org and social media networks.
arXiv Detail & Related papers (2020-07-02T16:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.