Addressing the Unforeseen Harms of Technology CCC Whitepaper
- URL: http://arxiv.org/abs/2408.06431v1
- Date: Mon, 12 Aug 2024 18:16:37 GMT
- Title: Addressing the Unforeseen Harms of Technology CCC Whitepaper
- Authors: Nadya Bliss, Kevin Butler, David Danks, Ufuk Topcu, Matthew Turk,
- Abstract summary: This whitepaper explores how to address possible harmful consequences of computing technologies.
It starts from the assumption that very few harms due to technology are intentional or deliberate.
There are concrete steps that can be taken to address the difficult problem of anticipating and responding to potential harms from new technologies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have seen increased awareness of the potential significant impacts of computing technologies, both positive and negative. This whitepaper explores how to address possible harmful consequences of computing technologies that might be difficult to anticipate, and thereby mitigate or address. It starts from the assumption that very few harms due to technology are intentional or deliberate; rather, the vast majority result from failure to recognize and respond to them prior to deployment. Nonetheless, there are concrete steps that can be taken to address the difficult problem of anticipating and responding to potential harms from new technologies.
Related papers
- The Butterfly Effect of Technology: How Various Factors accelerate or hinder the Arrival of Technological Singularity
This article explores the concept of technological singularity and the factors that could accelerate or hinder its arrival.
The butterfly effect is used as a framework to understand how seemingly small changes in complex systems can have significant and unpredictable outcomes.
arXiv Detail & Related papers (2025-02-16T11:38:35Z)
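The article's framing is conceptual, but the underlying dynamical-systems point is easy to demonstrate. Below is a minimal sketch (not from the paper) using the logistic map, a standard toy model of chaos, to show how a perturbation of one part in a billion grows into a completely different trajectory.

```python
# Minimal illustration of the butterfly effect: in a chaotic system,
# a tiny change in the starting point produces wildly different outcomes.
def logistic_map(x0, r=4.0, steps=50):
    """Iterate x_{n+1} = r * x_n * (1 - x_n) and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.400000000)   # baseline trajectory
b = logistic_map(0.400000001)   # perturbed by one part in a billion

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: |a - b| = {abs(a[n] - b[n]):.6f}")
```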
- Open Problems in Machine Unlearning for AI Safety
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
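For readers unfamiliar with the term, the operational gold standard for unlearning is retraining from scratch without the forgotten records. The sketch below (a generic baseline, not the paper's contribution) makes that concrete; the open problems concern matching this guarantee cheaply, and the paper's point is that even perfect unlearning is not a complete safety solution.

```python
# Naive "exact unlearning": retrain from scratch without the forget set.
# This is the gold standard that approximate unlearning methods try to
# match at lower cost; it does not scale, which motivates the open problems.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

forget_idx = np.arange(50)                       # records to be forgotten
keep = np.setdiff1d(np.arange(len(X)), forget_idx)

unlearned = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
# The retrained model contains no influence from the forget set,
# because that data was never seen during training.
print("weight change:", np.linalg.norm(model.coef_ - unlearned.coef_))
```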
- Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community
We look at the origins and risks of AI hype to the research community and society more broadly.
We propose a set of measures that researchers, regulators, and the public can take to mitigate these risks and reduce the prevalence of unfounded claims about the technology.
arXiv Detail & Related papers (2024-08-08T20:47:17Z)
- BLIP: Facilitating the Exploration of Undesirable Consequences of Digital Technologies
We introduce BLIP, a system that extracts real-world undesirable consequences of technology from online articles.
In two user studies with 15 researchers, BLIP substantially increased the number and diversity of undesirable consequences they could list.
BLIP helped them identify undesirable consequences relevant to their ongoing projects, made them aware of undesirable consequences they "had never considered," and inspired them to reflect on their own experiences with technology.
arXiv Detail & Related papers (2024-05-10T19:21:19Z)
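The summary does not describe BLIP's extraction pipeline, so the following is only a rough guess at the shape of such a system: a toy extractor that flags sentences pairing a technology term with harm-related language, standing in for whatever model BLIP actually uses.

```python
# A toy consequence-extractor in the spirit of BLIP (the real system's
# pipeline is not described in this summary): flag sentences in an article
# that pair a technology term with harm-related language.
import re

TECH = {"algorithm", "app", "platform", "ai", "facial recognition"}
HARM = {"bias", "harass", "misinformation", "surveillance", "discriminat"}

def extract_consequences(article: str) -> list[str]:
    """Return sentences mentioning both a technology and a potential harm."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", article):
        low = sentence.lower()
        if any(t in low for t in TECH) and any(h in low for h in HARM):
            hits.append(sentence.strip())
    return hits

text = ("The app helped users find rides quickly. Critics note the "
        "platform's pricing algorithm showed bias against some areas.")
print(extract_consequences(text))
```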
- A Scalable and Automated Framework for Tracking the likely Adoption of Emerging Technologies
This paper presents a scalable and automated framework for tracking likely adoption and/or rejection of new technologies from a large landscape of adopters.
A large corpus of social media texts containing references to emerging technologies was compiled.
Positive sentiment in these texts signals an increased likelihood that users will accept, adopt, integrate, and/or use a technology, while negative sentiment signals an increased likelihood that adopters will reject it.
arXiv Detail & Related papers (2024-01-16T16:42:14Z)
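The abstract sketches a sentiment-to-adoption pipeline without giving details. The following is a minimal mock-up under that reading, using a trivial word-list scorer where the real framework presumably uses a trained sentiment model.

```python
# Sketch of sentiment-based adoption tracking: score mentions of each
# technology and treat the balance of positive vs. negative mentions as a
# rough signal of likely adoption or rejection.
import re
from collections import defaultdict

POS = {"love", "great", "useful", "adopt", "excited"}
NEG = {"hate", "useless", "creepy", "avoid", "broken"}

def score(text: str) -> int:
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POS for w in words) - sum(w in NEG for w in words)

posts = [
    ("quantum-sensors", "excited to adopt these, really useful"),
    ("smart-glasses", "creepy and useless, I will avoid them"),
    ("smart-glasses", "honestly these are great for navigation"),
]

tally = defaultdict(list)
for tech, text in posts:
    tally[tech].append(score(text))

for tech, scores in tally.items():
    trend = sum(scores) / len(scores)
    print(f"{tech}: mean sentiment {trend:+.2f} "
          f"({'adoption' if trend > 0 else 'rejection'} signal)")
```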
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- The Case for Anticipating Undesirable Consequences of Computing Innovations Early, Often, and Across Computer Science
Our society increasingly bears the burden of negative, if unintended, consequences of computing innovations.
Our prior work showed that many of us recognize the value of thinking preemptively about the perils our research can pose, yet we tend to address them only in hindsight.
How can we change a culture in which considering the undesirable consequences of digital technology is deemed important but is not commonly done?
arXiv Detail & Related papers (2023-09-08T17:32:22Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
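No specific methods from the workshop are listed here; as a generic illustration of why AI is applied to data volumes humans cannot triage, this sketch (an assumption, not workshop content) runs an off-the-shelf anomaly detector over synthetic connection records.

```python
# Illustrative only: flag anomalous network connections with an
# unsupervised detector, standing in for the human triage that does not
# scale to millions of records per day.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per connection: [bytes sent, duration in seconds]
normal = rng.normal(loc=[500, 2.0], scale=[100, 0.5], size=(1000, 2))
exfil = rng.normal(loc=[50000, 30.0], scale=[5000, 5.0], size=(5, 2))
traffic = np.vstack([normal, exfil])

detector = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = detector.predict(traffic)            # -1 marks suspected anomalies
print("flagged connections:", np.where(flags == -1)[0])
```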
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
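As one concrete example of the kind of practical safeguard such surveys catalog (chosen here for illustration, not taken from the paper), a runtime monitor can refuse to act on low-confidence DNN outputs.

```python
# One practical safety pattern for DNN deployment: route low-confidence
# predictions to a fallback (human review, safe default) instead of acting
# on them. The threshold is application-specific and must be validated.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def safe_decision(logits, threshold=0.9):
    probs = softmax(np.asarray(logits, dtype=float))
    label, conf = int(probs.argmax()), float(probs.max())
    if conf < threshold:
        return ("DEFER", conf)        # hand off to a human or safe default
    return (label, conf)

print(safe_decision([4.0, 0.5, 0.1]))   # confident -> act
print(safe_decision([1.2, 1.0, 0.9]))   # ambiguous -> defer
```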
- The Rise of Technology in Crime Prevention: Opportunities, Challenges and Practitioners Perspectives
The research community has a moral obligation to consider how contemporary technological developments might help reduce crime worldwide.
This paper provides a discussion of how a sample of contemporary hardware and software-based technologies might help further reduce criminal actions.
After a thorough analysis of a wide array of technologies, we believe that the adoption of novel technologies by vulnerable individuals, victim support organisations and law enforcement can help reduce the occurrence of criminal activity.
arXiv Detail & Related papers (2021-01-26T16:02:40Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Dos and Don'ts of Machine Learning in Computer Security
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
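A representative pitfall from this line of work is data snooping via unrealistic train/test splits. The sketch below (an illustration, not the paper's code) contrasts a random split, which leaks future samples into training, with the temporally consistent split a deployed security system would actually face.

```python
# Pitfall illustration: security data arrives over time, so a random
# train/test split leaks future knowledge (new malware families, shifted
# traffic) into training and inflates measured performance.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
days = np.sort(rng.uniform(0, 365, n))      # arrival day of each sample

# Random split: test samples are scattered across the whole year,
# including days earlier than much of the training data.
idx = rng.permutation(n)
rand_test = idx[800:]

# Temporal split: train strictly on the past, test strictly on the future.
cutoff = np.searchsorted(days, 300)
temp_test = np.arange(cutoff, n)

print(f"random split test days:   {days[rand_test].min():5.1f} "
      f".. {days[rand_test].max():5.1f}")
print(f"temporal split test days: {days[temp_test].min():5.1f} "
      f".. {days[temp_test].max():5.1f}")
```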
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
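For background, the canonical white-box attack in this literature is the fast gradient sign method (FGSM). The sketch below applies it to a toy logistic-regression "detector"; the exaggerated step size and hand-set weights are illustrative assumptions, not anything from the paper.

```python
# Fast Gradient Sign Method (FGSM) against a logistic-regression "detector":
# nudge each feature by epsilon in the direction that increases the loss,
# flipping the model's decision with a small, structured perturbation.
import numpy as np

w = np.array([2.0, -1.5, 0.5])   # toy detector weights
b = -0.2

def predict(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # P(malicious)

x = np.array([1.0, 0.5, 0.3])                   # sample flagged as malicious
y = 1.0                                          # true label: malicious

# Gradient of the logistic loss w.r.t. the input is (p - y) * w.
grad_x = (predict(x) - y) * w
epsilon = 0.8                                    # exaggerated for the toy
x_adv = x + epsilon * np.sign(grad_x)            # FGSM step

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")
```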
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.