BLIP: Facilitating the Exploration of Undesirable Consequences of Digital Technologies
- URL: http://arxiv.org/abs/2405.06783v1
- Date: Fri, 10 May 2024 19:21:19 GMT
- Title: BLIP: Facilitating the Exploration of Undesirable Consequences of Digital Technologies
- Authors: Rock Yuren Pang, Sebastin Santy, René Just, Katharina Reinecke
- Abstract summary: We introduce BLIP, a system that extracts real-world undesirable consequences of technology from online articles.
In two user studies with 15 researchers, BLIP substantially increased the number and diversity of undesirable consequences they could list.
BLIP helped them identify undesirable consequences relevant to their ongoing projects, made them aware of undesirable consequences they "had never considered," and inspired them to reflect on their own experiences with technology.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Digital technologies have positively transformed society, but they have also led to undesirable consequences not anticipated at the time of design or development. We posit that insights into past undesirable consequences can help researchers and practitioners gain awareness and anticipate potential adverse effects. To test this assumption, we introduce BLIP, a system that extracts real-world undesirable consequences of technology from online articles, summarizes and categorizes them, and presents them in an interactive, web-based interface. In two user studies with 15 researchers in various computer science disciplines, we found that BLIP substantially increased the number and diversity of undesirable consequences they could list in comparison to relying on prior knowledge or searching online. Moreover, BLIP helped them identify undesirable consequences relevant to their ongoing projects, made them aware of undesirable consequences they "had never considered," and inspired them to reflect on their own experiences with technology.
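The abstract describes a three-stage pipeline (extract, summarize, categorize) whose output feeds an interactive web interface. The sketch below is a minimal, hypothetical illustration of that flow, not the authors' implementation: the keyword taxonomy, the Consequence dataclass, and the truncation-based stand-in for summarization are all assumptions made purely for illustration.

```python
# Illustrative sketch only: NOT the BLIP authors' implementation.
# Shows, under assumed names and keyword heuristics, how article text
# could be turned into categorized consequence records for a web front end.
from dataclasses import dataclass
import re

# Hypothetical category keywords; the real system's taxonomy may differ.
CATEGORIES = {
    "privacy": ["surveillance", "privacy", "tracking"],
    "misinformation": ["misinformation", "deepfake", "fake news"],
    "bias": ["bias", "discrimination", "unfair"],
}

@dataclass
class Consequence:
    sentence: str      # the extracted consequence statement
    summary: str       # a shortened form for display
    category: str      # assigned category label
    source_url: str    # article the statement came from

def extract_consequences(article_text: str, source_url: str) -> list[Consequence]:
    """Pull out sentences that mention a harm keyword and label them."""
    results = []
    for sentence in re.split(r"(?<=[.!?])\s+", article_text):
        lowered = sentence.lower()
        for category, keywords in CATEGORIES.items():
            if any(k in lowered for k in keywords):
                results.append(
                    Consequence(
                        sentence=sentence.strip(),
                        # naive truncation as a stand-in for real summarization
                        summary=sentence.strip()[:120],
                        category=category,
                        source_url=source_url,
                    )
                )
                break
    return results

if __name__ == "__main__":
    demo = ("The new app was praised for convenience. Critics warned that its "
            "location tracking enables surveillance of vulnerable users.")
    for c in extract_consequences(demo, "https://example.com/article"):
        print(c.category, "->", c.summary)
```

In practice each stage would presumably rely on learned models rather than keyword matching, but the overall shape (article text in, categorized and summarized consequence records out) is what an interactive interface like the one described would consume.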
Related papers
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM Systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Deepfake Media Forensics: State of the Art and Challenges Ahead [51.33414186878676]
AI-generated synthetic media, also called Deepfakes, have influenced many domains, from entertainment to cybersecurity.
Deepfake detection has become a vital area of research, focusing on identifying subtle inconsistencies and artifacts with machine learning techniques.
This paper reviews the primary algorithms that address these challenges, examining their advantages, limitations, and future prospects.
arXiv Detail & Related papers (2024-08-01T08:57:47Z)
- The Case for Anticipating Undesirable Consequences of Computing Innovations Early, Often, and Across Computer Science [24.13786694863084]
Our society increasingly bears the burden of negative, if unintended, consequences of computing innovations.
Our prior work showed that many of us recognize the value of thinking preemptively about the perils our research can pose, yet we tend to address them only in hindsight.
How can we change a culture in which considering undesirable consequences of digital technology is deemed important but is not commonly done?
arXiv Detail & Related papers (2023-09-08T17:32:22Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- "That's important, but...": How Computer Science Researchers Anticipate Unintended Consequences of Their Research Innovations [12.947525301829835]
We show that considering unintended consequences is generally seen as important but rarely practiced.
Principal barriers are a lack of formal process and strategy as well as the academic practice that prioritizes fast progress and publications.
We intend for our work to pave the way for routine explorations of the societal implications of technological innovations before, during, and after the research process.
arXiv Detail & Related papers (2023-03-27T18:21:29Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Toward Explainable Users: Using NLP to Enable AI to Understand Users' Perceptions of Cyber Attacks [2.099922236065961]
To the best of our knowledge, this paper is the first to introduce the use of AI techniques to explain and model users' behavior and their perceptions of a given context.
arXiv Detail & Related papers (2021-06-03T17:17:16Z)
- Unpacking the Expressed Consequences of AI Research in Broader Impact Statements [23.3030110636071]
We present the results of a thematic analysis of a sample of statements written for the 2020 Neural Information Processing Systems conference.
The themes we identify fall into categories related to how consequences are expressed and the areas of impact expressed.
In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
arXiv Detail & Related papers (2021-05-11T02:57:39Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems [35.763408055286355]
Learning to recognize and avoid negative side effects of an agent's actions is critical to improve the safety and reliability of autonomous systems.
Mitigating negative side effects is an emerging research topic that is attracting increased attention due to the rapid growth in the deployment of AI systems.
This article provides a comprehensive overview of different forms of negative side effects and the recent research efforts to address them.
arXiv Detail & Related papers (2020-08-24T16:48:46Z)