Using Scenario-Writing for Identifying and Mitigating Impacts of Generative AI
- URL: http://arxiv.org/abs/2410.23704v1
- Date: Thu, 31 Oct 2024 07:48:58 GMT
- Title: Using Scenario-Writing for Identifying and Mitigating Impacts of Generative AI
- Authors: Kimon Kieslich, Nicholas Diakopoulos, Natali Helberger
- Abstract summary: Impact assessments have emerged as a common way to identify the negative and positive implications of AI deployment.
But it is also essential to critically interrogate the current literature and practice on impact assessment.
In this provocation, we do just that by first critiquing the current impact assessment literature and then proposing a novel approach.
- Score: 3.2566808526538873
- Abstract: Impact assessments have emerged as a common way to identify the negative and positive implications of AI deployment, with the goal of avoiding the downsides of its use. It is undeniable that impact assessments are important - especially in the case of rapidly proliferating technologies such as generative AI. But it is also essential to critically interrogate the current literature and practice on impact assessment, to identify its shortcomings, and to develop new approaches that are responsive to these limitations. In this provocation, we do just that by first critiquing the current impact assessment literature and then proposing a novel approach that addresses our concerns: Scenario-Based Sociotechnical Envisioning.
Related papers
- On the Effectiveness of Adversarial Training on Malware Classifiers [14.069462668836328]
Adversarial Training (AT) has been widely applied to harden learning-based classifiers against adversarial evasive attacks.
Previous work seems to suggest robustness is a task-dependent property of AT.
We argue it is a more complex problem that requires exploring AT and the intertwined roles played by certain factors within data.
arXiv Detail & Related papers (2024-12-24T06:55:53Z)
- Adversarial Training: A Survey [130.89534734092388]
Adversarial training (AT) refers to integrating adversarial examples into the training process.
Recent studies have demonstrated the effectiveness of AT in improving the robustness of deep neural networks against diverse adversarial attacks.
arXiv Detail & Related papers (2024-10-19T08:57:35Z)
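The two adversarial-training entries above describe AT only in general terms. As a point of reference, here is a minimal sketch of the basic AT loop using single-step FGSM perturbations in PyTorch; the toy model, data, and step size are illustrative assumptions and are not taken from either paper.

```python
# Minimal adversarial-training (AT) sketch: each batch is perturbed with a
# single-step FGSM attack before the usual gradient update.
# The toy model, random data, and epsilon below are illustrative assumptions.
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, eps=0.1):
    """Return x shifted by eps in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

def adversarial_training_step(model, loss_fn, optimizer, x, y, eps=0.1):
    """One AT step: craft adversarial examples, then train on them."""
    model.eval()                      # freeze batch-norm/dropout while attacking
    x_adv = fgsm_perturb(model, loss_fn, x, y, eps)
    model.train()
    optimizer.zero_grad()             # clear gradients left over from the attack
    loss = loss_fn(model(x_adv), y)   # optimize the loss on perturbed inputs
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    loss_fn = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
    x, y = torch.randn(32, 20), torch.randint(0, 2, (32,))  # random stand-in data
    print(adversarial_training_step(model, loss_fn, optimizer, x, y))
```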
- IntOPE: Off-Policy Evaluation in the Presence of Interference [23.167697252901398]
Off-Policy Evaluation (OPE) is employed to assess the potential impact of a hypothetical policy.
The paper proposes IntIPW, an IPW-style estimator that integrates marginalized importance weights to account for both individual actions and the influence of adjacent entities.
arXiv Detail & Related papers (2024-08-24T06:07:25Z)
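For context on the IPW-style estimation mentioned above, a minimal sketch of standard inverse propensity weighting for off-policy evaluation in a contextual-bandit setting follows; IntIPW's interference-aware, marginalized weights are not reproduced here, and all names and data below are illustrative.

```python
# Minimal sketch of standard inverse propensity weighting (IPW) for
# off-policy evaluation in a contextual-bandit setting. IntIPW extends this
# idea with marginalized weights that also account for the actions of
# adjacent entities; that extension is not reproduced here.
import numpy as np

def ipw_value(rewards, behavior_probs, target_probs):
    """Estimate the target policy's value from behavior-policy logs.

    rewards[i]        : observed reward for logged interaction i
    behavior_probs[i] : probability the behavior policy assigned to the logged action
    target_probs[i]   : probability the target policy assigns to the same action
    """
    weights = target_probs / behavior_probs   # per-sample importance weights
    return float(np.mean(weights * rewards))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1_000
    rewards = rng.random(n)                    # synthetic logged rewards
    behavior_probs = rng.uniform(0.2, 0.8, n)  # propensities of the logging policy
    target_probs = rng.uniform(0.2, 0.8, n)    # propensities of the policy under evaluation
    print(f"Estimated target-policy value: {ipw_value(rewards, behavior_probs, target_probs):.3f}")
```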
- Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation [3.2566808526538873]
We use GPT-4 to generate scenarios both pre- and post-introduction of policy.
We then run a user study to evaluate these scenarios across four risk-assessment dimensions.
We find that this transparency legislation is perceived to be effective at mitigating harms in areas such as labor and well-being, but largely ineffective in areas such as social cohesion and security.
arXiv Detail & Related papers (2024-05-15T19:44:54Z)
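A minimal sketch of how such pre-/post-policy scenario generation might be implemented with the OpenAI Python client is shown below; the prompt wording, model name, and policy clause are assumptions for illustration, not the authors' protocol.

```python
# Minimal sketch of generating impact scenarios with and without a policy in
# the prompt, roughly in the spirit of the pre-/post-policy comparison above.
# The prompt wording, model name, and policy text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

BASE_PROMPT = (
    "Write a short fictional scenario (150-200 words) about how generative AI "
    "affects an ordinary person's daily life five years from now."
)
POLICY_TEXT = "Providers must clearly label all AI-generated content."  # hypothetical policy clause

def generate_scenario(with_policy: bool) -> str:
    prompt = BASE_PROMPT
    if with_policy:
        prompt += f" Assume the following regulation is in force: {POLICY_TEXT}"
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    pre = generate_scenario(with_policy=False)   # scenario before the policy is introduced
    post = generate_scenario(with_policy=True)   # scenario after the policy is introduced
    print(pre, "\n---\n", post)
```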
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
We aim to broaden the perspective and capture the expectations of three stakeholder groups about the potential negative impacts of generative AI.
We apply scenario writing and use participatory foresight to delve into cognitively diverse imaginations of the future.
We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- Eliciting the Double-edged Impact of Digitalisation: a Case Study in Rural Areas [1.8707139489039097]
This paper reports a case study about the impact of digitalisation in remote mountain areas, in the context of a system for ordinary land management and hydro-geological risk control.
We highlight the higher stress due to the excess of connectivity, the partial reduction of decision-making abilities, and the risk of marginalisation for certain types of stakeholders.
Our study contributes to the literature with: a set of impacts specific to the case, which can apply to similar contexts; an effective approach for impact elicitation; and a list of lessons learned from the experience.
arXiv Detail & Related papers (2023-06-08T10:01:35Z)
- Balancing detectability and performance of attacks on the control channel of Markov Decision Processes [77.66954176188426]
We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
arXiv Detail & Related papers (2021-09-15T09:13:10Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate attacks on such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers indexed on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.