Towards Leveraging News Media to Support Impact Assessment of AI Technologies
- URL: http://arxiv.org/abs/2411.02536v1
- Date: Mon, 04 Nov 2024 19:12:27 GMT
- Title: Towards Leveraging News Media to Support Impact Assessment of AI Technologies
- Authors: Mowafak Allaham, Kimon Kieslich, Nicholas Diakopoulos
- Abstract summary: Expert-driven frameworks for impact assessments (IAs) may inadvertently overlook the effects of AI technologies on the public's social behavior, policy, and the cultural and geographical contexts shaping the perception of AI and the impacts around its use.
This research explores the potential of fine-tuning LLMs on negative impacts of AI reported in a diverse sample of articles from 266 news domains spanning 30 countries around the world to incorporate more diversity into IAs.
- Score: 3.2566808526538873
- Abstract: Expert-driven frameworks for impact assessments (IAs) may inadvertently overlook the effects of AI technologies on the public's social behavior, policy, and the cultural and geographical contexts shaping the perception of AI and the impacts around its use. This research explores the potential of fine-tuning LLMs on negative impacts of AI reported in a diverse sample of articles from 266 news domains spanning 30 countries around the world to incorporate more diversity into IAs. Our findings highlight (1) the potential of fine-tuned open-source LLMs in supporting IA of AI technologies by generating high-quality negative impacts across four qualitative dimensions: coherence, structure, relevance, and plausibility, and (2) the efficacy of a small open-source LLM (Mistral-7B) fine-tuned on impacts from news media in capturing a wider range of categories of impacts where GPT-4 had coverage gaps.
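As a rough, non-authoritative illustration of the method the abstract describes, the sketch below fine-tunes Mistral-7B with LoRA adapters on headline/impact pairs using Hugging Face `transformers`, `peft`, and `datasets`. The dataset file `news_impacts.jsonl`, its field names, the prompt template, and all hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): LoRA fine-tuning of Mistral-7B on
# negative-impact statements mined from news coverage.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Mistral has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Train only small LoRA adapters instead of all 7B base parameters.
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# Hypothetical JSONL with {"headline": ..., "impact": ...} records.
data = load_dataset("json", data_files="news_impacts.jsonl", split="train")

def tokenize(example):
    # Teach the model to continue a news headline with an impact statement.
    text = f"Headline: {example['headline']}\nNegative impact: {example['impact']}"
    return tokenizer(text, truncation=True, max_length=512)

tokenized = data.map(tokenize, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-7b-impacts",
                           num_train_epochs=3,
                           per_device_train_batch_size=4,
                           learning_rate=2e-4),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```

LoRA keeps the trainable-parameter count small enough to fine-tune a 7B model on a single GPU, which fits the paper's framing of small open-source models as accessible alternatives to GPT-4.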
Related papers
- From Words to Worth: Newborn Article Impact Prediction with LLM [69.41680520058418]
This paper introduces a promising approach, leveraging the capabilities of fine-tuned LLMs to predict the future impact of newborn articles.
A comprehensive dataset has been constructed and released for fine-tuning the LLM, containing over 12,000 entries with corresponding titles, abstracts, and TNCSI_SP.
arXiv Detail & Related papers (2024-08-07T17:52:02Z)
- Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation [3.2566808526538873]
We use GPT-4 to generate scenarios both pre- and post-introduction of policy.
We then run a user study to evaluate these scenarios across four risk-assessment dimensions.
We find that this transparency legislation is perceived to be effective at mitigating harms in areas such as labor and well-being, but largely ineffective in areas such as social cohesion and security.
arXiv Detail & Related papers (2024-05-15T19:44:54Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment [3.660182910533372]
We show the potential of fine-tuned completion models to generate high-quality descriptions of AI's impacts in society.
We examine the generated impacts for coherence, structure, relevance, and plausibility.
We find that instruction-based models had gaps in producing certain categories of impacts.
arXiv Detail & Related papers (2024-01-31T17:43:04Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
We aim to broaden the perspective and capture the expectations of three stakeholder groups about the potential negative impacts of generative AI.
We apply scenario writing and use participatory foresight to delve into cognitively diverse imaginations of the future.
We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- Eliciting the Double-edged Impact of Digitalisation: a Case Study in Rural Areas [1.8707139489039097]
This paper reports a case study about the impact of digitalisation in remote mountain areas, in the context of a system for ordinary land management and hydro-geological risk control.
We highlight the increased stress caused by excess connectivity, the partial reduction of decision-making abilities, and the risk of marginalisation for certain types of stakeholders.
Our study contributes to the literature with: a set of impacts specific to the case, which can apply to similar contexts; an effective approach for impact elicitation; and a list of lessons learned from the experience.
arXiv Detail & Related papers (2023-06-08T10:01:35Z)
- Cross-Domain Policy Adaptation via Value-Guided Data Filtering [57.62692881606099]
Generalizing policies across different domains with dynamics mismatch poses a significant challenge in reinforcement learning.
We present the Value-Guided Data Filtering (VGDF) algorithm, which selectively shares transitions from the source domain based on the proximity of paired value targets.
arXiv Detail & Related papers (2023-05-28T04:08:40Z)
- Unpacking the Expressed Consequences of AI Research in Broader Impact Statements [23.3030110636071]
We present the results of a thematic analysis of a sample of statements written for the 2020 Neural Information Processing Systems conference.
The themes we identify fall into categories related to how consequences are expressed and which areas of impact are covered.
In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
arXiv Detail & Related papers (2021-05-11T02:57:39Z)
- Heterogeneous Demand Effects of Recommendation Strategies in a Mobile Application: Evidence from Econometric Models and Machine-Learning Instruments [73.7716728492574]
We study the effectiveness of various recommendation strategies in the mobile channel and their impact on consumers' utility and demand levels for individual products.
We find significant differences in effectiveness among various recommendation strategies.
We develop novel econometric instruments that capture product differentiation (isolation) based on deep-learning models of user-generated reviews.
arXiv Detail & Related papers (2021-02-20T22:58:54Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, and viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.