Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment
- URL: http://arxiv.org/abs/2401.18028v2
- Date: Mon, 20 May 2024 23:34:39 GMT
- Title: Evaluating the Capabilities of LLMs for Supporting Anticipatory Impact Assessment
- Authors: Mowafak Allaham, Nicholas Diakopoulos
- Abstract summary: We show the potential for generating high-quality impacts of AI in society by fine-tuning completion models.
We examine the generated impacts for coherence, structure, relevance, and plausibility.
We find that impacts produced by instruction-based models had gaps in the production of certain categories of impacts.
- Score: 3.660182910533372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaining insight into the potential negative impacts of emerging Artificial Intelligence (AI) technologies in society is a challenge for implementing anticipatory governance approaches. One approach to producing such insight is to use Large Language Models (LLMs) to support and guide experts in ideating and exploring the range of undesirable consequences of emerging technologies. However, performance evaluations of LLMs for such tasks are still needed, examining not only the general quality of generated impacts but also the range of impact types produced and the resulting biases. In this paper, we demonstrate the potential for generating high-quality and diverse impacts of AI in society by fine-tuning completion models (GPT-3 and Mistral-7B) on a diverse sample of articles from news media and comparing those outputs to the impacts generated by instruction-based models (GPT-4 and Mistral-7B-Instruct). We examine the generated impacts for coherence, structure, relevance, and plausibility and find that impacts generated using Mistral-7B, a small open-source model fine-tuned on impacts from the news media, tend to be qualitatively on par with impacts generated using a more capable and larger-scale model such as GPT-4. Moreover, we find that impacts produced by instruction-based models had gaps in the production of certain categories of impacts in comparison to fine-tuned models. This research highlights a potential bias in the range of impacts generated by state-of-the-art LLMs and the potential of aligning smaller LLMs on news media as a scalable alternative for generating high-quality and more diverse impacts in support of anticipatory governance approaches.
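As a rough illustration of the fine-tuning setup described above, the sketch below shows what training a completion model on technology/impact text could look like with Hugging Face transformers. The data file name, record format, and hyperparameters are assumptions for illustration, not the authors' setup.

```python
# Minimal causal-LM fine-tuning sketch. Assumptions: data file name/format
# and hyperparameters are illustrative, not the paper's exact configuration.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Hypothetical JSONL with one {"text": "<technology summary> Impact: <impact>"} per line.
dataset = load_dataset("json", data_files="impacts.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral-impacts", num_train_epochs=3,
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, learning_rate=2e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```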
Related papers
- Towards Leveraging News Media to Support Impact Assessment of AI Technologies [3.2566808526538873]
Expert-driven frameworks for impact assessments (IAs) may inadvertently overlook the effects of AI technologies on the public's social behavior, policy, and the cultural and geographical contexts shaping the perception of AI and the impacts around its use.
This research explores the potential of fine-tuning LLMs on negative impacts of AI reported in a diverse sample of articles from 266 news domains spanning 30 countries to incorporate more diversity into IAs.
arXiv Detail & Related papers (2024-11-04T19:12:27Z)
- DAG-aware Transformer for Causal Effect Estimation [0.8192907805418583]
Causal inference is a critical task across fields such as healthcare, economics, and the social sciences.
In this paper, we present a novel transformer-based method for causal inference that addresses the challenges faced by existing approaches.
The core innovation of our model lies in its integration of causal Directed Acyclic Graphs (DAGs) directly into the attention mechanism.
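The summary describes injecting causal DAGs into attention but not the exact formulation; one plausible reading is to mask attention so each variable attends only to its parents, sketched below in PyTorch (the masking scheme is an assumption, not the paper's method).

```python
# Sketch: attention restricted by a causal DAG's adjacency matrix (PyTorch).
# The masking scheme is an assumed reading of "DAG-aware attention",
# not the paper's exact formulation.
import torch
import torch.nn.functional as F

def dag_masked_attention(q, k, v, adjacency):
    """q, k, v: (batch, n_vars, d); adjacency[i, j] = 1 if j is a parent of i."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5          # (batch, n_vars, n_vars)
    mask = adjacency.bool() | torch.eye(adjacency.size(0), dtype=torch.bool,
                                        device=adjacency.device)
    scores = scores.masked_fill(~mask, float("-inf"))  # attend only to parents/self
    return F.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(2, 4, 8)
adj = torch.tensor([[0, 0, 0, 0],   # variable 0: no parents
                    [1, 0, 0, 0],   # 1 <- 0
                    [1, 0, 0, 0],   # 2 <- 0
                    [0, 1, 1, 0]])  # 3 <- 1, 2
out = dag_masked_attention(q, k, v, adj)
```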
arXiv Detail & Related papers (2024-10-13T23:17:58Z)
- Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration [74.09687562334682]
We introduce a novel training data attribution method called Debias and Denoise Attribution (DDA).
Our method significantly outperforms existing approaches, achieving an average AUC of 91.64%.
DDA exhibits strong generality and scalability across various sources and different-scale models like LLaMA2, QWEN2, and Mistral.
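The mechanics of DDA are not spelled out in this summary. As background, a common baseline for training data attribution scores a training example by the gradient similarity between its loss and a test example's loss (a TracIn-style score); the sketch below shows that generic baseline, not DDA itself.

```python
# Generic gradient-similarity attribution (TracIn-style baseline, NOT DDA).
import torch

def attribution_score(model, loss_fn, train_example, test_example):
    """Dot product of loss gradients w.r.t. model parameters: a large positive
    score suggests the training example pushes the model toward fitting the
    test example."""
    params = [p for p in model.parameters() if p.requires_grad]
    g_train = torch.autograd.grad(loss_fn(model, train_example), params)
    g_test = torch.autograd.grad(loss_fn(model, test_example), params)
    return sum((gt * ge).sum() for gt, ge in zip(g_train, g_test)).item()
```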
arXiv Detail & Related papers (2024-10-02T07:14:26Z)
- Using Generative Models to Produce Realistic Populations of the United Kingdom Windstorms [0.0]
This dissertation explores the application of generative models to produce realistic synthetic wind field data.
Three models, including standard GANs, WGAN-GP, and U-net diffusion models, were employed to generate wind maps of the UK.
The results reveal that while all models are effective in capturing the general spatial characteristics, each model exhibits distinct strengths and weaknesses.
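Of the three model families, WGAN-GP is distinguished by its gradient penalty on the critic; the sketch below shows the standard formulation of that penalty term in PyTorch (the generic textbook version, not the dissertation's code).

```python
# Standard WGAN-GP gradient penalty (generic formulation, not the
# dissertation's implementation). Assumes 4D image-like tensors (B, C, H, W).
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviation of the critic's gradient norm from 1 on random
    interpolates between real and fake samples."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    grad = torch.autograd.grad(critic(interp).sum(), interp, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```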
arXiv Detail & Related papers (2024-09-16T19:53:33Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
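The abstract pairs Optimal Transport with Shapley values to attribute a performance change to individual feature shifts. A permutation-sampling Shapley estimate over "which features are shifted" could look like the sketch below; this is a generic Monte-Carlo Shapley estimator, not the paper's Explanatory Performance Estimation.

```python
# Monte-Carlo Shapley attribution of a performance change to shifted features.
# Generic estimator; NOT the paper's Optimal-Transport-based method.
import random

def shapley_performance(features, perf_fn, n_samples=200):
    """perf_fn(S) = model performance when the features in set S take their
    shifted distribution. Returns each feature's average marginal contribution."""
    contrib = {f: 0.0 for f in features}
    for _ in range(n_samples):
        order = random.sample(features, len(features))  # random permutation
        shifted = set()
        prev = perf_fn(shifted)
        for f in order:
            shifted.add(f)
            cur = perf_fn(shifted)
            contrib[f] += (cur - prev) / n_samples      # marginal contribution
            prev = cur
    return contrib
```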
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Comprehensive Reassessment of Large-Scale Evaluation Outcomes in LLMs: A Multifaceted Statistical Approach [64.42462708687921]
Evaluations have revealed that factors such as scaling, training type, and architecture profoundly impact the performance of LLMs.
Our study embarks on a thorough re-examination of these LLMs, targeting the inadequacies in current evaluation methods.
This includes the application of ANOVA, Tukey HSD tests, GAMM, and clustering techniques.
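The named statistical tools are all available off the shelf; a minimal sketch with SciPy and statsmodels, using made-up benchmark scores grouped by model scale:

```python
# One-way ANOVA followed by Tukey HSD on (made-up) benchmark scores.
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

scores = {"7B": [61.2, 63.1, 60.8],
          "13B": [66.0, 65.4, 67.2],
          "70B": [72.5, 71.9, 73.0]}

print(f_oneway(*scores.values()))         # does scale affect scores at all?

values = [v for vs in scores.values() for v in vs]
groups = [g for g, vs in scores.items() for _ in vs]
print(pairwise_tukeyhsd(values, groups))  # which scale pairs differ?
```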
arXiv Detail & Related papers (2024-03-22T14:47:35Z)
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
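A comparison of this kind typically injects synthetic sensor faults and noise into inputs and tracks the accuracy drop; the sketch below illustrates one such sweep (the noise model and fault injection are illustrative assumptions, not the study's protocol).

```python
# Sketch: measure accuracy under additive sensor noise and a stuck sensor.
# Noise model and fault injection are illustrative assumptions.
import numpy as np

def accuracy(model_fn, X, y):
    return float(np.mean(model_fn(X) == y))

def perturbation_sweep(model_fn, X, y, sigmas=(0.0, 0.05, 0.1, 0.2)):
    results = {}
    for sigma in sigmas:
        noisy = X + np.random.normal(0.0, sigma, X.shape)  # additive noise
        results[f"noise={sigma}"] = accuracy(model_fn, noisy, y)
    stuck = X.copy()
    stuck[:, 0] = stuck[:, 0].mean()                       # sensor 0 stuck at mean
    results["stuck_sensor_0"] = accuracy(model_fn, stuck, y)
    return results
```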
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
- Predictability and Surprise in Large Generative Models [8.055204456718576]
Large-scale pre-training has emerged as a technique for creating capable, general-purpose generative models.
In this paper, we highlight a counterintuitive property of such models and discuss the policy implications of this property.
arXiv Detail & Related papers (2022-02-15T23:21:23Z)
- Unpacking the Expressed Consequences of AI Research in Broader Impact Statements [23.3030110636071]
We present the results of a thematic analysis of a sample of statements written for the 2020 Neural Information Processing Systems conference.
The themes we identify fall into categories related to how consequences are expressed and the areas of impact they cover.
In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
arXiv Detail & Related papers (2021-05-11T02:57:39Z)
- Heterogeneous Demand Effects of Recommendation Strategies in a Mobile Application: Evidence from Econometric Models and Machine-Learning Instruments [73.7716728492574]
We study the effectiveness of various recommendation strategies in the mobile channel and their impact on consumers' utility and demand levels for individual products.
We find significant differences in effectiveness among various recommendation strategies.
We develop novel econometric instruments that capture product differentiation (isolation) based on deep-learning models of user-generated reviews.
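The construction of those instruments is not detailed here; one way to operationalize "isolation" from review text is the distance between a product's review embedding and its category centroid, sketched below (an assumed construction for illustration, not the paper's design).

```python
# Sketch: product "isolation" as embedding distance from the category centroid.
# The embedding source and distance choice are assumptions, not the paper's design.
import numpy as np

def isolation_scores(review_embeddings):
    """review_embeddings: dict product_id -> mean embedding of its reviews.
    Products far from the centroid are more differentiated (isolated)."""
    emb = np.stack(list(review_embeddings.values()))
    centroid = emb.mean(axis=0)
    return {pid: float(np.linalg.norm(e - centroid))
            for pid, e in review_embeddings.items()}
```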
arXiv Detail & Related papers (2021-02-20T22:58:54Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
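Of the four attacks, membership inference has the simplest baseline: threshold the model's confidence on a candidate input, exploiting overconfidence on training data. A minimal sketch of that generic baseline (not ML-Doctor's implementation):

```python
# Confidence-threshold membership inference (generic baseline,
# NOT ML-Doctor's implementation).
import numpy as np

def membership_guess(predict_proba, X, threshold=0.9):
    """Guess 'member' when the model's top-class confidence exceeds a
    threshold; models are often more confident on their training data."""
    confidence = predict_proba(X).max(axis=1)
    return confidence > threshold
```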
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.