Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
- URL: http://arxiv.org/abs/2105.04760v2
- Date: Sat, 22 May 2021 21:36:14 GMT
- Title: Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
- Authors: Priyanka Nanayakkara, Jessica Hullman, Nicholas Diakopoulos
- Abstract summary: We present the results of a thematic analysis of a sample of statements written for the 2020 Neural Information Processing Systems conference.
The themes we identify fall into categories related to how consequences are expressed and areas of impacts expressed.
In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
- Score: 23.3030110636071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The computer science research community and the broader public have become
increasingly aware of negative consequences of algorithmic systems. In
response, the top-tier Neural Information Processing Systems (NeurIPS)
conference for machine learning and artificial intelligence research required
that authors include a statement of broader impact to reflect on potential
positive and negative consequences of their work. We present the results of a
qualitative thematic analysis of a sample of statements written for the 2020
conference. The themes we identify broadly fall into categories related to how
consequences are expressed (e.g., valence, specificity, uncertainty), areas of
impacts expressed (e.g., bias, the environment, labor, privacy), and
researchers' recommendations for mitigating negative consequences in the
future. In light of our results, we offer perspectives on how the broader
impact statement can be implemented in future iterations to better align with
potential goals.
Related papers
- The Impact of Human Aspects on the Interactions Between Software Developers and End-Users in Software Engineering: A Systematic Literature Review [10.307654003138401]
  We present a systematic review of studies on human aspects affecting developer-user interactions.
  We identified various human aspects affecting developer-user interactions in 46 studies.
  Our findings suggest the importance of leveraging positive effects and addressing negative effects in developer-user interactions.
  arXiv Detail & Related papers (2024-05-08T03:38:36Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
  In practice, causal researchers do not have a single outcome in mind a priori.
  In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
  We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
  arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- The Social Impact of Generative AI: An Analysis on ChatGPT [0.7401425472034117]
  The rapid development of Generative AI models has sparked heated discussions regarding their benefits, limitations, and associated risks.
  Generative models hold immense promise across multiple domains, such as healthcare, finance, and education, to name a few.
  This paper adopts a methodology to delve into the societal implications of Generative AI tools, focusing primarily on the case of ChatGPT.
  arXiv Detail & Related papers (2024-03-07T17:14:22Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
  We aim to broaden the perspective and capture the expectations of three stakeholder groups regarding the potential negative impacts of generative AI.
  We apply scenario writing and participatory foresight to delve into cognitively diverse imaginations of the future.
  We conclude by discussing the usefulness of scenario writing and participatory foresight as a toolbox for generative AI impact assessment.
  arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- Predictable Artificial Intelligence [67.79118050651908]
  We argue that achieving predictability is crucial for fostering trust, liability, control, alignment, and safety of AI ecosystems.
  This paper aims to elucidate the questions, hypotheses, and challenges relevant to Predictable AI.
  arXiv Detail & Related papers (2023-10-09T21:36:21Z)
- Eliciting the Double-edged Impact of Digitalisation: a Case Study in Rural Areas [1.8707139489039097]
  This paper reports a case study of the impact of digitalisation in remote mountain areas, in the context of a system for ordinary land management and hydro-geological risk control.
  We highlight increased stress due to excess connectivity, a partial reduction of decision-making abilities, and the risk of marginalisation for certain types of stakeholders.
  Our study contributes to the literature with a set of impacts specific to the case, which can apply to similar contexts; an effective approach for impact elicitation; and a list of lessons learned from the experience.
  arXiv Detail & Related papers (2023-06-08T10:01:35Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
  We review the concepts and notions of fairness put forward in the area in the recent past.
  We present an overview of how research in this field is currently operationalized.
  Overall, our analysis of recent works points to certain research gaps.
  arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
  We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
  A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
  arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Heterogeneous Demand Effects of Recommendation Strategies in a Mobile Application: Evidence from Econometric Models and Machine-Learning Instruments [73.7716728492574]
  We study the effectiveness of various recommendation strategies in the mobile channel and their impact on consumers' utility and demand levels for individual products.
  We find significant differences in effectiveness among various recommendation strategies.
  We develop novel econometric instruments that capture product differentiation (isolation) based on deep-learning models of user-generated reviews.
  arXiv Detail & Related papers (2021-02-20T22:58:54Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
  NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure."
  We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, and viable proxies for assessing harms in the widest sense.
  arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.