AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements
- URL: http://arxiv.org/abs/2111.01705v1
- Date: Tue, 2 Nov 2021 16:17:12 GMT
- Title: AI Ethics Statements -- Analysis and lessons learnt from NeurIPS Broader Impact Statements
- Authors: Carolyn Ashurst, Emmie Hine, Paul Sedille, Alexis Carlier
- Abstract summary: In 2020, the machine learning (ML) conference NeurIPS broke new ground by requiring that all papers include a broader impact statement.
This requirement was removed in 2021, in favour of a checklist approach.
We have created a dataset containing the impact statements from all NeurIPS 2020 papers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ethics statements have been proposed as a mechanism to increase transparency
and promote reflection on the societal impacts of published research. In 2020,
the machine learning (ML) conference NeurIPS broke new ground by requiring that
all papers include a broader impact statement. This requirement was removed in
2021, in favour of a checklist approach. The 2020 statements therefore provide
a unique opportunity to learn from the broader impact experiment: to
investigate the benefits and challenges of this and similar governance
mechanisms, and to gain insight into how ML researchers think about
the societal impacts of their own work. Such learning is needed as NeurIPS and
other venues continue to question and adapt their policies. To enable this, we
have created a dataset containing the impact statements from all NeurIPS 2020
papers, along with additional information such as affiliation type, location
and subject area, and a simple visualisation tool for exploration. We also
provide an initial quantitative analysis of the dataset, covering
representation, engagement, common themes, and willingness to discuss potential
harms alongside benefits. We investigate how these vary by geography,
affiliation type and subject area. Drawing on these findings, we discuss the
potential benefits and negative outcomes of ethics statement requirements, and
their possible causes and associated challenges. These lead us to several
lessons to be learnt from the 2020 requirement: (i) the importance of creating
the right incentives, (ii) the need for clear expectations and guidance, and
(iii) the importance of transparency and constructive deliberation. We
encourage other researchers to use our dataset to provide additional analysis,
to further our understanding of how researchers responded to this requirement,
and to investigate the benefits and challenges of this and related mechanisms.
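As an illustration of the kind of exploration the authors invite, here is a minimal sketch of how one might load and summarise such a dataset. The file name (`neurips2020_impact_statements.csv`) and the column names (`impact_statement`, `affiliation_type`, `mentions_harms`) are assumptions for illustration only, not the dataset's actual schema:

```python
# Minimal exploration sketch for an impact-statement dataset.
# NOTE: file name and all column names below are hypothetical;
# adapt them to the real schema of the released dataset.
import pandas as pd

df = pd.read_csv("neurips2020_impact_statements.csv")

# Crude engagement proxy: statement length in words
# (missing statements count as 0 words).
df["word_count"] = df["impact_statement"].fillna("").str.split().str.len()

# Willingness to discuss harms alongside benefits, broken down
# by affiliation type (geography or subject area would work the same way).
summary = (
    df.groupby("affiliation_type")
      .agg(
          n_papers=("impact_statement", "size"),
          median_words=("word_count", "median"),
          harms_rate=("mentions_harms", "mean"),
      )
      .sort_values("harms_rate", ascending=False)
)
print(summary)
```

The paper's own analysis is richer than this sketch, but grouping by affiliation type, geography, or subject area in this way mirrors the breakdowns the abstract describes.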
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law [65.87885628115946]
Large language models (LLMs) are revolutionizing the landscapes of finance, healthcare, and law.
We highlight the instrumental role of LLMs in enhancing diagnostic and treatment methodologies in healthcare, innovating financial analytics, and refining legal interpretation and compliance strategies.
We critically examine the ethics for LLM applications in these fields, pointing out the existing ethical concerns and the need for transparent, fair, and robust AI systems.
arXiv Detail & Related papers (2024-05-02T22:43:02Z)
- Lazy Data Practices Harm Fairness Research [49.02318458244464]
We present a comprehensive analysis of fair ML datasets, demonstrating how unreflective practices hinder the reach and reliability of algorithmic fairness findings.
Our analyses identify three main areas of concern: (1) a lack of representation for certain protected attributes in both data and evaluations; (2) the widespread exclusion of minorities during data preprocessing; and (3) opaque data processing threatening the generalization of fairness research.
This study underscores the need for a critical reevaluation of data practices in fair ML and offers directions to improve both the sourcing and usage of datasets.
arXiv Detail & Related papers (2024-04-26T09:51:24Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Anticipating Impacts: Using Large-Scale Scenario Writing to Explore Diverse Implications of Generative AI in the News Environment [3.660182910533372]
We aim to broaden the perspective and capture the expectations of three stakeholder groups about the potential negative impacts of generative AI.
We apply scenario writing and use participatory foresight to delve into cognitively diverse imaginations of the future.
We conclude by discussing the usefulness of scenario-writing and participatory foresight as a toolbox for generative AI impact assessment.
arXiv Detail & Related papers (2023-10-10T06:59:27Z)
- Information Retrieval Meets Large Language Models: A Strategic Report from Chinese IR Community [180.28262433004113]
Large Language Models (LLMs) have demonstrated exceptional capabilities in text understanding, generation, and knowledge inference.
Together, LLMs and humans form a new, more powerful technical paradigm for information seeking.
To thoroughly discuss the transformative impact of LLMs on IR research, the Chinese IR community conducted a strategic workshop in April 2023.
arXiv Detail & Related papers (2023-07-19T05:23:43Z)
- Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z)
- Proposing an Interactive Audit Pipeline for Visual Privacy Research [0.0]
We argue for the use of fairness analysis to discover bias and fairness issues in systems, assert the need for a responsible human-over-the-loop, and reflect on the need to explore research agendas that have harmful societal impacts.
Our goal is to provide a systematic analysis of the machine learning pipeline for visual privacy and bias issues.
arXiv Detail & Related papers (2021-11-07T01:51:43Z)
- Institutionalising Ethics in AI through Broader Impact Requirements [8.793651996676095]
We reflect on a novel governance initiative by one of the world's largest AI conferences.
NeurIPS introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research.
We investigate the risks, challenges and potential benefits of such an initiative.
arXiv Detail & Related papers (2021-05-30T12:36:43Z)
- Unpacking the Expressed Consequences of AI Research in Broader Impact Statements [23.3030110636071]
We present the results of a thematic analysis of a sample of statements written for the 2020 Neural Information Processing Systems conference.
The themes we identify fall into categories related to how consequences are expressed and the areas of impact discussed.
In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
arXiv Detail & Related papers (2021-05-11T02:57:39Z)
- Like a Researcher Stating Broader Impact For the Very First Time [3.2634122554914]
This paper seeks to answer the question of how individual researchers reacted to the new requirement.
We present survey results and considerations to inform the next iteration of the broader impact requirement should it remain a requirement for future NeurIPS conferences.
arXiv Detail & Related papers (2020-11-25T21:32:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.