Ethical Concern Identification in NLP: A Corpus of ACL Anthology Ethics Statements
- URL: http://arxiv.org/abs/2411.07845v1
- Date: Tue, 12 Nov 2024 14:53:12 GMT
- Title: Ethical Concern Identification in NLP: A Corpus of ACL Anthology Ethics Statements
- Authors: Antonia Karamolegkou, Sandrine Schiller Hansen, Ariadni Christopoulou, Filippos Stamatiou, Anne Lauscher, Anders Søgaard
- Abstract summary: We introduce EthiCon, a corpus of 1,580 ethical concern statements extracted from scientific papers published in the ACL Anthology.
Through a survey, we compare the ethical concerns of the corpus to the concerns listed by the general public and professionals in the field.
- Abstract: What ethical concerns, if any, do LLM researchers have? We introduce EthiCon, a corpus of 1,580 ethical concern statements extracted from scientific papers published in the ACL Anthology. We extract ethical concern keywords from the statements and show promising results in automating the concern identification process. Through a survey, we compare the ethical concerns of the corpus to the concerns listed by the general public and professionals in the field. Finally, we compare our retrieved ethical concerns with existing taxonomies, pointing to gaps and future research directions.
Related papers
- The ethical landscape of robot-assisted surgery. A systematic review
The ethical issues of robot-assisted surgery have received comparatively little attention.
Seven major strands of the ethical debate emerged during analysis.
These include questions of harms and benefits, responsibility and control, professional-patient relationship, ethical issues in surgical training and learning, justice, translational questions, and economic considerations.
arXiv Detail & Related papers (2024-11-18T15:15:24Z)
- Ethics Whitepaper: Whitepaper on Ethical Research into Large Language Models
This whitepaper offers an overview of the ethical considerations surrounding research into or with large language models (LLMs).
As LLMs become more integrated into widely used applications, their societal impact increases, bringing important ethical questions to the forefront.
arXiv Detail & Related papers (2024-10-17T18:36:02Z)
- Quelle éthique pour quelle IA ? (What Ethics for What AI?)
This study proposes an analysis of the different types of ethical approaches involved in the ethics of AI.
The author introduces the contemporary need for, and meaning of, ethics, distinguishes it from other registers of normativity, and underlines its resistance to formalization.
The study concludes with a reflection on the reasons why a human ethics of AI based on a pragmatic practice of contextual ethics remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.
arXiv Detail & Related papers (2024-05-21T08:13:02Z)
- Eagle: Ethical Dataset Given from Real Interactions
We create a dataset, Eagle, extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral content.
Our experiments show that Eagle captures complementary aspects, not covered by existing datasets proposed for evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z)
- EALM: Introducing Multidimensional Ethical Alignment in Conversational Information Retrieval
We introduce a workflow that integrates ethical alignment with an initial ethical judgment stage for efficient data screening.
We present the QA-ETHICS dataset adapted from the ETHICS benchmark, which serves as an evaluation tool by unifying scenarios and label meanings.
In addition, we suggest a new approach that achieves top performance in both binary and multi-label ethical judgment tasks.
arXiv Detail & Related papers (2023-10-02T08:22:34Z)
- Towards A Unified Utilitarian Ethics Framework for Healthcare Artificial Intelligence
This study attempts to identify the major ethical principles influencing the utility performance of AI at different technological levels.
Justice, privacy, bias, lack of regulations, risks, and interpretability are the most important principles to consider for ethical AI.
We propose a new utilitarian ethics-based theoretical framework for designing ethical AI for the healthcare domain.
arXiv Detail & Related papers (2023-09-26T02:10:58Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Use of Formal Ethical Reviews in NLP Literature: Historical Trends and Current Practices
Ethical aspects of research in language technologies have received much attention recently.
It is standard practice to have a study involving human subjects reviewed and approved by the institution's professional ethics committee or board.
With the rising concerns and discourse around the ethics of NLP, do we also observe a rise in formal ethical reviews of NLP studies?
arXiv Detail & Related papers (2021-06-02T12:12:59Z)
- Case Study: Deontological Ethics in NLP
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.