EALM: Introducing Multidimensional Ethical Alignment in Conversational
Information Retrieval
- URL: http://arxiv.org/abs/2310.00970v1
- Date: Mon, 2 Oct 2023 08:22:34 GMT
- Title: EALM: Introducing Multidimensional Ethical Alignment in Conversational
Information Retrieval
- Authors: Yiyao Yu, Junjie Wang, Yuxiang Zhang, Lin Zhang, Yujiu Yang, Tetsuya
Sakai
- Abstract summary: We introduce a workflow that integrates ethical alignment with an initial ethical judgment stage for efficient data screening.
We present the QA-ETHICS dataset adapted from the ETHICS benchmark, which serves as an evaluation tool by unifying scenarios and label meanings.
In addition, we suggest a new approach that achieves top performance in both binary and multi-label ethical judgment tasks.
- Score: 43.72331337131317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) technologies should adhere to human norms to
better serve our society and avoid disseminating harmful or misleading
information, particularly in Conversational Information Retrieval (CIR).
Previous work, including approaches and datasets, has not always been
successful or sufficiently robust in taking human norms into consideration. To
this end, we introduce a workflow that integrates ethical alignment, with an
initial ethical judgment stage for efficient data screening. To address the
need for ethical judgment in CIR, we present the QA-ETHICS dataset, adapted
from the ETHICS benchmark, which serves as an evaluation tool by unifying
scenarios and label meanings. However, each QA-ETHICS scenario considers only one
ethical concept. We therefore introduce the MP-ETHICS dataset to evaluate a scenario
under multiple ethical concepts, such as justice and deontology. In addition,
we suggest a new approach that achieves top performance in both binary and
multi-label ethical judgment tasks. Our research provides a practical method
for introducing ethical alignment into the CIR workflow. The data and code are
available at https://github.com/wanng-ide/ealm .
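The abstract describes an initial ethical judgment stage that screens data before it enters the CIR pipeline, covering both binary and multi-label judgments over concepts such as justice and deontology. As a rough illustration only, the sketch below shows what such a screening step could look like with a generic multi-label transformer classifier. The class name EthicalScreen, the bert-base-uncased backbone, the concept list, and the 0.5 threshold are placeholder assumptions, not the paper's released models, which live in the linked repository.

```python
# Minimal sketch of an ethical-judgment screening stage for a CIR pipeline.
# Assumptions: backbone, labels, and threshold are placeholders; the classification
# head here is untrained, whereas EALM uses fine-tuned judgment models.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Ethical concepts in the ETHICS benchmark family (multi-label setting).
ETHICAL_CONCEPTS = ["justice", "deontology", "virtue", "utilitarianism", "commonsense"]

class EthicalScreen:
    def __init__(self, model_name: str = "bert-base-uncased", threshold: float = 0.5):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForSequenceClassification.from_pretrained(
            model_name,
            num_labels=len(ETHICAL_CONCEPTS),
            problem_type="multi_label_classification",
        )
        self.threshold = threshold

    @torch.no_grad()
    def judge(self, scenario: str) -> dict:
        """Return per-concept violation probabilities for one scenario."""
        inputs = self.tokenizer(scenario, return_tensors="pt", truncation=True)
        probs = torch.sigmoid(self.model(**inputs).logits).squeeze(0)
        return dict(zip(ETHICAL_CONCEPTS, probs.tolist()))

    def passes(self, scenario: str) -> bool:
        """Screening rule: keep a candidate only if no concept exceeds the threshold."""
        return all(p < self.threshold for p in self.judge(scenario).values())

# Usage: filter candidate system responses before they reach the user.
screen = EthicalScreen()
candidates = [
    "I found three flights matching your dates.",
    "You should lie about your age to get the discount.",
]
safe_responses = [c for c in candidates if screen.passes(c)]
```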
Related papers
- Eagle: Ethical Dataset Given from Real Interactions [74.7319697510621]
We create datasets extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral problems.
Our experiments show that Eagle captures complementary aspects not covered by existing datasets proposed for the evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z)
- Unpacking the Ethical Value Alignment in Big Models [46.560886177083084]
This paper provides an overview of the risks and challenges associated with big models, surveys existing AI ethics guidelines, and examines the ethical implications arising from the limitations of these models.
We introduce a novel conceptual paradigm for aligning the ethical values of big models and discuss promising research directions for alignment criteria, evaluation, and method.
arXiv Detail & Related papers (2023-10-26T16:45:40Z)
- Applying Standards to Advance Upstream & Downstream Ethics in Large Language Models [0.0]
This paper explores how AI-owners can develop safeguards for AI-generated content.
It draws from established codes of conduct and ethical standards in other content-creation industries.
arXiv Detail & Related papers (2023-06-06T08:47:42Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Achieving a Data-driven Risk Assessment Methodology for Ethical AI [3.523208537466128]
We show that a multidisciplinary research approach is the foundation of a pragmatic definition of ethical and societal risks faced by organizations using AI.
We propose a novel data-driven risk assessment methodology, entitled DRESS-eAI.
arXiv Detail & Related papers (2021-11-29T12:55:33Z)
- Ethics Sheets for AI Tasks [25.289525325790414]
I will make a case for thinking about ethical considerations not just at the level of individual models and datasets, but also at the level of AI tasks.
I will present a new form of such an effort, Ethics Sheets for AI Tasks, dedicated to fleshing out the assumptions and ethical considerations hidden in how a task is commonly framed.
arXiv Detail & Related papers (2021-07-02T16:45:40Z)
- An Ethical Highlighter for People-Centric Dataset Creation [62.886916477131486]
We propose an analytical framework to guide ethical evaluation of existing datasets and to serve future dataset creators in avoiding missteps.
Our work is informed by a review and analysis of prior works and highlights where such ethical challenges arise.
arXiv Detail & Related papers (2020-11-27T07:18:44Z)
- Implementing AI Ethics in Practice: An Empirical Evaluation of the RESOLVEDD Strategy [6.7298812735467095]
We empirically evaluate an existing method from the field of business ethics, the RESOLVEDD strategy, in the context of ethical system development.
One of our key findings is that, even though the use of the ethical method was forced upon the participants, its utilization nonetheless facilitated ethical consideration in the projects.
arXiv Detail & Related papers (2020-04-21T17:58:53Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)