Ethical AI Governance: Methods for Evaluating Trustworthy AI
- URL: http://arxiv.org/abs/2409.07473v1
- Date: Wed, 28 Aug 2024 09:25:50 GMT
- Title: Ethical AI Governance: Methods for Evaluating Trustworthy AI
- Authors: Louise McCormack, Malika Bendechache
- Abstract summary: Trustworthy Artificial Intelligence (TAI) integrates ethics that align with human values.
TAI evaluation aims to ensure ethical standards and safety in AI development and usage.
- Score: 0.552480439325792
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Trustworthy Artificial Intelligence (TAI) integrates ethics that align with human values, looking at their influence on AI behaviour and decision-making. Primarily dependent on self-assessment, TAI evaluation aims to ensure ethical standards and safety in AI development and usage. This paper reviews the current TAI evaluation methods in the literature and offers a classification, contributing to understanding self-assessment methods in this field.
Related papers
- Where Assessment Validation and Responsible AI Meet [0.0876953078294908]
We propose a unified assessment framework that integrates classical test validation theory with assessment-specific and domain-agnostic Responsible AI (RAI) principles and practice.
The framework addresses responsible AI use for assessment that supports validity arguments, alignment with AI ethics to maintain human values and oversight, and broader social responsibility associated with AI use.
arXiv Detail & Related papers (2024-11-04T20:20:29Z)
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- The Rise of Artificial Intelligence in Educational Measurement: Opportunities and Ethical Challenges [2.569083526579529]
AI in education raises ethical concerns regarding validity, reliability, transparency, fairness, and equity.
Various stakeholders, including educators, policymakers, and organizations, have developed guidelines to ensure ethical AI use in education.
In this paper, a diverse group of AIME members examines the ethical implications of AI-powered tools in educational measurement.
arXiv Detail & Related papers (2024-06-27T05:28:40Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Measuring Value Alignment [12.696227679697493]
This paper introduces a novel formalism to quantify the alignment between AI systems and human values.
By utilizing this formalism, AI developers and ethicists can better design and evaluate AI systems to ensure they operate in harmony with human values.
arXiv Detail & Related papers (2023-12-23T12:30:06Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- How to Assess Trustworthy AI in Practice [0.22740899647050103]
Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies.
It applies the European Union High-Level Expert Group (EU HLEG) guidelines for trustworthy AI.
arXiv Detail & Related papers (2022-06-20T16:46:21Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A Framework for Ethical AI at the United Nations [0.0]
This paper aims to provide an overview of the ethical concerns in artificial intelligence (AI) and the framework that is needed to mitigate those risks.
It suggests a practical path to ensure the development and use of AI at the United Nations (UN) aligns with our ethical values.
arXiv Detail & Related papers (2021-04-09T23:44:37Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.