Responsible Artificial Intelligence -- from Principles to Practice
- URL: http://arxiv.org/abs/2205.10785v1
- Date: Sun, 22 May 2022 09:28:54 GMT
- Title: Responsible Artificial Intelligence -- from Principles to Practice
- Authors: Virginia Dignum
- Abstract summary: AI is changing the way we work, live and solve challenges.
But concerns about fairness, transparency or privacy are also growing.
Ensuring responsible, ethical AI is more than designing systems whose result can be trusted.
- Score: 5.5586788751870175
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The impact of Artificial Intelligence does not depend only on fundamental
research and technological developments, but for a large part on how these
systems are introduced into society and used in everyday situations. AI is
changing the way we work, live and solve challenges but concerns about
fairness, transparency or privacy are also growing. Ensuring responsible,
ethical AI is more than designing systems whose result can be trusted. It is
about the way we design them, why we design them, and who is involved in
designing them. In order to develop and use AI responsibly, we need to work
towards technical, societal, institutional and legal methods and tools which
provide concrete support to AI practitioners, as well as awareness and training
to enable participation of all, to ensure the alignment of AI systems with our
societies' principles and values.
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that the shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- A Review of the Ethics of Artificial Intelligence and its Applications in the United States [0.0]
The paper highlights the impact AI has in every sector of the US economy and the resultant effect on entities spanning businesses, government, academia, and civil society.
Our discussion explores eleven fundamental 'ethical principles' structured as overarching themes.
These encompass Transparency; Justice, Fairness, and Equity; Non-Maleficence; Responsibility and Accountability; Privacy; Beneficence; Freedom and Autonomy; Trust; Dignity; Sustainability; and Solidarity.
arXiv Detail & Related papers (2023-10-09T14:29:00Z)
- Inherent Limitations of AI Fairness [16.588468396705366]
The study of AI fairness has rapidly developed into a rich field of research with links to computer science, social science, law, and philosophy.
Many technical solutions for measuring and achieving AI fairness have been proposed, yet their approach has been criticized in recent years for being misleading, unrealistic and harmful.
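To give a concrete sense of what "measuring AI fairness" typically involves, here is a minimal illustrative sketch of one widely used group-fairness metric, the demographic parity difference. It is not code from the cited paper; the function name and toy data are hypothetical, and the sketch assumes binary predictions and a binary protected attribute. Critiques like those the paper surveys often target exactly such metrics, which compress fairness into a single number.

```python
# Illustrative sketch (not from the cited paper): demographic parity difference,
# a common group-fairness metric. Assumes binary predictions and a binary
# protected-group label; names and data are hypothetical.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy example: predictions for six individuals split across two groups.
preds  = [1, 0, 1, 1, 0, 0]
groups = [0, 0, 0, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # ~0.33, i.e. far from parity
```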
arXiv Detail & Related papers (2022-12-13T11:23:24Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow the examination and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Software Engineering for Responsible AI: An Empirical Study and Operationalised Patterns [20.747681252352464]
We propose a template that enables AI ethics principles to be operationalised in the form of concrete patterns.
These patterns provide concrete, operationalised guidance that facilitates the development of responsible AI systems.
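Purely as an illustration of what operationalising an ethics principle as a concrete, checkable pattern could look like in practice, the sketch below records a principle alongside an engineering practice and the evidence expected for it. The EthicsPattern class and its fields are assumptions made for illustration, not the template proposed in the cited paper.

```python
# Hypothetical illustration of recording an AI ethics principle as an
# operationalised pattern; the fields are assumptions for illustration,
# not the template from the cited paper.
from dataclasses import dataclass, field

@dataclass
class EthicsPattern:
    principle: str                  # high-level principle, e.g. "Accountability"
    context: str                    # when the pattern applies
    practice: str                   # concrete engineering practice to follow
    evidence: list[str] = field(default_factory=list)  # artefacts showing compliance

logging_pattern = EthicsPattern(
    principle="Accountability",
    context="Model decisions directly affect end users",
    practice="Log model version, inputs, and outputs for every prediction",
    evidence=["audit log schema", "retention policy document"],
)
print(logging_pattern.principle, "->", logging_pattern.practice)
```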
arXiv Detail & Related papers (2021-11-18T02:18:27Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection, among other problems.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help surface and communicate these differing stakeholder perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper provides a comprehensive analysis of existing concepts of intelligence drawn from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges [31.382000425295885]
Technologists and AI researchers have a responsibility to develop trustworthy AI systems.
To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness.
arXiv Detail & Related papers (2021-01-01T17:34:42Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.