Signs for Ethical AI: A Route Towards Transparency
- URL: http://arxiv.org/abs/2009.13871v2
- Date: Mon, 9 May 2022 13:52:35 GMT
- Authors: Dario Garcia-Gasulla, Atia Cortés, Sergio Alvarez-Napagao, Ulises Cortés
- Abstract summary: We propose a transparency scheme to be implemented on any AI system open to the public.
The first recognizes the relevance of data for AI and is supported by the GDPR.
The second considers aspects of AI transparency that are currently unregulated: AI capabilities, purpose, and source.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today, Artificial Intelligence (AI) has a direct impact on the daily life of
billions of people. Being applied to sectors like finance, health, security and
advertisement, AI fuels some of the biggest companies and research institutions
in the world. Its impact in the near future seems difficult to predict or
bound. In contrast to all this power, society remains mostly ignorant of the
capabilities and standard practices of AI today. To address this imbalance and
improve current interactions between people and AI systems, we propose a
transparency scheme to be implemented on any AI system open to the public. The
scheme is based on two pillars: Data Privacy and AI Transparency. The first
recognizes the relevance of data for AI, and is supported by GDPR. The second
considers aspects of AI transparency currently unregulated: AI capabilities,
purpose and source. We design this pillar based on ethical principles. For each
of the two pillars, we define a three-level display. The first level is based
on visual signs, inspired by traffic signs managing the interaction between
people and cars, and designed for quick and universal interpretability. The
second level uses factsheets, providing limited details. The last level
provides access to all available information. After detailing and exemplifying
the proposed transparency scheme, we define a set of principles for creating
transparent-by-design software, to be used when integrating AI components
into user-oriented services.
Related papers
- OML: Open, Monetizable, and Loyal AI [39.63122342758896]
We propose OML, which stands for Open, Monetizable, and Loyal AI.
OML is an approach designed to democratize AI development.
Key innovation of our work is introducing a new scientific field: AI-native cryptography.
arXiv Detail & Related papers (2024-11-01T18:46:03Z)
- Participatory Approaches in AI Development and Governance: A Principled Approach [9.271573427680087]
This paper forms the first part of a two-part series on participatory governance in AI.
It advances the premise that a participatory approach is beneficial to building and using more responsible, safe, and human-centric AI systems.
arXiv Detail & Related papers (2024-06-03T09:49:42Z)
- Trust, Accountability, and Autonomy in Knowledge Graph-based AI for Self-determination [1.4305544869388402]
Knowledge Graphs (KGs) have emerged as fundamental platforms for powering intelligent decision-making.
The integration of KGs with neural learning is currently a topic of active research.
This paper conceptualises the foundational topics and research pillars to support KG-based AI for self-determination.
arXiv Detail & Related papers (2023-10-30T12:51:52Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- FATE in AI: Towards Algorithmic Inclusivity and Accessibility [0.0]
To prevent algorithmic disparities, fairness, accountability, transparency, and ethics (FATE) in AI are being implemented.
This study examines FATE-related desiderata, particularly transparency and ethics, in areas of the global South that are underserved by AI.
To promote inclusivity, a community-led strategy is proposed to collect and curate representative data for responsible AI design.
arXiv Detail & Related papers (2023-01-03T15:08:10Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems implemented into the public and private sectors.
Many of these regulations address the transparency of AI systems, and related citizen-aware issues like allowing individuals to have the right to an explanation about how an AI system makes a decision that impacts them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.