Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability
- URL: http://arxiv.org/abs/2310.08849v1
- Date: Fri, 13 Oct 2023 04:25:30 GMT
- Title: Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability
- Authors: Md. Tanzib Hosain, Mehedi Hasan Anik, Sadman Rafi, Rana Tabassum, Khaleque Insia, Md. Mehrab Siddiky
- Abstract summary: As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
We propose a design for user-centered, compliant-by-design transparency in AI systems.
By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) is rapidly integrating into various aspects of
our daily lives, influencing decision-making processes in areas such as
targeted advertising and matchmaking algorithms. As AI systems become
increasingly sophisticated, ensuring their transparency and explainability
becomes crucial. Functional transparency is a fundamental aspect of algorithmic
decision-making systems, allowing stakeholders to comprehend the inner workings
of these systems and enabling them to evaluate their fairness and accuracy.
However, achieving functional transparency poses significant challenges that
need to be addressed. In this paper, we propose a design for user-centered,
compliant-by-design transparency in AI systems. We emphasize that the
development of transparent and explainable AI systems is a complex and
multidisciplinary endeavor, necessitating collaboration among researchers from
diverse fields such as computer science, artificial intelligence, ethics, law,
and social science. By providing a comprehensive understanding of the
challenges associated with transparency in AI systems and proposing a
user-centered design framework, we aim to facilitate the development of AI
systems that are accountable, trustworthy, and aligned with societal values.
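The abstract proposes user-centered, compliant-by-design transparency without detailing its mechanics here. As one minimal, purely illustrative sketch (the class, its fields, and the cited clauses are assumptions, not the paper's specification), such a design might bind every stakeholder-facing disclosure to the regulation it is meant to satisfy:

```python
from dataclasses import dataclass, field

@dataclass
class TransparencyDisclosure:
    """One stakeholder-facing disclosure plus the compliance evidence behind it.

    Hypothetical structure illustrating 'compliant-by-design' transparency;
    names and fields are assumptions, not taken from the paper.
    """
    stakeholder: str                    # e.g. "end user", "auditor", "regulator"
    artifact: str                       # e.g. "plain-language decision summary"
    regulation_refs: list[str] = field(default_factory=list)  # clauses satisfied

disclosures = [
    TransparencyDisclosure("end user", "plain-language decision summary",
                           ["GDPR Art. 22"]),
    TransparencyDisclosure("auditor", "feature-attribution report",
                           ["EU AI Act Art. 13"]),
]

# "Compliant by design": refuse to ship a disclosure with no legal anchor.
assert all(d.regulation_refs for d in disclosures)
```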
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that AI systems' shortcomings stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Enhancing transparency in AI-powered customer engagement [0.0]
This paper addresses the critical challenge of building consumer trust in AI-powered customer engagement.
Despite the potential of AI to revolutionise business operations, widespread concerns about misinformation and the opacity of AI decision-making processes hinder trust.
By adopting a holistic approach to transparency and explainability, businesses can cultivate trust in AI technologies.
arXiv Detail & Related papers (2024-09-13T20:26:11Z)
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems that have generated public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- Through the Looking-Glass: Transparency Implications and Challenges in Enterprise AI Knowledge Systems [3.7640559288894524]
We present the looking-glass metaphor and use it to conceptualize AI knowledge systems as systems that reflect and distort.
We identify three transparency dimensions necessary to realize the value of AI knowledge systems, namely system transparency, procedural transparency and transparency of outcomes.
arXiv Detail & Related papers (2024-01-17T18:47:30Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems.
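To make the population-level idea concrete: one common representation-reading recipe takes the difference of mean hidden states over contrastive prompt sets and treats it as a concept direction. A minimal sketch in that spirit (shapes, names, and the random stand-in data are assumptions, not the paper's code):

```python
import numpy as np

def reading_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Direction separating two behaviors in activation space.

    pos_acts, neg_acts: (n_prompts, hidden_dim) hidden states collected at one
    layer for prompts that do / do not exhibit a target concept (e.g. honesty).
    """
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)  # difference of means
    return v / np.linalg.norm(v)                       # unit-norm direction

def concept_score(acts: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Project new activations onto the concept direction (higher = more present)."""
    return acts @ v

rng = np.random.default_rng(0)
pos = rng.normal(0.5, 1.0, size=(64, 768))   # stand-ins for real hidden states
neg = rng.normal(-0.5, 1.0, size=(64, 768))
v = reading_vector(pos, neg)
print(concept_score(pos[:3], v))             # should be mostly positive
```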
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of the problem is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is broad consensus on the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
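One simple way to quantify how such biases surface in a model's decisions is a demographic-parity check across groups defined by a sensitive attribute. A toy sketch (metric choice and data are illustrative assumptions, not the paper's testbed):

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, sensitive: np.ndarray) -> float:
    """Max difference in positive-prediction rate across sensitive groups."""
    rates = [y_pred[sensitive == g].mean() for g in np.unique(sensitive)]
    return max(rates) - min(rates)

# Toy example: binary screening decisions for two groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
gap = demographic_parity_gap(y_pred, group)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```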
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems deployed in the public and private sectors.
Many of these regulations address the transparency of AI systems and related citizen-facing protections, such as an individual's right to an explanation of how an AI system makes a decision that impacts them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z)
- A Transparency Index Framework for AI in Education [1.776308321589895]
The main contribution of this study is that it highlights the importance of transparency in developing AI-powered educational technologies.
We demonstrate how transparency enables the implementation of other ethical AI dimensions in education, such as interpretability, accountability, and safety.
arXiv Detail & Related papers (2022-05-09T10:10:47Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that must be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
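To ground the idea in the summary above: a counterfactual explainer searches for a small input change that flips the model's decision. The naive sketch below searches raw feature space greedily; CEILS itself intervenes in a learned latent space with causal structure, which this sketch deliberately omits (the function names and toy scorer are assumptions):

```python
import numpy as np

def basis_step(n, i, d):
    """Vector of length n with d at index i: a single-feature nudge."""
    v = np.zeros(n)
    v[i] = d
    return v

def greedy_counterfactual(score, x, threshold=0.5, step=0.1, max_iter=200):
    """Nudge features of x until score(x) crosses `threshold`.

    A deliberately naive baseline in raw feature space. CEILS instead
    intervenes in a latent space encoding causal relations between
    features, keeping counterfactuals feasible; this sketch does not.
    """
    x = x.astype(float).copy()
    for _ in range(max_iter):
        if score(x) >= threshold:
            return x  # desired outcome reached
        # Greedy coordinate step: take the single move that raises the score most.
        moves = [(i, d) for i in range(x.size) for d in (step, -step)]
        i, d = max(moves, key=lambda m: score(x + basis_step(x.size, *m)))
        x[i] += d
    return None  # no counterfactual found within the budget

# Toy "credit" scorer: approve when 2*income - debt - 1 > 0.
score = lambda z: 1.0 / (1.0 + np.exp(-(2 * z[0] - z[1] - 1)))
x0 = np.array([0.2, 0.5])                           # rejected applicant
print(x0, "->", greedy_counterfactual(score, x0))   # income nudged up until approved
```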
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
- Distributed and Democratized Learning: Philosophy and Research Challenges [80.39805582015133]
We propose a novel design philosophy called democratized learning (Dem-AI).
Inspired by the societal groups of humans, the specialized groups of learning agents in the proposed Dem-AI system are self-organized in a hierarchical structure to collectively perform learning tasks more efficiently.
We present a reference design as a guideline to realize future Dem-AI systems, inspired by various interdisciplinary fields.
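The hierarchy described above can be pictured as two-level model aggregation, in the spirit of hierarchical federated averaging: agents contribute to their specialized group's model, and group models combine into generalized knowledge. A toy sketch (the fixed groups and simple means are illustrative assumptions, not the Dem-AI reference design):

```python
import numpy as np

def hierarchical_average(groups: list[list[np.ndarray]]) -> np.ndarray:
    """Two-level aggregation: within-group mean, then global mean.

    groups: each inner list holds the parameter vectors of one specialized
    group of learning agents. Real Dem-AI systems self-organize these
    groups; here they are fixed for illustration.
    """
    group_models = [np.mean(g, axis=0) for g in groups]   # specialized knowledge
    return np.mean(group_models, axis=0)                  # generalized knowledge

rng = np.random.default_rng(1)
groups = [[rng.normal(c, 0.1, size=4) for _ in range(3)] for c in (0.0, 1.0)]
print(hierarchical_average(groups))  # roughly midway between the group centers
```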
arXiv Detail & Related papers (2020-03-18T08:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.