Users are the North Star for AI Transparency
- URL: http://arxiv.org/abs/2303.05500v1
- Date: Thu, 9 Mar 2023 18:53:29 GMT
- Title: Users are the North Star for AI Transparency
- Authors: Alex Mei, Michael Saxon, Shiyu Chang, Zachary C. Lipton, William Yang Wang
- Abstract summary: Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
- Score: 111.5679109784322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite widespread calls for transparent artificial intelligence systems, the
term is too overburdened with disparate meanings to express precise policy aims
or to orient concrete lines of research. Consequently, stakeholders often talk
past each other, with policymakers expressing vague demands and practitioners
devising solutions that may not address the underlying concerns. Part of why
this happens is that a clear ideal of AI transparency goes unsaid in this body
of work. We explicitly name such a north star -- transparency that is
user-centered, user-appropriate, and honest. We conduct a broad literature
survey, identifying many clusters of similar conceptions of transparency, tying
each back to our north star with analysis of how it furthers or hinders our
ideal AI transparency goals. We conclude with a discussion on common threads
across all the clusters, to provide clearer common language whereby
policymakers, stakeholders, and practitioners can communicate concrete demands
and deliver appropriate solutions. We hope for future work on AI transparency
that further advances confident, user-beneficial goals and provides clarity to
regulators and developers alike.
Related papers
- The Pitfalls of "Security by Obscurity" And What They Mean for Transparent AI [4.627627425427264]
We identify three key themes in the security community's perspective on the benefits of transparency.
We then provide a case study discussion on how transparency has shaped the research subfield of anonymization.
arXiv Detail & Related papers (2025-01-30T17:04:35Z)
- Transparency, Security, and Workplace Training & Awareness in the Age of Generative AI [0.0]
As AI technologies advance, ethical considerations, transparency, data privacy, and their impact on human labor intersect with the drive for innovation and efficiency.
Our research explores publicly accessible large language models (LLMs) that often operate on the periphery, away from mainstream scrutiny.
Specifically, we examine Gab AI, a platform that centers around unrestricted communication and privacy, allowing users to interact freely without censorship.
arXiv Detail & Related papers (2024-12-19T17:40:58Z)
- Enhancing transparency in AI-powered customer engagement [0.0]
This paper addresses the critical challenge of building consumer trust in AI-powered customer engagement.
Despite the potential of AI to revolutionise business operations, widespread concerns about misinformation and the opacity of AI decision-making processes hinder trust.
By adopting a holistic approach to transparency and explainability, businesses can cultivate trust in AI technologies.
arXiv Detail & Related papers (2024-09-13T20:26:11Z)
- A Confidential Computing Transparency Framework for a Comprehensive Trust Chain [7.9699781371465965]
Confidential Computing enhances the privacy of data in use through hardware-based Trusted Execution Environments (TEEs).
TEEs require user trust, as they cannot guarantee the absence of vulnerabilities or backdoors.
We propose a three-level conceptual framework providing organisations with a practical pathway to incrementally improve Confidential Computing transparency.
arXiv Detail & Related papers (2024-09-05T17:24:05Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems.
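The RepE abstract frames transparency as analyzing population-level directions in activation space rather than inspecting individual neurons. As a rough, generic illustration of that idea (a minimal sketch with synthetic arrays standing in for real model hidden states, not the paper's actual method), one can derive a "reading vector" from the difference of mean activations between two contrasting prompt sets and project new activations onto it:

```python
import numpy as np

# Synthetic stand-ins for hidden-state activations (n_samples x hidden_dim)
# collected on two contrasting prompt sets, e.g. concept-exhibiting vs.
# neutral prompts. Real usage would extract these from a transformer layer.
rng = np.random.default_rng(0)
hidden_dim = 64
acts_concept = rng.normal(0.5, 1.0, size=(100, hidden_dim))   # concept prompts
acts_baseline = rng.normal(0.0, 1.0, size=(100, hidden_dim))  # neutral prompts

# Population-level "reading vector": the difference of class means,
# normalized to unit length. The representation is treated as a direction
# in activation space rather than a property of single neurons.
direction = acts_concept.mean(axis=0) - acts_baseline.mean(axis=0)
direction /= np.linalg.norm(direction)

def concept_score(activation: np.ndarray) -> float:
    """Project one activation vector onto the concept direction."""
    return float(activation @ direction)

# Score a new (synthetic) activation: higher means more concept-aligned.
new_activation = rng.normal(0.5, 1.0, size=hidden_dim)
print(f"concept score: {concept_score(new_activation):.3f}")
```

In practice the activations would come from intermediate layers of a model, and the projection serves as a scalar probe for the concept of interest.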
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- AI Transparency in the Age of LLMs: A Human-Centered Research Roadmap [46.98582021477066]
The rise of powerful large language models (LLMs) brings about tremendous opportunities for innovation but also looming risks for individuals and society at large.
We have reached a pivotal moment for ensuring that LLMs and LLM-infused applications are developed and deployed responsibly.
It is paramount to pursue new approaches to provide transparency for LLMs.
arXiv Detail & Related papers (2023-06-02T22:51:26Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
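One generic way to probe how a sensitive element influences a model's output, sketched below under stated assumptions (a toy linear scorer with synthetic weights and data, not the paper's experimental setup), is an ablation check: score inputs with and without the sensitive feature and compare.

```python
import numpy as np

# Toy linear scorer standing in for a trained model; the weights and the
# feature layout are synthetic, purely for illustration.
rng = np.random.default_rng(1)
n_features = 10
sensitive_idx = 3  # hypothetical position of a sensitive attribute
weights = rng.normal(size=n_features)

def score(x: np.ndarray) -> np.ndarray:
    """Score a batch of candidates (n_samples x n_features)."""
    return x @ weights

# Ablation probe: zero out the sensitive feature and measure how far the
# scores move. A large average shift suggests the model leans on it.
candidates = rng.normal(size=(500, n_features))
ablated = candidates.copy()
ablated[:, sensitive_idx] = 0.0

shift = np.abs(score(candidates) - score(ablated)).mean()
print(f"mean score shift from ablating sensitive feature: {shift:.3f}")
```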
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems implemented into the public and private sectors.
Many of these regulations address the transparency of AI systems, and related citizen-aware issues like allowing individuals to have the right to an explanation about how an AI system makes a decision that impacts them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)