Through the Looking-Glass: Transparency Implications and Challenges in
Enterprise AI Knowledge Systems
- URL: http://arxiv.org/abs/2401.09410v1
- Date: Wed, 17 Jan 2024 18:47:30 GMT
- Title: Through the Looking-Glass: Transparency Implications and Challenges in
Enterprise AI Knowledge Systems
- Authors: Karina Cortiñas-Lorenzo, Siân Lindley, Ida Larsen-Ledet and Bhaskar Mitra
- Abstract summary: We present the looking-glass metaphor and use it to conceptualize AI knowledge systems as systems that reflect and distort.
We identify three transparency dimensions necessary to realize the value of AI knowledge systems, namely system transparency, procedural transparency and transparency of outcomes.
- Score: 3.7640559288894524
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge can't be disentangled from people. As AI knowledge systems mine
vast volumes of work-related data, the knowledge that's being extracted and
surfaced is intrinsically linked to the people who create and use it. When
these systems get embedded in organizational settings, the information that is
brought to the foreground and the information that's pushed to the periphery
can influence how individuals see each other and how they see themselves at
work. In this paper, we present the looking-glass metaphor and use it to
conceptualize AI knowledge systems as systems that reflect and distort,
expanding our view on transparency requirements, implications and challenges.
We formulate transparency as a key mediator in shaping different ways of
seeing, including seeing into the system, which unveils its capabilities,
limitations and behavior, and seeing through the system, which shapes workers'
perceptions of their own contributions and those of others within the organization.
Recognizing the sociotechnical nature of these systems, we identify three
transparency dimensions necessary to realize the value of AI knowledge systems,
namely system transparency, procedural transparency and transparency of
outcomes. We discuss key challenges hindering the implementation of these forms
of transparency, bringing to light the wider sociotechnical gap and
highlighting directions for future Computer-supported Cooperative Work (CSCW)
research.
Related papers
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems that have generated public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- Visual Knowledge in the Big Model Era: Retrospect and Prospect [63.282425615863]
Visual knowledge is a new form of knowledge representation that can encapsulate visual concepts and their relations in a succinct, comprehensive, and interpretable manner.
As the knowledge about the visual world has been identified as an indispensable component of human cognition and intelligence, visual knowledge is poised to have a pivotal role in establishing machine intelligence.
arXiv Detail & Related papers (2024-04-05T07:31:24Z)
- Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability [0.0]
As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
We propose a user-centered, compliant-by-design approach to transparency in AI systems.
By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
arXiv Detail & Related papers (2023-10-13T04:25:30Z)
- Representation Engineering: A Top-Down Approach to AI Transparency [132.0398250233924]
We identify and characterize the emerging area of representation engineering (RepE).
RepE places population-level representations, rather than neurons or circuits, at the center of analysis.
We showcase how these methods can provide traction on a wide range of safety-relevant problems; a minimal reading-vector sketch appears after this list.
arXiv Detail & Related papers (2023-10-02T17:59:07Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Painting the black box white: experimental findings from applying XAI to an ECG reading setting [0.13124513975412253]
The shift from symbolic AI systems to black-box, sub-symbolic, and statistical ones has motivated a rapid increase in interest in explainable AI (XAI).
We focus on the cognitive dimension of users' perception of explanations and XAI systems.
arXiv Detail & Related papers (2022-10-27T07:47:50Z)
- Think About the Stakeholders First! Towards an Algorithmic Transparency Playbook for Regulatory Compliance [14.043062659347427]
Laws are being proposed and passed by governments around the world to regulate Artificial Intelligence (AI) systems deployed in the public and private sectors.
Many of these regulations address the transparency of AI systems and related citizen-facing concerns, such as giving individuals the right to an explanation of how an AI system makes decisions that affect them.
We propose a novel stakeholder-first approach that assists technologists in designing transparent, regulatory compliant systems.
arXiv Detail & Related papers (2022-06-10T09:39:00Z)
- Adaptive cognitive fit: Artificial intelligence augmented management of information facets and representations [62.997667081978825]
Explosive growth in big data technologies and artificial intelligence (AI) applications has led to the increasing pervasiveness of information facets.
Information facets, such as equivocality and veracity, can dominate and significantly influence human perceptions of information.
We suggest that artificially intelligent technologies that can adapt information representations to overcome cognitive limitations are necessary.
arXiv Detail & Related papers (2022-04-25T02:47:25Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems; a toy example of surfacing predictive uncertainty appears after this list.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Applying Transparency in Artificial Intelligence based Personalization Systems [5.671950073691286]
Increasing transparency is an important goal for personalization-based systems.
We combine insights from technology ethics and computer science to generate a list of transparency best practices for machine generated personalization.
Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems; an illustrative encoding of such a checklist appears after this list.
arXiv Detail & Related papers (2020-04-02T11:07:38Z)
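As a companion to the Representation Engineering entry above, here is a minimal, hypothetical sketch of one population-level "reading" technique: fit a difference-of-means direction in activation space from contrastive examples, then score new activations by projection. The synthetic activations and the `concept_score` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Synthetic stand-ins for hidden states gathered under contrastive prompts
# (hypothetical; a real pipeline would read a language model's layer activations).
concept_pos = rng.normal(loc=0.5, scale=1.0, size=(200, dim))
concept_neg = rng.normal(loc=-0.5, scale=1.0, size=(200, dim))

# One simple population-level "reading vector": the difference of class means,
# normalized to unit length.
direction = concept_pos.mean(axis=0) - concept_neg.mean(axis=0)
direction /= np.linalg.norm(direction)

def concept_score(activations: np.ndarray) -> np.ndarray:
    """Project activations onto the reading vector; higher = more concept-positive."""
    return activations @ direction

# Score a few unseen activation vectors.
new_acts = rng.normal(loc=0.5, scale=1.0, size=(5, dim))
print(concept_score(new_acts))
```

The design choice this illustrates, per the abstract's framing, is analyzing representations at the population level rather than at the level of individual neurons or circuits.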
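For the "Uncertainty as a Form of Transparency" entry, a toy sketch of one common pattern: compute predictive entropy from ensemble-averaged class probabilities and flag high-entropy inputs for human review. The ensemble shape, logits, and threshold below are made-up assumptions for illustration, not the paper's method.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """Entropy (in nats) of class probabilities; higher means less certain."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

rng = np.random.default_rng(1)
# Hypothetical logits from a 3-member ensemble over 4 inputs and 3 classes.
ensemble_logits = rng.normal(size=(3, 4, 3))
probs = softmax(ensemble_logits).mean(axis=0)  # average over ensemble members

entropy = predictive_entropy(probs)
threshold = 0.9  # illustrative cutoff; real systems tune this per application
for i, (p, h) in enumerate(zip(probs, entropy)):
    action = "defer to a human" if h > threshold else f"predict class {p.argmax()}"
    print(f"input {i}: entropy={h:.2f} -> {action}")
```

Communicating the entropy alongside the prediction, rather than the prediction alone, is one concrete way uncertainty can serve as a form of transparency.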
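And for the personalization-transparency entry, a minimal, hypothetical encoding of a designer-facing checklist; the items are illustrative paraphrases, not the paper's actual best practices.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str
    satisfied: bool

def transparency_report(items: list) -> str:
    """Render a simple pass/fail summary a design team could review."""
    met = sum(item.satisfied for item in items)
    lines = [f"[{'x' if item.satisfied else ' '}] {item.question}" for item in items]
    lines.append(f"{met}/{len(items)} practices met")
    return "\n".join(lines)

# Illustrative items only; the paper derives its list from technology ethics.
checklist = [
    ChecklistItem("Is the user told that content is personalized?", True),
    ChecklistItem("Can the user see which data drive the personalization?", False),
    ChecklistItem("Can the user correct or delete that data?", False),
]
print(transparency_report(checklist))
```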