Applying Transparency in Artificial Intelligence based Personalization Systems
- URL: http://arxiv.org/abs/2004.00935v2
- Date: Fri, 21 Aug 2020 13:49:54 GMT
- Title: Applying Transparency in Artificial Intelligence based Personalization Systems
- Authors: Laura Schelenz, Avi Segal, and Kobi Gal
- Abstract summary: Increasing transparency is an important goal for personalization-based systems.
We combine insights from technology ethics and computer science to generate a list of transparency best practices for machine-generated personalization.
Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems.
- Score: 5.671950073691286
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence-based systems increasingly use personalization to provide users with relevant content, products, and solutions. Personalization is intended to support users and address their respective needs and preferences. However, users are becoming increasingly vulnerable to online manipulation due to algorithmic advancements and a lack of transparency. Such manipulation decreases users' trust, autonomy, and satisfaction with the systems they interact with. Increasing transparency is an important goal for personalization-based systems. Unfortunately, system designers lack guidance in assessing and implementing transparency in the systems they develop.
In this work we combine insights from technology ethics and computer science to generate a list of transparency best practices for machine-generated personalization. Based on these best practices, we develop a checklist to be used by designers wishing to evaluate and increase the transparency of their algorithmic systems. Adopting a designer's perspective, we apply the checklist to prominent online services and discuss its advantages and shortcomings. We encourage researchers to adopt the checklist in various environments and to work towards a consensus-based tool for measuring transparency in the personalization community.
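The paper presents its checklist in prose rather than code. As a rough illustration of how a designer might operationalize such a checklist, here is a minimal Python sketch; the item texts and the simple coverage score are assumptions for demonstration, not the authors' published checklist.

```python
# Illustrative sketch only: item wording and the pass/fail coverage score
# are invented for demonstration, not taken from the paper's checklist.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str    # a yes/no question the designer answers about the system
    satisfied: bool  # whether the system currently meets the practice

def transparency_report(items: list[ChecklistItem]) -> None:
    """Print which practices a system meets and a simple coverage ratio."""
    met = [item for item in items if item.satisfied]
    for item in items:
        print(("[x] " if item.satisfied else "[ ] ") + item.question)
    print(f"Coverage: {len(met)}/{len(items)} practices met")

# Hypothetical items, loosely inspired by common transparency guidance:
checklist = [
    ChecklistItem("Are users told that content is personalized for them?", True),
    ChecklistItem("Is the data used for personalization disclosed?", False),
    ChecklistItem("Can users adjust or opt out of personalization?", True),
]
transparency_report(checklist)
```

A real evaluation would replace the boolean with graded evidence per item, but even this binary form makes gaps in a system's transparency visible at a glance.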
Related papers
- A Room With an Overview: Towards Meaningful Transparency for the Consumer Internet of Things [5.536922793483742]
This paper explores the practical dimensions to transparency mechanisms within the consumer IoT.
We consider how smart homes might be made more meaningfully transparent, so as to support users in gaining greater understanding, oversight, and control.
arXiv Detail & Related papers (2024-01-19T13:00:36Z)
- Through the Looking-Glass: Transparency Implications and Challenges in Enterprise AI Knowledge Systems [3.7640559288894524]
We present the looking-glass metaphor and use it to conceptualize AI knowledge systems as systems that reflect and distort.
We identify three transparency dimensions necessary to realize the value of AI knowledge systems, namely system transparency, procedural transparency and transparency of outcomes.
arXiv Detail & Related papers (2024-01-17T18:47:30Z)
- Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability [0.0]
As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
We propose a user-centered, compliant-by-design approach to transparency in AI systems.
By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
arXiv Detail & Related papers (2023-10-13T04:25:30Z)
- Transparent Object Tracking with Enhanced Fusion Module [56.403878717170784]
We propose a new tracker architecture that uses our fusion techniques to achieve superior results for transparent object tracking.
Our results and code will be made publicly available at https://github.com/kalyan05TOTEM.
arXiv Detail & Related papers (2023-09-13T03:52:09Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that the required architectural changes can be made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
This is partly because a clear ideal of AI transparency goes unstated in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems [69.24490096929709]
We developed an open source Python package called FAT Forensics.
It can inspect important fairness, accountability and transparency aspects of predictive algorithms.
Our toolbox can evaluate all elements of a predictive pipeline (a sketch of the kind of fairness check such a toolbox automates appears after this list).
arXiv Detail & Related papers (2022-09-08T13:25:02Z)
- INTRPRT: A Systematic Review of and Guidelines for Designing and Validating Transparent AI in Medical Image Analysis [5.3613726625503215]
From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between the algorithm and the user.
Following human-centered design principles in healthcare and medical image analysis is challenging due to the limited availability of and access to end users.
We introduce the INTRPRT guideline, a systematic design directive for transparent ML systems in medical image analysis.
arXiv Detail & Related papers (2021-12-21T05:14:44Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design that incorporates research on user trust and experience.
The framework supports developing software with transparency built into its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- A framework for fostering transparency in shared artificial intelligence models by increasing visibility of contributions [0.6850683267295249]
This paper presents a novel method for deriving a quantifiable metric capable of ranking the overall transparency of the process pipelines used to generate AI systems.
We evaluate the methodology for calculating the metric and the types of criteria that could be used to judge the visibility of contributions to a system (a minimal aggregation sketch appears after this list).
arXiv Detail & Related papers (2021-03-05T11:28:50Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
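As referenced in the FAT Forensics entry above: the sketch below does not use the fatf API (consult the package documentation for that). It is a self-contained illustration of one standard check such a toolbox can automate for a predictive pipeline, the demographic-parity gap of a model's predictions across groups; the data is invented for demonstration.

```python
import numpy as np

# Illustrative only: the demographic-parity gap is a standard group-fairness
# measure; the predictions and group labels below are made up.
def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # model's binary decisions
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute per instance
print(f"Demographic parity gap: {demographic_parity_gap(preds, group):.2f}")
```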
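As referenced in the shared-models framework entry above: that paper derives a quantifiable pipeline-level transparency metric. Its actual criteria are not reproduced here; the following is a minimal sketch assuming a weighted average of per-stage visibility scores, with all stage names, weights, and scores invented for illustration.

```python
# Illustrative sketch: one way a pipeline-level transparency score could be
# aggregated from per-stage visibility judgements. Stages, weights, and the
# 0-1 visibility scores are assumptions, not the paper's actual criteria.
def pipeline_transparency(stages: dict[str, tuple[float, float]]) -> float:
    """Weighted average of per-stage visibility scores (each in [0, 1])."""
    total_weight = sum(weight for weight, _ in stages.values())
    return sum(weight * score for weight, score in stages.values()) / total_weight

stages = {
    "data collection": (0.3, 0.8),  # (weight, visibility score)
    "labeling":        (0.2, 0.4),
    "model training":  (0.3, 0.6),
    "deployment":      (0.2, 0.2),
}
print(f"Pipeline transparency: {pipeline_transparency(stages):.2f}")
```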
This list is automatically generated from the titles and abstracts of the papers on this site.