Applying Transparency in Artificial Intelligence based Personalization Systems
- URL: http://arxiv.org/abs/2004.00935v2
- Date: Fri, 21 Aug 2020 13:49:54 GMT
- Title: Applying Transparency in Artificial Intelligence based Personalization Systems
- Authors: Laura Schelenz, Avi Segal, and Kobi Gal
- Abstract summary: Increasing transparency is an important goal for personalization-based systems. We combine insights from technology ethics and computer science to generate a list of transparency best practices for machine-generated personalization. Based on these best practices, we develop a checklist for designers wishing to evaluate and increase the transparency of their algorithmic systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence-based systems increasingly use personalization to
provide users with relevant content, products, and solutions. Personalization
is intended to support users and address their respective needs and
preferences. However, users are becoming increasingly vulnerable to online
manipulation due to algorithmic advancements and lack of transparency. Such
manipulation decreases users' levels of trust, autonomy, and satisfaction
concerning the systems with which they interact. Increasing transparency is an
important goal for personalization-based systems. Unfortunately, system
designers lack guidance in assessing and implementing transparency in their
developed systems.
In this work we combine insights from technology ethics and computer science
to generate a list of transparency best practices for machine generated
personalization. Based on these best practices, we develop a checklist to be
used by designers wishing to evaluate and increase the transparency of their
algorithmic systems. Adopting a designer perspective, we apply the checklist to
prominent online services and discuss its advantages and shortcomings. We
encourage researchers to adopt the checklist in various environments and to
work towards a consensus-based tool for measuring transparency in the
personalization community.
Related papers
- GUI Agents: A Survey [129.94551809688377] (arXiv, 2024-12-18)
  Graphical User Interface (GUI) agents, powered by Large Foundation Models, have emerged as a transformative approach to automating human-computer interaction.
  Motivated by the growing interest and fundamental importance of GUI agents, we provide a comprehensive survey that categorizes their benchmarks, evaluation metrics, architectures, and training methods.
- A Confidential Computing Transparency Framework for a Comprehensive Trust Chain [7.9699781371465965] (arXiv, 2024-09-05)
  Confidential Computing enhances privacy of data in use through hardware-based Trusted Execution Environments (TEEs).
  TEEs require user trust, as they cannot guarantee the absence of vulnerabilities or backdoors.
  We propose a three-level conceptual framework providing organisations with a practical pathway to incrementally improve Confidential Computing transparency.
- Transparency, Privacy, and Fairness in Recommender Systems [0.19036571490366497] (arXiv, 2024-06-17)
  This habilitation elaborates on aspects related to (i) transparency and cognitive models, (ii) privacy and limited preference information, and (iii) fairness and popularity bias in recommender systems.
- A Room With an Overview: Towards Meaningful Transparency for the Consumer Internet of Things [5.536922793483742] (arXiv, 2024-01-19)
  This paper explores the practical dimensions of transparency mechanisms within the consumer IoT.
  We consider how smart homes might be made more meaningfully transparent, so as to support users in gaining greater understanding, oversight, and control.
- Through the Looking-Glass: Transparency Implications and Challenges in Enterprise AI Knowledge Systems [3.9143193313607085] (arXiv, 2024-01-17)
  We present a reflective analysis of transparency requirements and impacts in AI knowledge systems.
  We formulate transparency as a key mediator in shaping different ways of seeing.
  We identify three transparency dimensions necessary to realize the value of AI knowledge systems.
- Path To Gain Functional Transparency In Artificial Intelligence With Meaningful Explainability [0.0] (arXiv, 2023-10-13)
  As AI systems become increasingly sophisticated, ensuring their transparency and explainability becomes crucial.
  We propose a design for user-centered, compliant-by-design transparency in transparent systems.
  By providing a comprehensive understanding of the challenges associated with transparency in AI systems, we aim to facilitate the development of AI systems that are accountable, trustworthy, and aligned with societal values.
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697] (arXiv, 2023-05-16)
  We propose a new design approach for workforce analytics that we refer to as inverse transparency by design.
  We find that architectural changes can be made without inhibiting core functionality.
  We conclude that inverse transparency by design is a promising approach to realizing accepted and responsible people analytics.
- Users are the North Star for AI Transparency [111.5679109784322] (arXiv, 2023-03-09)
  Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
  Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
  We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
- FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems [69.24490096929709] (arXiv, 2022-09-08)
  We developed an open-source Python package called FAT Forensics.
  It can inspect important fairness, accountability, and transparency aspects of predictive algorithms.
  Our toolbox can evaluate all elements of a predictive pipeline.
- Trustworthy Transparency by Design [57.67333075002697] (arXiv, 2021-03-19)
  We propose a transparency framework for software design, incorporating research on user trust and experience.
  Our framework enables developing software that incorporates transparency in its design.
- Dimensions of Transparency in NLP Applications [64.16277166331298] (arXiv, 2021-01-02)
  Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
  Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.