Expanding Explainability: Towards Social Transparency in AI systems
- URL: http://arxiv.org/abs/2101.04719v1
- Date: Tue, 12 Jan 2021 19:44:27 GMT
- Title: Expanding Explainability: Towards Social Transparency in AI systems
- Authors: Upol Ehsan, Q. Vera Liao, Michael Muller, Mark O. Riedl, Justin D.
Weisz
- Abstract summary: Social Transparency (ST) is a sociotechnically informed perspective that incorporates the socio-organizational context into explaining AI-mediated decision-making.
Our work contributes to the discourse of Human-Centered XAI by expanding the design space of XAI.
- Score: 20.41177660318785
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As AI-powered systems increasingly mediate consequential decision-making,
their explainability is critical for end-users to take informed and accountable
actions. Explanations in human-human interactions are socially-situated. AI
systems are often socio-organizationally embedded. However, Explainable AI
(XAI) approaches have been predominantly algorithm-centered. We take a
developmental step towards socially-situated XAI by introducing and exploring
Social Transparency (ST), a sociotechnically informed perspective that
incorporates the socio-organizational context into explaining AI-mediated
decision-making. To explore ST conceptually, we conducted interviews with 29 AI
users and practitioners grounded in a speculative design scenario. We suggested
constitutive design elements of ST and developed a conceptual framework to
unpack ST's effects and implications at the technical, decision-making, and
organizational levels. The framework showcases how ST can potentially calibrate
trust in AI, improve decision-making, facilitate organizational collective
actions, and cultivate holistic explainability. Our work contributes to the
discourse of Human-Centered XAI by expanding the design space of XAI.
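
To make the notion of constitutive design elements more concrete, the sketch below illustrates one way ST-style social context, framed as the who/what/when/why questions referenced in the related work below, might be attached to an AI recommendation shown to an end-user. This is a minimal, hypothetical illustration in Python, not the paper's implementation; all class and field names are assumptions.

```python
# Illustrative sketch only: a hypothetical representation of Social Transparency
# (ST) context attached to an AI recommendation. The who/what/when/why fields
# follow the W-question framing referenced in the related work below; the class
# and field names are assumptions, not the paper's implementation.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class SocialContextEntry:
    """One past human decision shown alongside the AI's output."""
    who: str        # which colleague or role acted on a similar AI suggestion
    what: str       # what they decided (e.g., accepted, overrode, escalated)
    when: datetime  # when that decision was made
    why: str        # their stated rationale, in their own words


@dataclass
class AIRecommendation:
    """An AI-mediated decision augmented with socio-organizational context."""
    suggestion: str
    confidence: float
    social_context: List[SocialContextEntry] = field(default_factory=list)

    def render(self) -> str:
        """Format the recommendation plus its ST context for an end-user."""
        lines = [f"AI suggests: {self.suggestion} (confidence {self.confidence:.0%})"]
        for e in self.social_context:
            lines.append(f"- {e.who} {e.what} on {e.when:%Y-%m-%d}: {e.why}")
        return "\n".join(lines)


if __name__ == "__main__":
    rec = AIRecommendation(
        suggestion="Quote price tier B to this client",
        confidence=0.72,
        social_context=[
            SocialContextEntry(
                who="Account lead (EMEA)",
                what="overrode the AI's price",
                when=datetime(2020, 11, 3),
                why="Client was in a renewal freeze; a lower tier kept the contract.",
            )
        ],
    )
    print(rec.render())
```

Surfacing past colleagues' decisions and rationales next to the AI's suggestion is the kind of socio-organizational context the paper argues can help calibrate trust in the AI's output.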
Related papers
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach [45.74830585715129]
We suggest extending the Social Transparency (ST) framework to address the risks of social misattributions in Large Language Models (LLMs).
LLMs may lead to mismatches between designers' intentions and users' perceptions of social attributes, risking the promotion of emotional manipulation and dangerous behaviors.
We propose enhancing the ST framework with a fifth 'W-question' to clarify the specific social attributions assigned to LLMs by their designers and users.
arXiv Detail & Related papers (2024-03-26T17:02:42Z)
- Towards Human-AI Deliberation: Design and Evaluation of LLM-Empowered Deliberative AI for AI-Assisted Decision-Making [47.33241893184721]
In AI-assisted decision-making, humans often passively review the AI's suggestion and decide whether to accept or reject it as a whole.
We propose Human-AI Deliberation, a novel framework to promote human reflection and discussion on conflicting human-AI opinions in decision-making.
Based on theories in human deliberation, this framework engages humans and AI in dimension-level opinion elicitation, deliberative discussion, and decision updates.
arXiv Detail & Related papers (2024-03-25T14:34:06Z)
- Incentive Compatibility for AI Alignment in Sociotechnical Systems: Positions and Prospects [11.086872298007835]
Existing methodologies primarily focus on technical facets, often neglecting the intricate sociotechnical nature of AI systems.
We posit a new problem worth exploring: the Incentive Compatibility Sociotechnical Alignment Problem (ICSAP).
We discuss three classical game-theoretic approaches to achieving IC (mechanism design, contract theory, and Bayesian persuasion) and examine their perspectives, potential, and challenges for solving the ICSAP.
arXiv Detail & Related papers (2024-02-20T10:52:57Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Mediating Community-AI Interaction through Situated Explanation: The Case of AI-Led Moderation [32.50902508512016]
We theorize how explanation is situated in a community's shared values, norms, knowledge, and practices.
We then present a case study of AI-led moderation, where community members collectively develop explanations of AI-led decisions.
arXiv Detail & Related papers (2020-08-19T00:13:12Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach [18.14698948294366]
We introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design.
It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
arXiv Detail & Related papers (2020-02-04T02:30:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.