Mediating Community-AI Interaction through Situated Explanation: The
Case of AI-Led Moderation
- URL: http://arxiv.org/abs/2008.08202v1
- Date: Wed, 19 Aug 2020 00:13:12 GMT
- Title: Mediating Community-AI Interaction through Situated Explanation: The
Case of AI-Led Moderation
- Authors: Yubo Kou and Xinning Gui
- Abstract summary: We theorize how explanation is situated in a community's shared values, norms, knowledge, and practices.
We then present a case study of AI-led moderation, where community members collectively develop explanations of AI-led decisions.
- Score: 32.50902508512016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) has become prevalent in our everyday
technologies and impacts both individuals and communities. The explainable AI
(XAI) scholarship has explored the philosophical nature of explanation and
technical explanations, which are usually driven by experts in lab settings and
can be challenging for laypersons to understand. In addition, existing XAI
research tends to focus on the individual level. Little is known about how
people understand and explain AI-led decisions in the community context.
Drawing from XAI and activity theory, a foundational HCI theory, we theorize
how explanation is situated in a community's shared values, norms, knowledge,
and practices, and how situated explanation mediates community-AI interaction.
We then present a case study of AI-led moderation, where community members
collectively develop explanations of AI-led decisions, most of which are
automated punishments. Lastly, we discuss the implications of this framework at
the intersection of CSCW, HCI, and XAI.
Related papers
- Investigating the Role of Explainability and AI Literacy in User Compliance [2.8623940003518156]
We find that users' compliance increases with the introduction of XAI but is also affected by AI literacy.
We also find that the relationships between AI literacy, XAI, and users' compliance are mediated by the users' mental model of AI.
arXiv Detail & Related papers (2024-06-18T14:28:12Z) - Advancing Explainable AI Toward Human-Like Intelligence: Forging the
Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward a more pragmatic approach to explanation aimed at better user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - On Two XAI Cultures: A Case Study of Non-technical Explanations in
Deployed AI System [3.4918511133757977]
Not much of XAI is comprehensible to non-AI experts, who nonetheless are the primary audience and major stakeholders of deployed AI systems in practice.
We advocate that it is critical to develop XAI methods for non-technical audiences.
We then present a real-life case study, where AI experts provided non-technical explanations of AI decisions to non-technical stakeholders.
arXiv Detail & Related papers (2021-12-02T07:02:27Z) - A User-Centred Framework for Explainable Artificial Intelligence in
Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - The human-AI relationship in decision-making: AI explanation to support
people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.