Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
- URL: http://arxiv.org/abs/2002.01092v2
- Date: Wed, 5 Feb 2020 05:33:14 GMT
- Title: Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
- Authors: Upol Ehsan and Mark O. Riedl
- Abstract summary: We introduce Human-centered Explainable AI (HCXAI) as an approach that puts the human at the center of technology design.
It develops a holistic understanding of "who" the human is by considering the interplay of values, interpersonal dynamics, and the socially situated nature of AI systems.
- Score: 18.14698948294366
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explanations--a form of post-hoc interpretability--play an instrumental role in making systems accessible as AI continues to proliferate into complex and sensitive sociotechnical systems. In this paper, we introduce Human-centered
Explainable AI (HCXAI) as an approach that puts the human at the center of
technology design. It develops a holistic understanding of "who" the human is
by considering the interplay of values, interpersonal dynamics, and the
socially situated nature of AI systems. In particular, we advocate for a
reflective sociotechnical approach. We illustrate HCXAI through a case study of
an explanation system for non-technical end-users that shows how technical
advancements and the understanding of human factors co-evolve. Building on the
case study, we lay out open research questions pertaining to further refining
our understanding of "who" the human is and extending beyond 1-to-1
human-computer interactions. Finally, we propose that a reflective HCXAI
paradigm--mediated through the perspective of Critical Technical Practice and
supplemented with strategies from HCI, such as value-sensitive design and
participatory design--not only helps us understand our intellectual blind
spots, but it can also open up new design and research spaces.
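To make "post-hoc explanation" concrete for readers new to the term, the following is a minimal illustrative sketch, not the paper's method (the paper's case study concerns natural-language rationales for non-technical end-users). It shows perturbation-based feature importance for a generic black-box classifier; the predict function, input shape, and baseline value are all assumptions.

```python
# Minimal sketch of a post-hoc explanation: perturbation-based feature
# importance for a black-box model. Illustrative only; the paper's case
# study concerns natural-language rationales, not feature attribution.
import numpy as np

def perturbation_importance(predict, x, baseline=0.0):
    """Score each feature by how much replacing it with a baseline
    value changes the model's predicted probability for input x."""
    base_prob = predict(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_perturbed = x.copy()
        x_perturbed[i] = baseline            # occlude one feature
        scores[i] = base_prob - predict(x_perturbed)
    return scores                            # larger = more influential

# Assumed usage with any model exposing class probabilities, e.g.:
#   predict = lambda v: model.predict_proba(v.reshape(1, -1))[0, 1]
#   scores = perturbation_importance(predict, x_sample)
```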
Related papers
- Towards Bidirectional Human-AI Alignment: A Systematic Review for Clarifications, Framework, and Future Directions [101.67121669727354]
Recent advancements in AI have highlighted the importance of guiding AI systems towards the intended goals, ethical principles, and values of individuals and groups, a concept broadly recognized as alignment.
The lack of clarified definitions and scopes of human-AI alignment poses a significant obstacle, hampering collaborative efforts across research domains to achieve this alignment.
We present a systematic review of over 400 papers published between 2019 and January 2024, spanning domains such as Human-Computer Interaction (HCI), Natural Language Processing (NLP), and Machine Learning (ML).
arXiv Detail & Related papers (2024-06-13T16:03:25Z)
- Advancing Explainable AI Toward Human-Like Intelligence: Forging the Path to Artificial Brain [0.7770029179741429]
The intersection of Artificial Intelligence (AI) and neuroscience in Explainable AI (XAI) is pivotal for enhancing transparency and interpretability in complex decision-making processes.
This paper explores the evolution of XAI methodologies, ranging from feature-based to human-centric approaches.
The challenges in achieving explainability in generative models, ensuring responsible AI practices, and addressing ethical implications are discussed.
arXiv Detail & Related papers (2024-02-07T14:09:11Z)
- On the Emergence of Symmetrical Reality [51.21203247240322]
We introduce the symmetrical reality framework, which offers a unified representation encompassing various forms of physical-virtual amalgamations.
We propose an instance of an AI-driven active assistance service that illustrates the potential applications of symmetrical reality.
arXiv Detail & Related papers (2024-01-26T16:09:39Z)
- Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework for generating and evaluating explanations according to users' different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
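To make "counterfactual explanation" concrete, here is a minimal illustrative sketch, not the cited paper's framework: a brute-force search for the smallest single-feature change that flips a binary classifier's decision. The predict function, step size, and search budget are all assumptions.

```python
# Illustrative counterfactual search -- not the cited paper's framework.
# Finds the smallest single-feature change that flips the classifier's
# decision on input x, the essence of a counterfactual explanation.
import numpy as np

def single_feature_counterfactual(predict, x, step=0.1, max_steps=50):
    original = predict(x)
    best = None  # (distance, feature index, counterfactual input)
    for i in range(len(x)):
        for direction in (1.0, -1.0):
            for k in range(1, max_steps + 1):
                x_cf = x.copy()
                x_cf[i] += direction * step * k
                if predict(x_cf) != original:      # decision flipped
                    dist = abs(x_cf[i] - x[i])
                    if best is None or dist < best[0]:
                        best = (dist, i, x_cf)
                    break                          # smallest flip in this direction
    return best  # None if no single-feature change flips the decision

# Assumed usage: predict maps a feature vector to a discrete label, e.g.
#   predict = lambda v: int(model.predict(v.reshape(1, -1))[0])
```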
- On some Foundational Aspects of Human-Centered Artificial Intelligence [52.03866242565846]
There is no clear definition of what is meant by Human-Centered Artificial Intelligence.
This paper introduces the term HCAI agent to refer to any physical or software computational agent equipped with AI components.
We see the notion of HCAI agent, together with its components and functions, as a way to bridge the technical and non-technical discussions on human-centered AI.
arXiv Detail & Related papers (2021-12-29T09:58:59Z)
- Human-Centered Explainable AI (XAI): From Algorithms to User Experiences [29.10123472973571]
Explainable AI (XAI) research has produced a vast collection of algorithms in recent years.
The field is starting to embrace interdisciplinary perspectives and human-centered approaches.
arXiv Detail & Related papers (2021-10-20T21:33:46Z)
- Thinking Fast and Slow in AI [38.8581204791644]
This paper proposes a research direction to advance AI which draws inspiration from cognitive theories of human decision making.
The premise is that if we gain insights about the causes of some human capabilities that are still lacking in AI, we may obtain similar capabilities in an AI system.
arXiv Detail & Related papers (2020-10-12T20:10:05Z)
- Human Evaluation of Interpretability: The Case of AI-Generated Music Knowledge [19.508678969335882]
We focus on evaluating AI-discovered knowledge/rules in the arts and humanities.
We present an experimental procedure to collect and assess human-generated verbal interpretations of AI-generated music theory/rules rendered as sophisticated symbolic/numeric objects.
arXiv Detail & Related papers (2020-04-15T06:03:34Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)