Explainable Activity Recognition for Smart Home Systems
- URL: http://arxiv.org/abs/2105.09787v2
- Date: Fri, 26 May 2023 16:21:50 GMT
- Title: Explainable Activity Recognition for Smart Home Systems
- Authors: Devleena Das, Yasutaka Nishimura, Rajan P. Vivek, Naoto Takeda, Sean
T. Fish, Thomas Ploetz, Sonia Chernova
- Abstract summary: We build on insights from Explainable Artificial Intelligence (XAI) techniques to develop an explainable activity recognition framework.
Our results show that the XAI approach, SHAP, has a 92% success rate in generating sensible explanations.
In 83% of sampled scenarios, users preferred natural language explanations over a simple activity label.
- Score: 9.909901668370589
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Smart home environments are designed to provide services that help improve
the quality of life for the occupant via a variety of sensors and actuators
installed throughout the space. Many automated actions taken by a smart home
are governed by the output of an underlying activity recognition system.
However, activity recognition systems may not be perfectly accurate, and
inconsistencies in smart home operations can therefore lead users who rely on
smart home predictions to wonder "why did the smart home do that?" In this
work, we build on insights from Explainable Artificial Intelligence (XAI)
techniques and introduce an explainable activity recognition framework that
leverages leading XAI methods to generate natural language explanations of
what about an activity led to the given classification. Within the context of
remote caregiver monitoring, we perform a two-step evaluation: (a) we ask ML
experts to assess the sensibility of explanations, and (b) we recruit
non-experts for two remote caregiver monitoring scenarios, synchronous and
asynchronous, to assess the effectiveness of explanations generated via our
framework. Our results show that the XAI approach, SHAP, has a 92% success rate
in generating sensible explanations. Moreover, in 83% of sampled scenarios,
users preferred natural language explanations over a simple activity label,
underscoring the need for explainable activity recognition systems. Finally, we
show that explanations generated by some XAI methods can lead users to lose
confidence in the accuracy of the underlying activity recognition model. We
make a recommendation regarding which existing XAI method leads to the best
performance in the domain of smart home automation, and discuss a range of
topics for future work to further improve explainable activity recognition.
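To make the framework concrete, below is a minimal sketch, not the authors'
released code: it trains a toy activity classifier, runs SHAP over its sensor
features, and verbalizes the top-ranked features as a natural language
explanation. The feature names, the "cooking" task, and the sentence template
are illustrative assumptions.

    # Minimal sketch of SHAP-based natural language explanations for
    # activity recognition. Feature names, the toy "cooking" task, and the
    # sentence template are illustrative assumptions, not the paper's setup.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    feature_names = ["kitchen_motion", "fridge_door", "stove_power",
                     "bedroom_motion"]

    # Toy stand-in for windowed smart-home sensor features (event counts).
    X = rng.integers(0, 10, size=(200, len(feature_names))).astype(float)
    y = ((X[:, 1] + X[:, 2]) > 9).astype(int)  # pretend 1 == "cooking"

    model = RandomForestClassifier(random_state=0).fit(X, y)
    explainer = shap.Explainer(model)  # dispatches to TreeExplainer here

    def explain_window(x: np.ndarray) -> str:
        """Verbalize the top SHAP features behind one window's prediction."""
        pred = int(model.predict(x.reshape(1, -1))[0])
        vals = explainer(x.reshape(1, -1)).values[0]
        if vals.ndim == 2:  # newer SHAP returns (features, classes)
            vals = vals[:, pred]
        top = np.argsort(-np.abs(vals))[:2]
        reasons = " and ".join(feature_names[i].replace("_", " ")
                               for i in top)
        label = "cooking" if pred == 1 else "not cooking"
        return f'The home inferred "{label}" mainly because of {reasons}.'

    print(explain_window(X[0]))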
Related papers
- Adaptive Language-Guided Abstraction from Contrastive Explanations [53.48583372522492]
It is necessary to determine which features of the environment are relevant before determining how these features should be used to compute reward.
End-to-end methods for joint feature and reward learning often yield brittle reward functions that are sensitive to spurious state features.
This paper describes a method named ALGAE, which alternates between using language models to iteratively identify human-meaningful features and learning rewards defined over those features.
arXiv Detail & Related papers (2024-09-12T16:51:58Z)
- Using Large Language Models to Compare Explainable Models for Smart Home Human Activity Recognition [0.3277163122167433]
This paper proposes an automatic evaluation method using Large Language Models (LLMs) to identify, in a pool of candidates, the best XAI approach for non-expert users.
Our preliminary results suggest that LLM evaluation aligns with user surveys.
arXiv Detail & Related papers (2024-07-24T12:15:07Z)
- Layout Agnostic Human Activity Recognition in Smart Homes through Textual Descriptions Of Sensor Triggers (TDOST) [0.22354214294493352]
We develop a layout-agnostic modeling approach for human activity recognition (HAR) systems in smart homes.
We generate Textual Descriptions Of Sensor Triggers (TDOST) that encapsulate the surrounding trigger conditions.
We demonstrate the effectiveness of TDOST-based models in unseen smart homes through experiments on benchmarked CASAS datasets (a minimal TDOST sketch appears after this list).
arXiv Detail & Related papers (2024-05-20T20:37:44Z)
- Explaining AI Decisions: Towards Achieving Human-Centered Explainability in Smart Home Environments [45.55363754551388]
We argue for the necessity of a human-centric approach in representing explanations in smart home systems.
This paper advocates for human-centered XAI methods, emphasizing the importance of delivering readily comprehensible explanations.
arXiv Detail & Related papers (2024-04-23T22:31:42Z)
- Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding [51.8579160500354]
We propose an unsupervised way to predict self-driving attention by uncertainty modeling and driving knowledge integration.
Results show performance equivalent or even superior to fully-supervised state-of-the-art approaches.
arXiv Detail & Related papers (2023-03-17T00:28:33Z)
- Learning Action-Effect Dynamics for Hypothetical Vision-Language Reasoning Task [50.72283841720014]
We propose a novel learning strategy that can improve reasoning about the effects of actions.
We demonstrate the effectiveness of our proposed approach and discuss its advantages over previous baselines in terms of performance, data efficiency, and generalization capability.
arXiv Detail & Related papers (2022-12-07T05:41:58Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- Domain-Robust Visual Imitation Learning with Mutual Information Constraints [0.0]
We introduce a new algorithm called Disentangling Generative Adversarial Imitation Learning (DisentanGAIL).
Our algorithm enables autonomous agents to learn directly from high dimensional observations of an expert performing a task.
arXiv Detail & Related papers (2021-03-08T21:18:58Z)
- Enabling Edge Cloud Intelligence for Activity Learning in Smart Home [1.3858051019755284]
We propose a novel activity learning framework based on Edge Cloud architecture.
We utilize temporal features for activity recognition and prediction in a single smart home setting.
arXiv Detail & Related papers (2020-05-14T11:43:20Z)
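As a concrete illustration of the TDOST idea from the Layout Agnostic HAR
entry above, the following minimal sketch converts raw sensor trigger events
into layout-agnostic textual descriptions; the event schema and sentence
templates are assumptions for illustration, not the authors' exact
formulation.

    # Minimal sketch of Textual Descriptions Of Sensor Triggers (TDOST).
    # The event schema and templates are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class SensorEvent:
        sensor_id: str    # e.g., "M012"
        sensor_type: str  # e.g., "motion", "door"
        location: str     # e.g., "kitchen", taken from sensor metadata
        state: str        # e.g., "ON", "OPEN"
        timestamp: datetime

    def to_tdost(event: SensorEvent) -> str:
        """Describe a trigger in plain language, independent of layout."""
        state_phrases = {"ON": "was activated", "OFF": "was deactivated",
                         "OPEN": "was opened", "CLOSE": "was closed"}
        phrase = state_phrases.get(event.state, f"reported {event.state}")
        when = event.timestamp.strftime("%I:%M %p").lstrip("0")
        return (f"The {event.sensor_type} sensor in the {event.location} "
                f"{phrase} at {when}.")

    # Such sentences can be embedded with any text encoder, so a HAR model
    # trained in one home can transfer to homes with different layouts.
    events = [
        SensorEvent("M012", "motion", "kitchen", "ON",
                    datetime(2024, 5, 20, 7, 15)),
        SensorEvent("D003", "door", "fridge", "OPEN",
                    datetime(2024, 5, 20, 7, 16)),
    ]
    for e in events:
        print(to_tdost(e))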