Leveraging Large Language Models for Explainable Activity Recognition in Smart Homes: A Critical Evaluation
- URL: http://arxiv.org/abs/2503.16622v1
- Date: Thu, 20 Mar 2025 18:23:03 GMT
- Title: Leveraging Large Language Models for Explainable Activity Recognition in Smart Homes: A Critical Evaluation
- Authors: Michele Fiori, Gabriele Civitarese, Priyankar Choudhary, Claudio Bettini
- Abstract summary: XAI has been applied to sensor-based Activities of Daily Living (ADLs) recognition in smart homes. This paper investigates potential approaches to combine XAI and Large Language Models (LLMs) for sensor-based ADL recognition.
- Score: 0.29998889086656577
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable Artificial Intelligence (XAI) aims to uncover the inner reasoning of machine learning models. In IoT systems, XAI improves the transparency of models processing sensor data from multiple heterogeneous devices, ensuring end-users understand and trust their outputs. Among the many applications, XAI has also been applied to sensor-based Activities of Daily Living (ADLs) recognition in smart homes. Existing approaches highlight which sensor events are most important for each predicted activity, using simple rules to convert these events into natural language explanations for non-expert users. However, these methods produce rigid explanations lacking natural language flexibility and are not scalable. With the recent rise of Large Language Models (LLMs), it is worth exploring whether they can enhance explanation generation, considering their proven knowledge of human activities. This paper investigates potential approaches to combine XAI and LLMs for sensor-based ADL recognition. We evaluate if LLMs can be used: a) as explainable zero-shot ADL recognition models, avoiding costly labeled data collection, and b) to automate the generation of explanations for existing data-driven XAI approaches when training data is available and the goal is higher recognition rates. Our critical evaluation provides insights into the benefits and challenges of using LLMs for explainable ADL recognition.
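The abstract's first approach (a) — using an LLM directly as an explainable zero-shot ADL recognizer — can be sketched as a prompt-construction step over a window of sensor events. This is a minimal illustration only: the event format, candidate activity list, and prompt wording are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of zero-shot explainable ADL recognition prompting.
# Sensor names, states, and the prompt template are illustrative
# assumptions; the paper's actual prompts may differ.

def build_adl_prompt(events, candidate_activities):
    """Render (timestamp, sensor, state) events as a natural-language
    prompt asking an LLM to pick an activity and explain its choice."""
    event_lines = "\n".join(
        f"- {t}: sensor '{sensor}' reported '{state}'"
        for t, sensor, state in events
    )
    activities = ", ".join(candidate_activities)
    return (
        "You are monitoring a smart home. The following sensor events "
        "were observed:\n"
        f"{event_lines}\n"
        f"Which activity is the resident most likely performing "
        f"({activities})? Answer with the activity name and a short "
        "explanation suitable for a non-expert user."
    )

prompt = build_adl_prompt(
    [("08:02", "kitchen_motion", "ON"),
     ("08:03", "fridge_door", "OPEN"),
     ("08:05", "stove", "ON")],
    ["preparing breakfast", "sleeping", "watching TV"],
)
print(prompt)
```

The returned string would then be sent to an LLM; because the model both classifies and justifies its answer in one response, no labeled training data or separate explanation module is required.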
Related papers
- LLMs for Explainable AI: A Comprehensive Survey [0.7373617024876725]
Large Language Models (LLMs) offer a promising approach to enhancing Explainable AI (XAI)
LLMs transform complex machine learning outputs into easy-to-understand narratives.
LLMs can bridge the gap between sophisticated model behavior and human interpretability.
arXiv Detail & Related papers (2025-03-31T18:19:41Z) - A Soft Sensor Method with Uncertainty-Awareness and Self-Explanation Based on Large Language Models Enhanced by Domain Knowledge Retrieval [17.605817344542345]
We propose a framework called Few-shot Uncertainty-aware and self-Explaining Soft Sensor (LLM-FUESS)
LLM-FUESS includes the Zero-shot Auxiliary Variable Selector (LLM-ZAVS) and the Uncertainty-aware Few-shot Soft Sensor (LLM-UFSS)
Our method achieves state-of-the-art predictive performance, strong robustness, and flexibility, and effectively mitigates the training instability found in traditional methods.
arXiv Detail & Related papers (2025-01-06T11:43:29Z) - AD-LLM: Benchmarking Large Language Models for Anomaly Detection [50.57641458208208]
This paper introduces AD-LLM, the first benchmark that evaluates how large language models can help with anomaly detection.
We examine three key tasks: zero-shot detection, using LLMs' pre-trained knowledge to perform AD without task-specific training; data augmentation, generating synthetic data and category descriptions to improve AD models; and model selection, using LLMs to suggest unsupervised AD models.
arXiv Detail & Related papers (2024-12-15T10:22:14Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components. A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss. A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal. An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Using Large Language Models to Compare Explainable Models for Smart Home Human Activity Recognition [0.3277163122167433]
This paper proposes an automatic evaluation method using Large Language Models (LLMs) to identify, in a pool of candidates, the best XAI approach for non-expert users.
Our preliminary results suggest that LLM evaluation aligns with user surveys.
arXiv Detail & Related papers (2024-07-24T12:15:07Z) - Large Language Models are Zero-Shot Recognizers for Activities of Daily Living [0.29998889086656577]
We propose ADL-LLM, a novel LLM-based ADLs recognition system. ADL-LLM transforms raw sensor data into textual representations that are processed by an LLM to perform zero-shot ADLs recognition. We evaluate ADL-LLM on two public datasets, showing its effectiveness in this domain.
arXiv Detail & Related papers (2024-07-01T12:32:38Z) - LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
arXiv Detail & Related papers (2024-05-09T19:17:47Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - Usable XAI: 10 Strategies Towards Exploiting Explainability in the LLM Era [77.174117675196]
XAI is being extended towards Large Language Models (LLMs)
This paper analyzes how XAI can benefit LLMs and AI systems.
We introduce 10 strategies, introducing the key techniques for each and discussing their associated challenges.
arXiv Detail & Related papers (2024-03-13T20:25:27Z) - Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning [8.835733039270364]
Techniques in Explainable Artificial Intelligence (XAI) are attracting considerable attention, and have tremendously helped Machine Learning (ML) engineers in understanding AI models.
This article provides a timely and extensive literature overview of the field of Explanation-Guided Learning (EGL).
EGL is a domain of techniques that steer the DNNs' reasoning process by adding regularization, supervision, or intervention on model explanations.
arXiv Detail & Related papers (2022-12-07T20:59:59Z) - Robotic Skill Acquisition via Instruction Augmentation with Vision-Language Models [70.82705830137708]
We introduce Data-driven Instruction Augmentation for Language-conditioned control (DIAL)
We utilize semi-supervised language labels, leveraging the semantic understanding of CLIP to propagate knowledge onto large datasets of unlabelled demonstration data.
DIAL enables imitation learning policies to acquire new capabilities and generalize to 60 novel instructions unseen in the original dataset.
arXiv Detail & Related papers (2022-11-21T18:56:00Z)
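The "raw sensor data into textual representations" step mentioned in the ADL-LLM entry above can be sketched as a simple templating pass. The event tuples and sentence templates below are assumptions for illustration; the paper's actual encoding may differ.

```python
# Minimal sketch of a sensor-to-text encoding for zero-shot ADL
# recognition. Sensor names, state vocabulary, and templates are
# hypothetical, not taken from the ADL-LLM paper.

def events_to_text(events):
    """Convert (timestamp, sensor, state) triples into short English
    sentences that an LLM can reason over."""
    templates = {
        "ON": "the {sensor} sensor was activated",
        "OFF": "the {sensor} sensor was deactivated",
        "OPEN": "the {sensor} was opened",
        "CLOSED": "the {sensor} was closed",
    }
    sentences = []
    for timestamp, sensor, state in events:
        phrase = templates.get(state, "the {sensor} reported " + state)
        sentences.append(f"At {timestamp}, " + phrase.format(sensor=sensor) + ".")
    return " ".join(sentences)

text = events_to_text([
    ("07:45", "bedroom motion", "ON"),
    ("07:47", "bathroom door", "OPEN"),
])
print(text)
# → At 07:45, the bedroom motion sensor was activated. At 07:47, the bathroom door was opened.
```

A textual encoding like this lets a general-purpose LLM apply its commonsense knowledge of human routines to sensor streams it was never trained on.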
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.