Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
- URL: http://arxiv.org/abs/2212.03954v1
- Date: Wed, 7 Dec 2022 20:59:59 GMT
- Title: Going Beyond XAI: A Systematic Survey for Explanation-Guided Learning
- Authors: Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, Liang Zhao
- Abstract summary: Techniques in Explainable Artificial Intelligence (XAI) are attracting considerable attention and have greatly helped Machine Learning (ML) engineers understand AI models.
This article provides a timely and extensive literature overview of the field of Explanation-Guided Learning (EGL).
EGL is a family of techniques that steer a DNN's reasoning process by adding regularization, supervision, or intervention on model explanations.
- Score: 8.835733039270364
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving conventional model accuracy metrics to infusing advanced human virtues such as fairness, accountability, transparency (FAccT), and unbiasedness. Recently, techniques in Explainable Artificial Intelligence (XAI) have attracted considerable attention and have greatly helped Machine Learning (ML) engineers understand AI models. At the same time, a need beyond XAI has emerged in AI communities: based on the insights learned from XAI, how can we better empower ML engineers to steer their DNNs so that the model's reasonableness and performance improve as intended? This article provides a timely and extensive literature overview of the field of Explanation-Guided Learning (EGL), a family of techniques that steer a DNN's reasoning process by adding regularization, supervision, or intervention on model explanations. First, we give a formal definition of EGL and its general learning paradigm. Second, we provide an overview of the key factors in EGL evaluation and summarize and categorize existing evaluation procedures and metrics. Finally, we discuss current and potential future application areas and directions of EGL, and we present an extensive experimental study offering comprehensive comparisons among existing EGL models in popular application domains such as Computer Vision (CV) and Natural Language Processing (NLP).
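To make the paradigm described in the abstract concrete, here is a minimal, self-contained sketch of explanation-guided training in the "right for the right reasons" style: a standard task loss is combined with an explanation loss that penalizes gradient-based saliency on features a human has annotated as irrelevant. The toy model, mask, loss weight, and data below are illustrative assumptions, not the survey's reference implementation.

```python
# Minimal sketch of explanation-guided learning (EGL): task loss plus an
# explanation loss computed on a gradient-based saliency map. All names,
# shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self, in_dim=64, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.net(x)

def egl_loss(model, x, y, irrelevant_mask, lam=1.0):
    """Cross-entropy task loss + penalty on saliency over irrelevant features."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Explanation: sensitivity of the summed log-probabilities to the input.
    saliency, = torch.autograd.grad(
        F.log_softmax(logits, dim=1).sum(), x, create_graph=True
    )
    # Supervision on the explanation: saliency should vanish wherever the
    # annotation mask says a feature must not drive the prediction.
    expl_loss = (irrelevant_mask * saliency).pow(2).sum(dim=1).mean()
    return task_loss + lam * expl_loss

# Toy usage: the last 32 features are treated as annotated-irrelevant.
torch.manual_seed(0)
model = TinyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 64), torch.randint(0, 2, (8,))
mask = torch.zeros(8, 64)
mask[:, 32:] = 1.0
for _ in range(10):
    opt.zero_grad()
    egl_loss(model, x, y, mask).backward()
    opt.step()
```

The design choice here follows the "regularization on explanations" branch of EGL: the explanation loss is differentiable, so the same optimizer updates the model for both accuracy and reasonableness.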
Related papers
- LLMs for Explainable AI: A Comprehensive Survey [0.7373617024876725]
Large Language Models (LLMs) offer a promising approach to enhancing Explainable AI (XAI).
LLMs transform complex machine learning outputs into easy-to-understand narratives.
LLMs can bridge the gap between sophisticated model behavior and human interpretability.
arXiv Detail & Related papers (2025-03-31T18:19:41Z)
- Leveraging Large Language Models for Explainable Activity Recognition in Smart Homes: A Critical Evaluation [0.29998889086656577]
XAI has been applied to sensor-based Activities of Daily Living (ADLs) recognition in smart homes.
This paper investigates potential approaches to combine XAI and Large Language Models (LLMs) for sensor-based ADL recognition.
arXiv Detail & Related papers (2025-03-20T18:23:03Z)
- Graph Foundation Models for Recommendation: A Comprehensive Survey [55.70529188101446]
Graph Neural Networks (GNNs) model graph-structured data, while large language models (LLMs) are designed to process and comprehend natural language, making both approaches highly effective and widely adopted.
Recent research has focused on graph foundation models (GFMs).
GFMs integrate the strengths of GNNs and LLMs to model complex recommender-system problems more efficiently by leveraging the graph-based structure of user-item relationships alongside textual understanding.
arXiv Detail & Related papers (2025-02-12T12:13:51Z)
- Explainable artificial intelligence (XAI): from inherent explainability to large language models [0.0]
Explainable AI (XAI) techniques facilitate the explainability or interpretability of machine learning models.
This paper details the advancements of explainable AI methods, from inherently interpretable models to modern approaches.
We review explainable AI techniques that leverage vision-language model (VLM) frameworks to automate or improve the explainability of other machine learning models.
arXiv Detail & Related papers (2025-01-17T06:16:57Z)
- Cutting Through the Confusion and Hype: Understanding the True Potential of Generative AI [0.0]
This paper explores the nuanced landscape of generative AI (genAI).
It focuses on neural network-based models such as Large Language Models (LLMs).
arXiv Detail & Related papers (2024-10-22T02:18:44Z)
- Seeing Eye to AI: Human Alignment via Gaze-Based Response Rewards for Large Language Models [46.09562860220433]
We introduce GazeReward, a novel framework that integrates implicit feedback -- specifically eye-tracking (ET) data -- into the Reward Model (RM).
Our approach significantly improves the accuracy of the RM on established human preference datasets.
arXiv Detail & Related papers (2024-10-02T13:24:56Z)
- XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and a Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z)
- Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI).
This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models.
We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques.
arXiv Detail & Related papers (2024-06-12T02:06:24Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
Retrieval-Augmented LLMs (RA-LLMs) have emerged to harness external and authoritative knowledge bases rather than relying solely on the model's internal knowledge.
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- On the Generalization Capability of Temporal Graph Learning Algorithms: Theoretical Insights and a Simpler Method [59.52204415829695]
Temporal Graph Learning (TGL) has become a prevalent technique across diverse real-world applications.
This paper investigates the generalization ability of different TGL algorithms.
We propose a simplified TGL network, which enjoys a small generalization error, improved overall performance, and lower model complexity.
arXiv Detail & Related papers (2024-02-26T08:22:22Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose adopting post-hoc methods to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model (a minimal sketch of the LRP rule follows this list).
Experimental results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
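For the layer-wise relevance propagation (LRP) method mentioned in the last entry, the following is a minimal sketch of the commonly used epsilon-rule for a single linear layer, applied here to a toy feed-forward network rather than the RNN-based DLKT model from the paper; all names, shapes, and the bias handling are illustrative assumptions.

```python
# Minimal sketch of the LRP epsilon-rule for a linear layer y = W @ x + b.
# This simplified variant keeps the bias in the normalizer; production LRP
# implementations treat biases and nonlinearities more carefully.
import numpy as np

def lrp_linear(x, W, b, relevance_out, eps=1e-6):
    """Redistribute output relevance onto the inputs of one linear layer.

    x: (in_dim,) activations entering the layer
    W: (out_dim, in_dim) weights, b: (out_dim,) bias
    relevance_out: (out_dim,) relevance arriving from the layer above
    """
    z = W @ x + b                      # forward pre-activations
    z = z + eps * np.sign(z)           # epsilon stabilizer avoids division by zero
    s = relevance_out / z              # relevance per unit of pre-activation
    return x * (W.T @ s)               # each input receives its contribution share

# Usage: propagate relevance through a 2-layer toy network.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W1, b1 = rng.standard_normal((3, 4)), rng.standard_normal(3)
h = np.maximum(W1 @ x + b1, 0)         # ReLU hidden layer
W2, b2 = rng.standard_normal((2, 3)), rng.standard_normal(2)
logits = W2 @ h + b2
R_out = np.where(logits == logits.max(), logits, 0.0)  # explain the top logit
R_h = lrp_linear(h, W2, b2, R_out)     # relevance on hidden units
R_x = lrp_linear(x, W1, b1, R_h)       # relevance on input features
```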
This list is automatically generated from the titles and abstracts of the papers on this site.