CoRTX: Contrastive Framework for Real-time Explanation
- URL: http://arxiv.org/abs/2303.02794v1
- Date: Sun, 5 Mar 2023 23:01:02 GMT
- Title: CoRTX: Contrastive Framework for Real-time Explanation
- Authors: Yu-Neng Chuang, Guanchu Wang, Fan Yang, Quan Zhou, Pushkar Tripathi,
Xuanting Cai, Xia Hu
- Abstract summary: We propose a COntrastive Real-Time eXplanation (CoRTX) framework to learn the explanation-oriented representation.
Specifically, we design a synthetic strategy to select positive and negative instances for the learning of explanation.
- Score: 39.80484758207907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in explainable machine learning provide effective and
faithful solutions for interpreting model behaviors. However, many explanation
methods encounter efficiency issues, which largely limit their deployments in
practical scenarios. Real-time explainer (RTX) frameworks have thus been
proposed to accelerate the model explanation process by learning a
one-feed-forward explainer. Existing RTX frameworks typically build the
explainer under the supervised learning paradigm, which requires large amounts
of explanation labels as the ground truth. Considering that accurate
explanation labels are usually hard to obtain due to constrained computational
resources and limited human efforts, effective explainer training is still
challenging in practice. In this work, we propose a COntrastive Real-Time
eXplanation (CoRTX) framework to learn the explanation-oriented representation
and relieve the intensive dependence of explainer training on explanation
labels. Specifically, we design a synthetic strategy to select positive and
negative instances for the learning of explanation. Theoretical analysis shows
that our selection strategy can benefit the contrastive learning process on
explanation tasks. Experimental results on three real-world datasets further
demonstrate the efficiency and efficacy of our proposed CoRTX framework.
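As a rough illustration of the approach described in the abstract, the sketch below sets up a contrastive explainer in PyTorch: an encoder is trained without explanation labels by pulling each instance toward a synthetically perturbed positive and pushing it away from the other instances in the batch, and a small head can then be fit on the few available labels. The encoder architecture, the perturbation-based positive selection, and all names and hyperparameters are assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of a CoRTX-style contrastive explainer (PyTorch).
# The modules, the perturbation-based positive selection, and the
# hyperparameters are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplanationEncoder(nn.Module):
    """Maps an input x to an explanation-oriented representation h."""
    def __init__(self, in_dim: int, rep_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, rep_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)

def synthetic_positive(x: torch.Tensor, perturb_rate: float = 0.1) -> torch.Tensor:
    """Stand-in positive-instance strategy: lightly perturb a small fraction of
    features so the perturbed sample should share the anchor's explanation."""
    mask = (torch.rand_like(x) < perturb_rate).float()
    return x * (1 - mask) + x.mean(dim=0, keepdim=True) * mask

def info_nce(h_anchor: torch.Tensor, h_pos: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE loss: the other anchors in the batch act as negatives."""
    logits = h_anchor @ h_pos.t() / tau        # (B, B) similarity matrix
    labels = torch.arange(h_anchor.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# One label-free training step on a batch of tabular inputs x.
encoder = ExplanationEncoder(in_dim=20)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(32, 20)
loss = info_nce(encoder(x), encoder(synthetic_positive(x)))
opt.zero_grad(); loss.backward(); opt.step()

# A small head fit on the few available explanation labels can then map
# representations to feature attributions in a single forward pass.
attribution_head = nn.Linear(64, 20)
```

Using the other anchors in a batch as negatives is the standard InfoNCE setup; CoRTX's own synthetic positive/negative selection strategy is what the paper analyzes theoretically, and the random perturbation above is only a stand-in for it.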
Related papers
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve the reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
arXiv Detail & Related papers (2024-10-18T05:30:33Z)
- Fast Explanations via Policy Gradient-Optimized Explainer [7.011763596804071]
This paper introduces a novel framework that represents attribution-based explanations via probability distributions.
The proposed framework offers a robust, scalable solution for real-time, large-scale model explanations.
We validate our framework on image and text classification tasks and the experiments demonstrate that our method reduces inference time by over 97% and memory usage by 70%.
arXiv Detail & Related papers (2024-05-29T00:01:40Z)
- RouteExplainer: An Explanation Framework for Vehicle Routing Problem [1.7034420812099471]
We propose RouteExplainer, a post-hoc explanation framework that explains the influence of each edge in a generated route.
Our framework realizes this by rethinking a route as the sequence of actions and extending counterfactual explanations based on the action influence model to VRP.
To enhance the explanation, we additionally propose an edge classifier that infers the intentions of each edge, a loss function to train the edge classifier, and explanation-text generation by Large Language Models (LLMs).
arXiv Detail & Related papers (2024-03-06T10:01:35Z)
- LaRS: Latent Reasoning Skills for Chain-of-Thought Reasoning [61.7853049843921]
Chain-of-thought (CoT) prompting is a popular in-context learning approach for large language models (LLMs).
This paper introduces a new approach named Latent Reasoning Skills (LaRS) that employs unsupervised learning to create a latent space representation of rationales.
arXiv Detail & Related papers (2023-12-07T20:36:10Z)
- REX: Rapid Exploration and eXploitation for AI Agents [103.68453326880456]
We propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX.
REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance.
arXiv Detail & Related papers (2023-07-18T04:26:33Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- RELAX: Representation Learning Explainability [10.831313203043514]
We propose RELAX, which is the first approach for attribution-based explanations of representations.
RELAX explains representations by measuring similarities in the representation space between an input and masked-out versions of itself (see the sketch after this list).
We provide theoretical interpretations of RELAX and conduct a novel analysis of feature extractors trained using supervised and unsupervised learning.
arXiv Detail & Related papers (2021-12-19T14:51:31Z)
- On the Objective Evaluation of Post Hoc Explainers [10.981508361941335]
Modern trends in machine learning research have led to algorithms that are increasingly intricate to the degree that they are considered to be black boxes.
In an effort to reduce the opacity of decisions, methods have been proposed to construe the inner workings of such models in a human-comprehensible manner.
We propose a framework for the evaluation of post hoc explainers on ground truth that is directly derived from the additive structure of a model.
arXiv Detail & Related papers (2021-06-15T19:06:51Z)
- Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z)
- Consumer-Driven Explanations for Machine Learning Decisions: An Empirical Study of Robustness [35.520178007455556]
This paper builds upon an alternative consumer-driven approach called TED that asks for explanations to be provided in training data, along with target labels.
Experiments are conducted to investigate some practical considerations with TED, including its performance with different classification algorithms.
arXiv Detail & Related papers (2020-01-13T18:45:48Z)
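The RELAX entry above describes attribution by comparing an input's representation with representations of masked-out copies of that input; the rough sketch below illustrates that masked-similarity idea. The encoder interface, the random masking scheme, the number of masks, and the use of cosine similarity are assumptions for illustration, not the paper's implementation.

```python
# Rough sketch of masked-similarity attribution in the spirit of the RELAX
# entry above. Encoder interface, masking scheme, and similarity choice are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

@torch.no_grad()
def masked_similarity_attribution(encoder, x, n_masks: int = 100, keep_prob: float = 0.5):
    """Score each feature by how similar masked copies that keep it remain
    to the full input in representation space."""
    h_full = F.normalize(encoder(x.unsqueeze(0)), dim=-1)    # (1, d) reference
    importance = torch.zeros_like(x)
    coverage = torch.zeros_like(x)
    for _ in range(n_masks):
        mask = (torch.rand_like(x) < keep_prob).float()      # 1 = keep the feature
        h_masked = F.normalize(encoder((x * mask).unsqueeze(0)), dim=-1)
        sim = (h_full * h_masked).sum()                      # cosine similarity
        importance += sim * mask                             # credit kept features
        coverage += mask
    return importance / coverage.clamp(min=1.0)              # average per feature

# Usage with any representation model, e.g. a feature encoder like the one
# sketched after the abstract above:
# attributions = masked_similarity_attribution(encoder, torch.randn(20))
```

Features whose masked copies still map close to the original representation receive high scores, which matches the similarity-based reading of the entry above.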
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.