Change Detection for Local Explainability in Evolving Data Streams
- URL: http://arxiv.org/abs/2209.02764v1
- Date: Tue, 6 Sep 2022 18:38:34 GMT
- Title: Change Detection for Local Explainability in Evolving Data Streams
- Authors: Johannes Haug, Alexander Braun, Stefan Zürn, Gjergji Kasneci
- Abstract summary: Local feature attribution methods have become a popular technique for post-hoc and model-agnostic explanations.
It is often unclear how local attributions behave in realistic, constantly evolving settings such as streaming and online applications.
We present CDLEEDS, a flexible and model-agnostic framework for detecting local change and concept drift.
- Score: 72.4816340552763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As complex machine learning models are increasingly used in sensitive
applications like banking, trading or credit scoring, there is a growing demand
for reliable explanation mechanisms. Local feature attribution methods have
become a popular technique for post-hoc and model-agnostic explanations.
However, attribution methods typically assume a stationary environment in which
the predictive model has been trained and remains stable. As a result, it is
often unclear how local attributions behave in realistic, constantly evolving
settings such as streaming and online applications. In this paper, we discuss
the impact of temporal change on local feature attributions. In particular, we
show that local attributions can become obsolete each time the predictive model
is updated or concept drift alters the data generating distribution.
Consequently, local feature attributions in data streams provide high
explanatory power only when combined with a mechanism that allows us to detect
and respond to local changes over time. To this end, we present CDLEEDS, a
flexible and model-agnostic framework for detecting local change and concept
drift. CDLEEDS serves as an intuitive extension of attribution-based
explanation techniques to identify outdated local attributions and enable more
targeted recalculations. In experiments, we also show that the proposed
framework can reliably detect both local and global concept drift. Accordingly,
our work contributes to a more meaningful and robust explainability in online
machine learning.
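The core idea, invalidating cached local attributions once the stream drifts, can be illustrated with a minimal sliding-window monitor. This is a generic sketch and not the CDLEEDS algorithm itself; the class name, window size, and z-score test are all illustrative assumptions:

```python
import statistics

class LocalDriftMonitor:
    """Toy sketch: flag when recent model outputs drift away from a
    reference window, signalling that cached local attributions may be
    obsolete and should be recomputed. Illustrative only; the actual
    CDLEEDS detector works differently."""

    def __init__(self, window=50, threshold=3.0):
        self.window = window        # size of reference and recent windows
        self.threshold = threshold  # z-score threshold for flagging drift
        self.reference = []         # outputs observed before monitoring
        self.recent = []            # most recent outputs

    def update(self, value):
        """Add one model output; return True if drift is detected."""
        if len(self.reference) < self.window:
            self.reference.append(value)  # still filling the reference
            return False
        self.recent.append(value)
        if len(self.recent) > self.window:
            self.recent.pop(0)            # keep a fixed-size recent window
        if len(self.recent) < self.window:
            return False
        mu = statistics.mean(self.reference)
        sigma = statistics.stdev(self.reference) or 1e-9
        z = abs(statistics.mean(self.recent) - mu) / sigma
        return z > self.threshold
```

On a drift signal, a caller would discard or recompute the affected local attributions instead of reusing stale ones; the abstract's point is that attributions without such a trigger silently go out of date.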
Related papers
- SPARTAN: A Sparse Transformer Learning Local Causation [63.29645501232935]
Causal structures play a central role in world models that flexibly adapt to changes in the environment.
We present the SPARse TrANsformer World model (SPARTAN), a Transformer-based world model that learns local causal structures between entities in a scene.
By applying sparsity regularisation on the attention pattern between object-factored tokens, SPARTAN identifies sparse local causal models that accurately predict future object states.
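One common way to encourage sparse attention is to penalise the entropy of each attention row: minimising it pushes a token to attend to only a few others. This is a hypothetical illustration of attention-sparsity regularisation, not SPARTAN's exact objective:

```python
import math

def attention_entropy(attn_row, eps=1e-12):
    """Shannon entropy of one softmax attention row (values sum to 1).
    Adding this as a loss term and minimising it encourages sparse,
    near-one-hot attention, i.e. a sparse local dependency structure."""
    return -sum(p * math.log(p + eps) for p in attn_row)
```

A uniform row attains the maximum entropy log(n), while a sharply peaked row scores close to zero, so gradient descent on this term concentrates attention mass on few entities.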
arXiv Detail & Related papers (2024-11-11T11:42:48Z) - Robust Domain Generalisation with Causal Invariant Bayesian Neural Networks [9.999199798941424]
We propose a Bayesian neural architecture that disentangles the learning of the data distribution from the inference process mechanisms.
We show theoretically and experimentally that our model approximates reasoning under causal interventions.
arXiv Detail & Related papers (2024-10-08T20:38:05Z) - MASALA: Model-Agnostic Surrogate Explanations by Locality Adaptation [3.587367153279351]
Existing local Explainable AI (XAI) methods select a region of the input space in the vicinity of a given input instance, for which they approximate the behaviour of a model using a simpler and more interpretable surrogate model.
We propose MASALA, a novel explanation method that automatically determines the appropriate local region of impactful model behaviour for each individual instance being explained.
arXiv Detail & Related papers (2024-08-19T15:26:45Z) - Learning Where to Look: Self-supervised Viewpoint Selection for Active Localization using Geometrical Information [68.10033984296247]
This paper explores the domain of active localization, emphasizing the importance of viewpoint selection to enhance localization accuracy.
Our contributions involve using a data-driven approach with a simple architecture designed for real-time operation, a self-supervised data training method, and the capability to consistently integrate our map into a planning framework tailored for real-world robotics applications.
arXiv Detail & Related papers (2024-07-22T12:32:09Z) - FedACK: Federated Adversarial Contrastive Knowledge Distillation for
Cross-Lingual and Cross-Model Social Bot Detection [22.979415040695557]
FedACK is a new adversarial contrastive knowledge distillation framework for social bot detection.
A global generator is used to extract the knowledge of global data distribution and distill it into each client's local model.
Experiments demonstrate that FedACK outperforms the state-of-the-art approaches in terms of accuracy, communication efficiency, and feature space consistency.
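The distillation step in such a setup can be sketched as the standard temperature-softened KL objective between a global teacher and a client's local student. FedACK's full loss also includes adversarial and contrastive terms not shown here, and the function names and temperature value are illustrative:

```python
import math

def softmax(logits, temp=2.0):
    """Temperature-softened softmax; higher temp flattens the distribution."""
    exps = [math.exp(z / temp) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temp=2.0):
    """KL(teacher || student) over softened distributions: the standard
    knowledge-distillation objective that pulls a client's local model
    toward the global teacher's output distribution."""
    t = softmax(teacher_logits, temp)
    s = softmax(student_logits, temp)
    return sum(ti * math.log(ti / si) for ti, si in zip(t, s) if ti > 0)
```

The loss is zero exactly when the student reproduces the teacher's softened distribution, which is what makes it a usable training signal for distilling global knowledge into local models.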
arXiv Detail & Related papers (2023-03-10T03:10:08Z) - Adaptive Local-Component-aware Graph Convolutional Network for One-shot
Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
arXiv Detail & Related papers (2022-09-21T02:33:07Z) - LCTR: On Awakening the Local Continuity of Transformer for Weakly
Supervised Object Localization [38.376238216214524]
Weakly supervised object localization (WSOL) aims to learn an object localizer using only image-level labels.
We propose a novel transformer-based framework, termed LCTR, which aims to enhance the local perception capability of global features.
arXiv Detail & Related papers (2021-12-10T01:48:40Z) - Real-Time Decentralized Knowledge Transfer at the Edge [6.732931634492992]
Transferring knowledge in a selective decentralized approach enables models to retain their local insights.
We propose a method based on knowledge distillation for pairwise knowledge transfer pipelines from models trained on non-i.i.d. data.
Our experiments show knowledge transfer using our model outperforms standard methods in a real-time transfer scenario.
arXiv Detail & Related papers (2020-11-11T18:26:57Z) - Unsupervised Metric Relocalization Using Transform Consistency Loss [66.19479868638925]
Training networks to perform metric relocalization traditionally requires accurate image correspondences.
We propose a self-supervised solution, which exploits a key insight: localizing a query image within a map should yield the same absolute pose, regardless of the reference image used for registration.
We evaluate our framework on synthetic and real-world data, showing our approach outperforms other supervised methods when a limited amount of ground-truth information is available.
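The key insight can be sketched in one function: absolute-pose estimates of the same query obtained via different reference images should agree, so their spread serves as a self-supervised loss. Poses are reduced to 2-D translations here purely for illustration:

```python
def transform_consistency_loss(pose_estimates):
    """Mean squared deviation of absolute-pose estimates from their
    centroid. Each estimate is an (x, y) pair; zero loss means every
    reference image yielded the same absolute pose for the query
    (a simplified sketch of the paper's transform-consistency idea)."""
    n = len(pose_estimates)
    cx = sum(x for x, _ in pose_estimates) / n
    cy = sum(y for _, y in pose_estimates) / n
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in pose_estimates) / n
```

Because the loss needs no ground-truth poses, only agreement among estimates, it can supervise training where accurate image correspondences are unavailable.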
arXiv Detail & Related papers (2020-11-01T19:24:27Z) - Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z)