iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
- URL: http://arxiv.org/abs/2306.07775v1
- Date: Tue, 13 Jun 2023 13:56:56 GMT
- Title: iPDP: On Partial Dependence Plots in Dynamic Modeling Scenarios
- Authors: Maximilian Muschalik, Fabian Fumagalli, Rohit Jagtani, Barbara Hammer,
Eyke Hüllermeier
- Abstract summary: Post-hoc explanation techniques such as the well-established partial dependence plot (PDP) are used in explainable artificial intelligence (XAI).
We propose a novel model-agnostic XAI framework that extends the PDP to extract time-dependent feature effects in non-stationary learning environments.
- Score: 7.772337176239137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Post-hoc explanation techniques such as the well-established partial
dependence plot (PDP), which investigates feature dependencies, are used in
explainable artificial intelligence (XAI) to understand black-box machine
learning models. While many real-world applications require dynamic models that
constantly adapt over time and react to changes in the underlying distribution,
XAI, so far, has primarily considered static learning environments, where
models are trained in a batch mode and remain unchanged. We thus propose a
novel model-agnostic XAI framework called incremental PDP (iPDP) that extends
on the PDP to extract time-dependent feature effects in non-stationary learning
environments. We formally analyze iPDP and show that it approximates a
time-dependent variant of the PDP that properly reacts to real and virtual
concept drift. The time-sensitivity of iPDP is controlled by a single smoothing
parameter, which directly corresponds to the variance and the approximation
error of iPDP in a static learning environment. We illustrate the efficacy of
iPDP by showcasing an example application for drift detection and conducting
multiple experiments on real-world and synthetic data sets and streams.
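The abstract describes an incremental, time-sensitive variant of the PDP whose reactivity is governed by a single smoothing parameter. A minimal sketch of that idea, assuming a simple exponential-smoothing update over a fixed evaluation grid (the class name, interface, and update rule below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

class IncrementalPDP:
    """Sketch of a time-sensitive partial dependence estimate.

    For each arriving sample x_t, the feature of interest is replaced by
    every grid value v, the model is queried, and the estimate at v is
    updated with exponential smoothing:
        pdp(v) <- (1 - alpha) * pdp(v) + alpha * f(x_t with x[j] = v)
    A larger alpha tracks concept drift faster but yields higher variance,
    mirroring the variance/approximation-error trade-off in the abstract.
    """

    def __init__(self, model_fn, feature_idx, grid, alpha=0.05):
        self.model_fn = model_fn        # callable: (n, d) array -> (n,) predictions
        self.feature_idx = feature_idx  # index j of the explained feature
        self.grid = np.asarray(grid, dtype=float)  # evaluation points v_1..v_k
        self.alpha = alpha              # smoothing / time-sensitivity parameter
        self.pdp = None                 # current estimate, shape (k,)

    def update(self, x):
        """Fold one observation x (shape (d,)) into the running estimate."""
        x_rep = np.tile(np.asarray(x, dtype=float), (len(self.grid), 1))
        x_rep[:, self.feature_idx] = self.grid
        preds = self.model_fn(x_rep)
        if self.pdp is None:
            self.pdp = preds.astype(float)
        else:
            self.pdp = (1 - self.alpha) * self.pdp + self.alpha * preds
        return self.pdp
```

For a toy linear model f(x) = 2*x0 + x1, the estimated curve for feature 0 keeps a slope of 2 between grid points regardless of the noise folded in through x1.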
Related papers
- Physics Informed Deep Learning for Strain Gradient Continuum Plasticity [0.0]
We use a space-time discretization based on physics informed deep learning to approximate solutions of rate-dependent strain gradient plasticity models.
Taking inspiration from physics informed neural networks, we modify the loss function of a PIDL model in several novel ways.
We show how PIDL methods could address the computational challenges posed by strain plasticity models.
arXiv Detail & Related papers (2024-08-13T06:02:05Z) - Synthesizing Multimodal Electronic Health Records via Predictive Diffusion Models [69.06149482021071]
We propose a novel EHR data generation model called EHRPD.
It is a diffusion-based model designed to predict the next visit based on the current one while also incorporating time interval estimation.
We conduct experiments on two public datasets and evaluate EHRPD from fidelity, privacy, and utility perspectives.
arXiv Detail & Related papers (2024-06-20T02:20:23Z) - Synthetic location trajectory generation using categorical diffusion
models [50.809683239937584]
Diffusion models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z) - Learning Space-Time Continuous Neural PDEs from Partially Observed
States [13.01244901400942]
We introduce a grid-independent model that learns partial differential equations (PDEs) from noisy and partial observations on irregular grids.
We propose a space-time continuous latent neural PDE model with an efficient probabilistic framework and a novel encoder design for improved data efficiency and grid independence.
arXiv Detail & Related papers (2023-07-09T06:53:59Z) - FaDIn: Fast Discretized Inference for Hawkes Processes with General
Parametric Kernels [82.53569355337586]
This work offers an efficient solution to temporal point processes inference using general parametric kernels with finite support.
The method's effectiveness is evaluated by modeling the occurrence of stimuli-induced patterns from brain signals recorded with magnetoencephalography (MEG).
Results show that the proposed approach yields improved estimation of pattern latency compared to the state of the art.
arXiv Detail & Related papers (2022-10-10T12:35:02Z) - PAC Reinforcement Learning for Predictive State Representations [60.00237613646686]
We study online Reinforcement Learning (RL) in partially observable dynamical systems.
We focus on the Predictive State Representations (PSRs) model, which is an expressive model that captures other well-known models.
We develop a novel model-based algorithm for PSRs that can learn a near-optimal policy with polynomially scaling sample complexity.
arXiv Detail & Related papers (2022-07-12T17:57:17Z) - Efficient Remote Photoplethysmography with Temporal Derivative Modules
and Time-Shift Invariant Loss [6.381149074212898]
We present a lightweight neural model for remote heart rate estimation.
We focus on efficient temporal learning of facial photoplethysmography.
Compared to existing models, our approach shows competitive accuracy with a much lower number of parameters and lower computational cost.
arXiv Detail & Related papers (2022-03-21T11:08:06Z) - Relating the Partial Dependence Plot and Permutation Feature Importance
to the Data Generating Process [1.3782922287772585]
Partial dependence plots and permutation feature importance (PFI) are often used as interpretation methods.
We formalize PD and PFI as statistical estimators of ground truth estimands rooted in the data generating process.
We show that PD and PFI estimates deviate from this ground truth due to statistical biases, model variance and Monte Carlo approximation errors.
arXiv Detail & Related papers (2021-09-03T10:50:41Z) - DiffPD: Differentiable Projective Dynamics with Contact [65.88720481593118]
We present DiffPD, an efficient differentiable soft-body simulator with implicit time integration.
We evaluate the performance of DiffPD and observe a speedup of 4-19 times compared to the standard Newton's method in various applications.
arXiv Detail & Related papers (2021-01-15T00:13:33Z) - Online Reinforcement Learning Control by Direct Heuristic Dynamic
Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events, such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
arXiv Detail & Related papers (2020-06-16T05:51:25Z) - Correlation-aware Unsupervised Change-point Detection via Graph Neural
Networks [37.68666658785003]
Change-point detection (CPD) aims to detect abrupt changes over time series data.
Existing CPD methods either ignore the dependency structures entirely or rely on the (unrealistic) assumption that the correlation structures are static over time.
We propose a Correlation-aware Dynamics Model for CPD, which explicitly models the correlation structure and dynamics of variables.
arXiv Detail & Related papers (2020-04-24T18:28:57Z)
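Several entries above, including the main paper and the PD/PFI analysis, build on the classical batch Monte Carlo estimator of partial dependence, whose finite-sample averaging is one source of the approximation error those papers discuss. A minimal sketch of that estimator (the function name and interface are illustrative, not taken from any of the listed papers):

```python
import numpy as np

def partial_dependence(model_fn, X, feature_idx, grid):
    """Monte Carlo estimate of partial dependence.

    For each grid value v, feature `feature_idx` of every background
    sample in X is set to v and the model predictions are averaged:
        PD(v) = (1/n) * sum_i f(x_i with x[j] = v)
    With a finite background set X this is only an estimate of the true
    estimand defined by the data generating process.
    """
    X = np.asarray(X, dtype=float)
    pd_values = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature_idx] = v          # intervene on the explained feature
        pd_values.append(np.mean(model_fn(X_mod)))
    return np.asarray(pd_values)
```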
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.