Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity
Analysis Methods for Time-Series Deep Learning Models
- URL: http://arxiv.org/abs/2401.16521v1
- Date: Mon, 29 Jan 2024 19:51:50 GMT
- Title: Validation, Robustness, and Accuracy of Perturbation-Based Sensitivity
Analysis Methods for Time-Series Deep Learning Models
- Authors: Zhengguang Wang
- Abstract summary: This work undertakes studies to evaluate Interpretability Methods for Time-Series Deep Learning.
My work will investigate perturbation-based sensitivity analysis methods on modern Transformer models to benchmark their performances.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work undertakes studies to evaluate Interpretability Methods for
Time-Series Deep Learning. Sensitivity analysis assesses how input changes
affect the output, constituting a key component of interpretation. Among the
post-hoc interpretation methods such as back-propagation, perturbation, and
approximation, my work will investigate perturbation-based sensitivity analysis
methods on modern Transformer models to benchmark their performances.
Specifically, my work answers three research questions: 1) Do different
sensitivity analysis (SA) methods yield comparable outputs and attribute
importance rankings? 2) Using the same sensitivity analysis method, do
different Deep Learning (DL) models impact the output of the sensitivity
analysis? 3) How well do the results from sensitivity analysis methods align
with the ground truth?
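The perturbation-based approach studied in this work can be sketched in a few lines: perturb each time step of an input series, record how much the model output changes, and rank time steps by the size of that change. In the sketch below, the "model" is a hypothetical stand-in (a fixed weighted sum over time steps), not an actual Transformer from this work.

```python
import numpy as np

# Hidden "true" importances of each time step; the model is a placeholder
# for a trained time-series network's forward pass.
weights = np.array([0.1, 0.5, 0.05, 0.3, 0.05])

def model(x):
    # Stand-in for a trained model: a weighted sum over time steps.
    return float(np.dot(weights, x))

def perturbation_sensitivity(model, x, eps=1e-3):
    """Score each time step by the output change under a small perturbation."""
    base = model(x)
    scores = np.empty(len(x))
    for t in range(len(x)):
        x_pert = x.copy()
        x_pert[t] += eps          # perturb one time step
        scores[t] = abs(model(x_pert) - base) / eps
    return scores

x = np.array([0.2, -0.4, 1.0, 0.7, -0.1])
scores = perturbation_sensitivity(model, x)
ranking = np.argsort(scores)[::-1]  # time steps, most to least important
```

For this linear stand-in the scores recover the weights, so the ranking puts time step 1 (weight 0.5) first; for a real Transformer the same loop would simply call the trained model's forward pass.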
Related papers
- Contrastive Factor Analysis [70.02770079785559]
This paper introduces a novel Contrastive Factor Analysis framework.
It aims to leverage factor analysis's advantageous properties within the realm of contrastive learning.
To further leverage the interpretability properties of non-negative factor analysis, it is extended to a non-negative version.
arXiv Detail & Related papers (2024-07-31T16:52:00Z)
- Explainability of Machine Learning Models under Missing Data [2.880748930766428]
Missing data is a prevalent issue that can significantly impair model performance and interpretability.
This paper briefly summarizes the development of the field of missing data and investigates the effects of various imputation methods on the calculation of Shapley values.
arXiv Detail & Related papers (2024-06-29T11:31:09Z)
- How are Prompts Different in Terms of Sensitivity? [50.67313477651395]
We present a comprehensive prompt analysis based on the sensitivity of a function.
We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output.
We introduce sensitivity-aware decoding which incorporates sensitivity estimation as a penalty term in the standard greedy decoding.
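The penalty-term idea described above can be sketched as follows; the candidate scores, sensitivity estimates, and penalty weight are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sensitivity-aware greedy decoding (toy sketch): subtract a scaled
# sensitivity estimate from each candidate's score before taking the argmax.
scores = np.array([2.0, 1.8, 0.5])       # model scores per candidate token
sensitivity = np.array([1.5, 0.1, 0.2])  # assumed sensitivity per candidate
lam = 0.5                                # penalty weight (assumed)

penalized = scores - lam * sensitivity
choice = int(np.argmax(penalized))       # plain greedy would pick candidate 0
```

Here the penalty flips the greedy choice from the highest-scoring but most sensitive candidate to a more stable one, which is the intuition behind the method.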
arXiv Detail & Related papers (2023-11-13T10:52:01Z)
- Towards stable real-world equation discovery with assessing differentiating quality influence [52.2980614912553]
We propose alternatives to the commonly used finite differences-based method.
We evaluate these methods in terms of applicability to problems, similar to the real ones, and their ability to ensure the convergence of equation discovery algorithms.
arXiv Detail & Related papers (2023-11-09T23:32:06Z)
- Sensitivity-Aware Amortized Bayesian Inference [8.753065246797561]
Sensitivity analyses reveal the influence of various modeling choices on the outcomes of statistical analyses.
We propose sensitivity-aware amortized Bayesian inference (SA-ABI), a multifaceted approach to integrate sensitivity analyses into simulation-based inference with neural networks.
We demonstrate the effectiveness of our method in applied modeling problems, ranging from disease outbreak dynamics and global warming thresholds to human decision-making.
arXiv Detail & Related papers (2023-10-17T10:14:10Z)
- Sensitivity Analysis of High-Dimensional Models with Correlated Inputs [0.0]
We demonstrate that the sensitivity of correlated parameters can differ not only in magnitude; even the sign of the derivative-based index can be inverted.
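A toy calculation (an assumed example, not taken from the paper) shows how correlation between inputs can flip the sign of a derivative-based sensitivity index:

```python
# Toy model: f(x1, x2) = x1 + 2*x2.
def f(x1, x2):
    return x1 + 2.0 * x2

# Partial derivative of f with respect to x1, holding x2 fixed:
partial_x1 = 1.0

# Suppose x2 is strongly negatively correlated with x1, roughly x2 = -x1,
# so moving x1 drags x2 along with slope -1.
dx2_dx1 = -1.0

# Total derivative of f along the correlation structure:
total_x1 = partial_x1 + 2.0 * dx2_dx1  # 1 + 2*(-1) = -1
```

The uncorrelated index is positive while the correlation-aware one is negative, illustrating the sign inversion the summary describes.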
arXiv Detail & Related papers (2023-05-31T14:48:54Z)
- Metric Tools for Sensitivity Analysis with Applications to Neural Networks [0.0]
Explainable Artificial Intelligence (XAI) aims to provide interpretations for predictions made by Machine Learning models.
In this paper, a theoretical framework is proposed to study sensitivities of ML models using metric techniques.
A complete family of new quantitative metrics called $\alpha$-curves is extracted.
arXiv Detail & Related papers (2023-05-03T18:10:21Z)
- Causal Intervention Improves Implicit Sentiment Analysis [67.43379729099121]
We propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).
We first review sentiment analysis from a causal perspective and analyze the confounders existing in this task.
Then, we introduce an instrumental variable to eliminate the confounding causal effects, thus extracting the pure causal effect between sentence and sentiment.
arXiv Detail & Related papers (2022-08-19T13:17:57Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.