Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
- URL: http://arxiv.org/abs/2202.03759v1
- Date: Tue, 8 Feb 2022 10:06:13 GMT
- Title: Time to Focus: A Comprehensive Benchmark Using Time Series Attribution Methods
- Authors: Dominique Mercier, Jwalin Bhatt, Andreas Dengel, Sheraz Ahmed
- Abstract summary: The paper focuses on time series analysis and benchmarks several state-of-the-art attribution methods.
The presented experiments involve gradient-based and perturbation-based attribution methods.
The findings accentuate that the best-suited attribution method strongly depends on the desired use case.
- Score: 4.9449660544238085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last decade, neural networks have made a huge impact in both
industry and research thanks to their ability to extract meaningful features
from imprecise or complex data and to achieve superhuman performance in
several domains. However, due to their lack of transparency, the use of these
networks is hampered in safety-critical areas, where such transparency is
required by law. Recently, several methods have been proposed to open this
black box by providing interpretations of the predictions made by these
models. This paper focuses on time series analysis and benchmarks several
state-of-the-art attribution methods that compute explanations for
convolutional classifiers. The presented experiments cover gradient-based and
perturbation-based attribution methods. A detailed analysis shows that
perturbation-based approaches are superior with respect to Sensitivity and
the occlusion game, and they tend to produce explanations with higher
continuity. Conversely, gradient-based techniques excel in runtime and
Infidelity. In addition, a validation of the methods' dependence on the
trained model, their feasible application domains, and their individual
characteristics is included. The findings accentuate that the best-suited
attribution method strongly depends on the desired use case. Neither category
of attribution methods nor any single approach showed outstanding performance
across all aspects.
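
To make the contrast between the two method families concrete, below is a minimal sketch, not the authors' benchmark code, of a gradient-based saliency attribution and a perturbation-based occlusion attribution for a 1D convolutional time-series classifier. The toy model `TinyConvClassifier`, the PyTorch framework, the occlusion window size, and the input shapes are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy 1D convolutional classifier; an illustrative stand-in for the benchmarked models.
class TinyConvClassifier(nn.Module):
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.head(self.features(x).squeeze(-1))

def saliency_attribution(model, x, target):
    """Gradient-based: |d score_target / d x|, computed with a single backward pass."""
    x = x.clone().requires_grad_(True)
    score = model(x)[:, target].sum()
    score.backward()
    return x.grad.abs()

def occlusion_attribution(model, x, target, window=8):
    """Perturbation-based: drop in the target score when a time window is zeroed out."""
    with torch.no_grad():
        base = model(x)[:, target]
        attr = torch.zeros_like(x)
        for start in range(0, x.shape[-1], window):
            occluded = x.clone()
            occluded[..., start:start + window] = 0.0
            drop = base - model(occluded)[:, target]
            attr[..., start:start + window] = drop.view(-1, 1, 1)
        return attr

if __name__ == "__main__":
    model = TinyConvClassifier().eval()
    series = torch.randn(1, 1, 128)  # one univariate series of length 128
    print(saliency_attribution(model, series, target=0).shape)   # (1, 1, 128)
    print(occlusion_attribution(model, series, target=0).shape)  # (1, 1, 128)
```

The occlusion variant needs one forward pass per window, while the saliency variant needs only a single backward pass; this mirrors the runtime advantage of gradient-based methods reported in the abstract.
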
Related papers
- Toward Understanding the Disagreement Problem in Neural Network Feature Attribution [0.8057006406834466]
Neural networks have demonstrated a remarkable ability to discern intricate patterns and relationships from raw data.
Understanding the inner workings of these black-box models remains challenging, yet crucial for high-stakes decisions.
Our work addresses this confusion by investigating the explanations' fundamental and distributional behavior.
arXiv Detail & Related papers (2024-04-17T12:45:59Z)
- MFABA: A More Faithful and Accelerated Boundary-based Attribution Method for Deep Neural Networks [69.28125286491502]
We introduce MFABA, an attribution algorithm that adheres to axioms.
Results demonstrate its superiority, running over 101.5142 times faster than state-of-the-art attribution algorithms.
arXiv Detail & Related papers (2023-12-21T07:48:15Z)
- Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection [57.646582245834324]
We propose a simple yet effective deepfake detector called LSDA.
It is based on the idea that representations exposed to a wider variety of forgeries should learn a more generalizable decision boundary.
We show that our proposed method is surprisingly effective and transcends state-of-the-art detectors across several widely used benchmarks.
arXiv Detail & Related papers (2023-11-19T09:41:10Z)
- Implicit Variational Inference for High-Dimensional Posteriors [7.924706533725115]
In variational inference, the benefits of Bayesian models rely on accurately capturing the true posterior distribution.
We propose using neural samplers that specify implicit distributions, which are well-suited for approximating complex multimodal and correlated posteriors.
Our approach introduces novel bounds for approximate inference using implicit distributions by locally linearising the neural sampler.
arXiv Detail & Related papers (2023-10-10T14:06:56Z)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z)
- Pixel-wise Gradient Uncertainty for Convolutional Neural Networks applied to Out-of-Distribution Segmentation [0.43512163406552007]
We present a method for obtaining uncertainty scores from pixel-wise loss gradients which can be computed efficiently during inference.
Our experiments show the ability of our method to identify wrong pixel classifications and to estimate prediction quality at negligible computational overhead.
arXiv Detail & Related papers (2023-03-13T08:37:59Z)
- Towards Better Understanding Attribution Methods [77.1487219861185]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We also propose a post-processing smoothing step that significantly improves the performance of some attribution methods.
arXiv Detail & Related papers (2022-05-20T20:50:17Z)
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis [6.15738282053772]
We introduce EVA, the first explainability method guaranteed to have an exhaustive exploration of a perturbation space.
We leverage the beneficial properties of verified perturbation analysis to efficiently characterize the input variables that are most likely to drive the model decision.
arXiv Detail & Related papers (2022-02-15T21:13:55Z)
- Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z)
- An Effective Baseline for Robustness to Distributional Shift [5.627346969563955]
Refraining from confidently predicting when faced with categories of inputs different from those seen during training is an important requirement for the safe deployment of deep learning systems.
We present a simple, but highly effective approach to deal with out-of-distribution detection that uses the principle of abstention.
arXiv Detail & Related papers (2021-05-15T00:46:11Z)
- Cross-domain Object Detection through Coarse-to-Fine Feature Adaptation [62.29076080124199]
This paper proposes a novel coarse-to-fine feature adaptation approach to cross-domain object detection.
At the coarse-grained stage, foreground regions are extracted by adopting the attention mechanism, and aligned according to their marginal distributions.
At the fine-grained stage, we conduct conditional distribution alignment of foregrounds by minimizing the distance of global prototypes with the same category but from different domains.
arXiv Detail & Related papers (2020-03-23T13:40:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.