Baseline Computation for Attribution Methods Based on Interpolated
Inputs
- URL: http://arxiv.org/abs/2204.06120v1
- Date: Wed, 13 Apr 2022 00:11:45 GMT
- Title: Baseline Computation for Attribution Methods Based on Interpolated
Inputs
- Authors: Miguel Lerma, Mirtha Lucas
- Abstract summary: We discuss a way to find a well-behaved baseline for attribution methods that work by feeding a neural network a sequence of interpolated inputs between two given inputs.
Then, we test it with our novel Riemann-Stieltjes Integrated Gradient-weighted Class Activation Mapping (RSI-Grad-CAM) attribution method.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We discuss a way to find a well-behaved baseline for attribution methods that
work by feeding a neural network a sequence of interpolated inputs between
two given inputs. Then, we test it with our novel Riemann-Stieltjes Integrated
Gradient-weighted Class Activation Mapping (RSI-Grad-CAM) attribution method.
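The interpolated-inputs idea above can be illustrated with a minimal Integrated-Gradients-style sketch (not the paper's RSI-Grad-CAM method itself): gradients of the model output are averaged along the straight path from a baseline to the input, then scaled by the input difference. The gradient oracle `f_grad` and the toy model are illustrative assumptions.

```python
import numpy as np

def interpolated_input_attribution(f_grad, x, baseline, steps=50):
    """Average the output gradients along the straight path from
    `baseline` to `x` (left Riemann sum), then scale by (x - baseline)."""
    alphas = np.linspace(0.0, 1.0, steps + 1)
    grads = np.stack([f_grad(baseline + a * (x - baseline)) for a in alphas])
    avg_grad = grads[:-1].mean(axis=0)  # left Riemann-sum approximation
    return (x - baseline) * avg_grad

# Toy model: f(x) = sum(x**2), whose gradient is 2x.
f_grad = lambda x: 2.0 * x
x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attr = interpolated_input_attribution(f_grad, x, baseline, steps=1000)
# For this f, the exact path integral gives x**2 - baseline**2 = [1, 4].
```

The choice of `baseline` is exactly the quantity the paper proposes to compute in a principled way; the zero vector here is only a common default.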
Related papers
- B-PL-PINN: Stabilizing PINN Training with Bayesian Pseudo Labeling [9.503773054285558]
Training physics-informed neural networks (PINNs) for forward problems often suffers from severe convergence issues. We suggest replacing the ensemble with a Bayesian PINN, and consensus with an evaluation of the PINN's posterior variance. Our experiments show that this mathematically principled approach outperforms the ensemble on a set of benchmark problems.
arXiv Detail & Related papers (2025-07-02T13:44:31Z) - Active Learning Classification from a Signal Separation Perspective [0.0]
We propose a novel clustering and classification framework inspired by the principles of signal separation.
We validate our method on real-world hyperspectral datasets Salinas and Indian Pines.
arXiv Detail & Related papers (2025-02-23T03:47:03Z) - Calabi-Yau metrics through Grassmannian learning and Donaldson's algorithm [5.158605878911773]
We present a novel approach to obtaining Ricci-flat approximations to Kähler metrics.
We use gradient descent on the Grassmannian manifold to identify an efficient subspace of sections.
We implement our methods on the Dwork family of threefolds, commenting on the behaviour at different points in moduli space.
arXiv Detail & Related papers (2024-10-15T05:08:43Z) - Convolutional autoencoder-based multimodal one-class classification [80.52334952912808]
One-class classification refers to approaches of learning using data from a single class only.
We propose a deep learning one-class classification method suitable for multimodal data.
arXiv Detail & Related papers (2023-09-25T12:31:18Z) - Learning Dictionaries from Physical-Based Interpolation for Water
Network Leak Localization [1.747820331822631]
This article presents a leak localization methodology based on state estimation and learning.
The proposed technique exploits the physics of the interconnections between hydraulic heads of neighboring nodes in water distribution networks.
arXiv Detail & Related papers (2023-04-21T13:07:08Z) - Classified as unknown: A novel Bayesian neural network [0.0]
We develop a new efficient Bayesian learning algorithm for fully connected neural networks.
We generalize an earlier single-perceptron algorithm for binary classification to multi-layer perceptrons for multi-class classification.
arXiv Detail & Related papers (2023-01-31T04:27:09Z) - Interpolation-based Correlation Reduction Network for Semi-Supervised
Graph Learning [49.94816548023729]
We propose a novel graph contrastive learning method, termed Interpolation-based Correlation Reduction Network (ICRN)
In our method, we improve the discriminative capability of the latent feature by enlarging the margin of decision boundaries.
By combining the two settings, we extract rich supervision information from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z) - Boundary Attributions Provide Normal (Vector) Explanations [27.20904776964045]
Boundary Attribution (BA) is a new explanation method that grounds attributions in a network's local decision boundaries.
BA involves computing normal vectors of the local decision boundaries for the target input.
We prove two theorems for ReLU networks: BA for randomized smoothed networks or robustly trained networks is much closer to non-boundary attribution methods than it is for standard networks.
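The normal-vector idea in the blurb above can be sketched in the simplest setting (a toy two-class linear model, where the paper's claims for ReLU networks are not reproduced): the decision boundary between two classes is the set where the logit difference vanishes, and its normal is the gradient of that difference. All names here are illustrative.

```python
import numpy as np

# Toy two-class linear model: logits = W @ x + b. The decision boundary
# is {x : (W[0] - W[1]) @ x + (b[0] - b[1]) = 0}; the gradient of the
# logit difference, W[0] - W[1], is an (unnormalized) normal vector.
W = np.array([[2.0, -1.0],
              [0.5,  1.5]])
b = np.array([0.1, -0.2])

def boundary_normal(W):
    n = W[0] - W[1]               # grad_x (f_0 - f_1) for a linear model
    return n / np.linalg.norm(n)  # unit normal to the decision boundary

n = boundary_normal(W)
```

For a ReLU network, the same computation applies piecewise: within each linear region the logit difference is affine, so its gradient at the input gives the local boundary normal.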
arXiv Detail & Related papers (2021-03-20T22:36:39Z) - Scalable Bayesian Inverse Reinforcement Learning [93.27920030279586]
We introduce Approximate Variational Reward Imitation Learning (AVRIL)
Our method addresses the ill-posed nature of the inverse reinforcement learning problem.
Applying our method to real medical data alongside classic control simulations, we demonstrate Bayesian reward inference in environments beyond the scope of current methods.
arXiv Detail & Related papers (2021-02-12T12:32:02Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Tractable Approximate Gaussian Inference for Bayesian Neural Networks [1.933681537640272]
We propose an analytical method for performing tractable approximate Gaussian inference (TAGI) in Bayesian neural networks.
The method has a computational complexity of $\mathcal{O}(n)$ with respect to the number of parameters $n$, and the tests performed on regression and classification benchmarks confirm that, for the same network architecture, it matches the performance of existing methods relying on gradient backpropagation.
arXiv Detail & Related papers (2020-04-20T13:37:08Z) - DEPARA: Deep Attribution Graph for Deep Knowledge Transferability [91.06106524522237]
We propose the DEeP Attribution gRAph (DEPARA) to investigate the transferability of knowledge learned from PR-DNNs.
In DEPARA, nodes correspond to the inputs and are represented by their vectorized attribution maps with regards to the outputs of the PR-DNN.
The knowledge transferability of two PR-DNNs is measured by the similarity of their corresponding DEPARAs.
arXiv Detail & Related papers (2020-03-17T02:07:50Z) - Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple literature-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z) - Patch-level Neighborhood Interpolation: A General and Effective
Graph-based Regularization Strategy [77.34280933613226]
We propose a general regularizer called Patch-level Neighborhood Interpolation (Pani) that constructs non-local representations in the computation of neural networks.
Our proposal explicitly constructs patch-level graphs in different layers and then linearly interpolates neighborhood patch features, serving as a general and effective regularization strategy.
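The linear interpolation of neighborhood patch features described above can be sketched in a toy form (this is an illustrative simplification, not the Pani method itself; the function name, the nearest-neighbor rule, and the mixing weight `lam` are all assumptions):

```python
import numpy as np

def interpolate_patches(patches, k=2, lam=0.5):
    """Mix each patch feature with the mean of its k nearest neighbor
    patches: (1 - lam) * patch + lam * neighbor_mean."""
    # patches: (num_patches, dim) feature vectors
    d = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # exclude each patch from its own neighbors
    nn = np.argsort(d, axis=1)[:, :k]      # indices of the k nearest neighbors
    neighbor_mean = patches[nn].mean(axis=1)
    return (1 - lam) * patches + lam * neighbor_mean

patches = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mixed = interpolate_patches(patches, k=2, lam=0.5)
```

In the actual method this interpolation is applied to patch-level graphs built in different layers of the network; the sketch only shows the linear-mixing step on a single set of patch features.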
arXiv Detail & Related papers (2019-11-21T06:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.