Robust Explanation Constraints for Neural Networks
- URL: http://arxiv.org/abs/2212.08507v1
- Date: Fri, 16 Dec 2022 14:40:25 GMT
- Title: Robust Explanation Constraints for Neural Networks
- Authors: Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
- Abstract summary: Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs.
Our training method is the only method able to learn neural networks with certificates of explanation robustness across all six datasets tested.
- Score: 33.14373978947437
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Post-hoc explanation methods are used with the intent of providing insights
about neural networks and are sometimes said to help engender trust in their
outputs. However, popular explanation methods have been found to be fragile to
minor perturbations of input features or model parameters. Relying on
constraint relaxation techniques from non-convex optimization, we develop a
method that upper-bounds the largest change an adversary can make to a
gradient-based explanation via bounded manipulation of either the input
features or model parameters. By propagating a compact input or parameter set
as symbolic intervals through the forwards and backwards computations of the
neural network we can formally certify the robustness of gradient-based
explanations. Our bounds are differentiable, hence we can incorporate provable
explanation robustness into neural network training. Empirically, our method
surpasses the robustness provided by previous heuristic approaches. We find
that our training method is the only method able to learn neural networks with
certificates of explanation robustness across all six datasets tested.
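The certification idea described in the abstract can be illustrated concretely: propagate an interval around the input through the forward pass to bound the pre-activations, then through the backward pass to bound the input gradient, which is the explanation being certified. The NumPy snippet below is a minimal sketch of this interval-propagation style for a one-hidden-layer ReLU network; the weights, the epsilon radius, and the helper name gradient_bounds are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): interval bounds on the input gradient
# of a one-hidden-layer ReLU network over an L_inf ball around an input.
# Weights, epsilon, and all names below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 4, 16
W1 = rng.normal(size=(d_hid, d_in))   # hidden-layer weights
b1 = rng.normal(size=d_hid)
w2 = rng.normal(size=d_hid)           # scalar-output weights

def gradient_bounds(x, eps):
    """Bound df/dx over {x' : ||x' - x||_inf <= eps} with interval arithmetic."""
    # Forward pass: bound the pre-activations z = W1 x' + b1 over the input box.
    x_lo, x_hi = x - eps, x + eps
    W_pos, W_neg = np.clip(W1, 0, None), np.clip(W1, None, 0)
    z_lo = W_pos @ x_lo + W_neg @ x_hi + b1
    z_hi = W_pos @ x_hi + W_neg @ x_lo + b1
    # Backward pass: the ReLU derivative at each unit is 1 if certainly active,
    # 0 if certainly inactive, and anywhere in [0, 1] otherwise.
    d_lo = (z_lo > 0).astype(float)
    d_hi = (z_hi > 0).astype(float)
    # Input gradient: g_j = sum_i W1[i, j] * d_i * w2[i]; only d_i is an interval,
    # so the bound is a sum of elementwise mins and maxes over fixed coefficients.
    c = W1 * w2[:, None]              # c[i, j] = W1[i, j] * w2[i]
    g_lo = np.minimum(c * d_lo[:, None], c * d_hi[:, None]).sum(axis=0)
    g_hi = np.maximum(c * d_lo[:, None], c * d_hi[:, None]).sum(axis=0)
    return g_lo, g_hi

x0 = rng.normal(size=d_in)
lo, hi = gradient_bounds(x0, eps=0.05)
print("certified per-feature gradient range:\n", np.stack([lo, hi], axis=1))
```

Because these lower and upper bounds are built from matrix products, mins, and maxes, they are differentiable almost everywhere in the network weights, which is what allows provable explanation robustness to be folded into the training objective as the abstract describes.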
Related papers
- A Rate-Distortion View of Uncertainty Quantification [36.85921945174863]
In supervised learning, understanding an input's proximity to the training data can help a model decide whether it has sufficient evidence for reaching a reliable prediction.
We introduce Distance Aware Bottleneck (DAB), a new method for enriching deep neural networks with this property.
arXiv Detail & Related papers (2024-06-16T01:33:22Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z) - Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
arXiv Detail & Related papers (2022-11-26T07:41:48Z) - Towards Practical Control of Singular Values of Convolutional Layers [65.25070864775793]
Convolutional neural networks (CNNs) are easy to train, but their essential properties, such as generalization error and adversarial robustness, are hard to control.
Recent research demonstrated that singular values of convolutional layers significantly affect such elusive properties.
We offer a principled approach to alleviating constraints of the prior art at the expense of an insignificant reduction in layer expressivity.
arXiv Detail & Related papers (2022-11-24T19:09:44Z) - Look beyond labels: Incorporating functional summary information in Bayesian neural networks [11.874130244353253]
We present a simple approach to incorporate summary information about the predicted probability.
The available summary information is incorporated as augmented data and modeled with a Dirichlet process.
We show how the method can inform the model about task difficulty or class imbalance.
arXiv Detail & Related papers (2022-07-04T07:06:45Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Robust Explainability: A Tutorial on Gradient-Based Attribution Methods for Deep Neural Networks [1.5854438418597576]
We present gradient-based interpretability methods for explaining decisions of deep neural networks.
We discuss the role that adversarial robustness plays in having meaningful explanations.
We conclude with the future directions for research in the area at the convergence of robustness and explainability.
arXiv Detail & Related papers (2021-07-23T18:06:29Z) - Practical Convex Formulation of Robust One-hidden-layer Neural Network Training [12.71266194474117]
We show that the training of a one-hidden-layer, scalar-output fully-connected ReLU neural network can be reformulated as a finite-dimensional convex program.
We derive a convex optimization approach to efficiently solve the "adversarial training" problem.
Our method can be applied to binary classification and regression, and provides an alternative to the current adversarial training methods.
arXiv Detail & Related papers (2021-05-25T22:06:27Z) - Towards Robust Explanations for Deep Neural Networks [5.735035463793008]
We develop a unified theoretical framework for deriving bounds on the maximal manipulability of a model.
We present three different techniques to boost robustness against manipulation.
arXiv Detail & Related papers (2020-12-18T18:29:09Z) - Forgetting Outside the Box: Scrubbing Deep Networks of Information Accessible from Input-Output Observations [143.3053365553897]
We describe a procedure for removing dependency on a cohort of training data from a trained deep network.
We introduce a new bound on how much information can be extracted per query about the forgotten cohort.
We exploit the connections between the activation and weight dynamics of a DNN inspired by Neural Tangent Kernels to compute the information in the activations.
arXiv Detail & Related papers (2020-03-05T23:17:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.