Towards More Robust Interpretation via Local Gradient Alignment
- URL: http://arxiv.org/abs/2211.15900v1
- Date: Tue, 29 Nov 2022 03:38:28 GMT
- Title: Towards More Robust Interpretation via Local Gradient Alignment
- Authors: Sunghwan Joo, Seokhyeon Jeong, Juyeon Heo, Adrian Weller and Taesup
Moon
- Abstract summary: We show that for every non-negative homogeneous neural network, a naive $\ell_2$-robust criterion for gradients is \textit{not} normalization invariant.
We propose to combine both $\ell_2$ and cosine distance-based criteria as regularization terms to leverage the advantages of both in aligning the local gradient.
We experimentally show that models trained with our method produce much more robust interpretations on CIFAR-10 and ImageNet-100.
- Score: 37.464250451280336
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural network interpretation methods, particularly feature attribution
methods, are known to be fragile with respect to adversarial input
perturbations. To address this, several methods for enhancing the local
smoothness of the gradient while training have been proposed for attaining
\textit{robust} feature attributions. However, the lack of considering the
normalization of the attributions, which is essential in their visualizations,
has been an obstacle to understanding and improving the robustness of feature
attribution methods. In this paper, we provide new insights by taking such
normalization into account. First, we show that for every non-negative
homogeneous neural network, a naive $\ell_2$-robust criterion for gradients is
\textit{not} normalization invariant, which means that two functions with the
same normalized gradient can have different values of the criterion. Second, we formulate a
normalization invariant cosine distance-based criterion and derive its upper
bound, which gives insight for why simply minimizing the Hessian norm at the
input, as has been done in previous work, is not sufficient for attaining
robust feature attribution. Finally, we propose to combine both $\ell_2$ and
cosine distance-based criteria as regularization terms to leverage the
advantages of both in aligning the local gradient. As a result, we
experimentally show that models trained with our method produce much more
robust interpretations on CIFAR-10 and ImageNet-100 without significantly
hurting the accuracy, compared to the recent baselines. To the best of our
knowledge, this is the first work to verify the robustness of interpretation on
a larger-scale dataset beyond CIFAR-10, thanks to the computational efficiency
of our method.
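To make the first claim of the abstract concrete, the following short scaling argument (our notation: a rescaling factor $c$ and a perturbation $\delta$, not taken from the paper) shows why an unnormalized $\ell_2$ criterion and a cosine criterion behave differently. If $g = c\,f$ for some $c > 0$ (for instance, obtained by rescaling the final bias-free layer of a ReLU network), then for any input $x$ and perturbation $\delta$,
$$\nabla g(x) = c\,\nabla f(x), \qquad \|\nabla g(x+\delta)-\nabla g(x)\|_2 = c\,\|\nabla f(x+\delta)-\nabla f(x)\|_2,$$
$$\frac{\nabla g(x)}{\|\nabla g(x)\|_2} = \frac{\nabla f(x)}{\|\nabla f(x)\|_2}, \qquad \cos\big(\nabla g(x+\delta),\nabla g(x)\big) = \cos\big(\nabla f(x+\delta),\nabla f(x)\big).$$
Hence $f$ and $g$ yield identical normalized gradient (attribution) maps, yet the naive $\ell_2$ criterion assigns them different values, while a cosine distance-based criterion is unchanged by the rescaling.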
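The same point can be checked numerically. Below is a minimal PyTorch sketch (our own toy model; the perturbation size and rescaling factor are illustrative assumptions, and the code is not from the paper) showing that rescaling a bias-free ReLU network changes the $\ell_2$ distance between local gradients but not their cosine distance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy bias-free ReLU MLP: rescaling its final linear layer by c > 0
# multiplies the whole function, and hence its input gradient, by c.
f = nn.Sequential(
    nn.Linear(10, 32, bias=False), nn.ReLU(),
    nn.Linear(32, 1, bias=False),
)

def input_grad(model, x):
    # Gradient of the scalar output with respect to the input x.
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.detach()

x = torch.randn(10)
delta = 0.01 * torch.randn(10)   # small local perturbation (illustrative)

g_x, g_xd = input_grad(f, x), input_grad(f, x + delta)

c = 10.0                          # positive rescaling factor (illustrative)
with torch.no_grad():
    f[2].weight.mul_(c)           # now the network computes c * f

h_x, h_xd = input_grad(f, x), input_grad(f, x + delta)

# The naive l2 criterion is scaled by c ...
print((g_xd - g_x).norm().item(), (h_xd - h_x).norm().item())
# ... while the cosine distance (i.e., the normalized gradient) is unchanged.
print(1 - F.cosine_similarity(g_x, g_xd, dim=0).item(),
      1 - F.cosine_similarity(h_x, h_xd, dim=0).item())
```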
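The combined criterion described at the end of the abstract can be turned into a training-time regularizer along the following lines. This is a minimal sketch under our own assumptions (random perturbation sampling, the weighting coefficients lam_l2 and lam_cos, and taking gradients of the cross-entropy loss rather than of a class logit); it is not the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

def gradient_alignment_penalty(model, x, y, eps=8 / 255, lam_l2=1.0, lam_cos=1.0):
    """Penalize both the l2 and the cosine distance between local input gradients.

    Sketch only: the perturbation scheme and the weights are illustrative choices.
    """
    x = x.clone().requires_grad_(True)
    delta = eps * torch.randn_like(x)  # random local perturbation

    # Input gradients of the loss at x and at x + delta.
    # create_graph=True so the penalty itself can be backpropagated (double backward).
    g_x = torch.autograd.grad(F.cross_entropy(model(x), y), x, create_graph=True)[0]

    x_pert = (x + delta).detach().requires_grad_(True)
    g_p = torch.autograd.grad(F.cross_entropy(model(x_pert), y), x_pert, create_graph=True)[0]

    g_x, g_p = g_x.flatten(1), g_p.flatten(1)
    l2_term = (g_x - g_p).pow(2).sum(dim=1).mean()
    cos_term = (1 - F.cosine_similarity(g_x, g_p, dim=1)).mean()
    return lam_l2 * l2_term + lam_cos * cos_term

# Illustrative use inside a training step:
#   loss = F.cross_entropy(model(x), y) + gradient_alignment_penalty(model, x, y)
#   loss.backward(); optimizer.step()
```

In this sketch the $\ell_2$ term keeps the gradient magnitudes from drifting under small perturbations, while the cosine term directly aligns the gradient directions that determine the normalized attribution map.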
Related papers
- Unlearning-based Neural Interpretations [51.99182464831169]
We show that current baselines defined using static functions are biased, fragile and manipulable.
We propose UNI to compute an (un)learnable, debiased and adaptive baseline by perturbing the input towards an unlearning direction of steepest ascent.
arXiv Detail & Related papers (2024-10-10T16:02:39Z) - Directional Smoothness and Gradient Methods: Convergence and Adaptivity [16.779513676120096]
We develop new sub-optimality bounds for gradient descent that depend on the conditioning of the objective along the path of optimization.
Key to our proofs is directional smoothness, a measure of gradient variation that we use to develop upper-bounds on the objective.
We prove that the Polyak step-size and normalized GD obtain fast, path-dependent rates despite using no knowledge of the directional smoothness.
arXiv Detail & Related papers (2024-03-06T22:24:05Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Neural Gradient Learning and Optimization for Oriented Point Normal
Estimation [53.611206368815125]
We propose a deep learning approach to learn gradient vectors with consistent orientation from 3D point clouds for normal estimation.
We learn an angular distance field based on local plane geometry to refine the coarse gradient vectors.
Our method efficiently conducts global gradient approximation while achieving better accuracy and generalization ability in local feature description.
arXiv Detail & Related papers (2023-09-17T08:35:11Z) - Sobolev Space Regularised Pre Density Models [51.558848491038916]
We propose a new approach to non-parametric density estimation that is based on regularizing a Sobolev norm of the density.
This method is statistically consistent, and makes the inductive bias of the model clear and interpretable.
arXiv Detail & Related papers (2023-07-25T18:47:53Z) - Clip21: Error Feedback for Gradient Clipping [8.979288425347702]
We design Clip21 -- the first provably effective and practically useful error feedback mechanism for distributed methods with gradient clipping.
Our method converges faster in practice than competing methods.
arXiv Detail & Related papers (2023-05-30T10:41:42Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with a dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Explicit Regularization of Stochastic Gradient Methods through Duality [9.131027490864938]
We propose randomized Dykstra-style algorithms based on randomized dual coordinate ascent.
For accelerated coordinate descent, we obtain a new algorithm that has better convergence properties than existing gradient methods in the interpolating regime.
arXiv Detail & Related papers (2020-03-30T20:44:56Z) - Towards Better Understanding of Adaptive Gradient Algorithms in
Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex-nonconcave min-max problems.
Our experiments show that the advantage of adaptive gradient algorithms over non-adaptive ones in GAN training can be observed empirically.
arXiv Detail & Related papers (2019-12-26T22:10:10Z)