Clustering Effect of (Linearized) Adversarial Robust Models
- URL: http://arxiv.org/abs/2111.12922v1
- Date: Thu, 25 Nov 2021 05:51:03 GMT
- Title: Clustering Effect of (Linearized) Adversarial Robust Models
- Authors: Yang Bai, Xin Yan, Yong Jiang, Shu-Tao Xia, Yisen Wang
- Abstract summary: We propose a novel understanding of adversarial robustness and apply it to further tasks, including domain adaptation and robustness boosting.
Experimental evaluations demonstrate the soundness and effectiveness of our proposed clustering strategy.
- Score: 60.25668525218051
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial robustness has received increasing attention alongside the study
of adversarial examples. Existing works show that robust models not only gain
robustness against various adversarial attacks but also boost performance on
some downstream tasks. However, the underlying mechanism of adversarial
robustness is still unclear. In this paper, we interpret adversarial
robustness from the perspective of linear components and find that
comprehensively robust models share certain statistical properties.
Specifically, robust models show an obvious hierarchical clustering effect on
their linearized sub-networks, i.e., the networks obtained when all non-linear
components (e.g., batch normalization, max pooling, or activation layers) are
removed or replaced. Based on these observations, we propose a novel
understanding of adversarial robustness and apply it to further tasks,
including domain adaptation and robustness boosting. Experimental evaluations
demonstrate the soundness and effectiveness of our proposed clustering
strategy.
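The linearization step described in the abstract can be made concrete. The sketch below is a minimal, hypothetical PyTorch illustration, not the authors' released code: it drops activation and batch-normalization layers (replacing them with identity maps) and swaps max pooling for its linear average-pooling counterpart; the paper's exact replacement rules may differ.

```python
# Minimal sketch of extracting a linearized sub-network, assuming a
# PyTorch model; the replacement rules here are illustrative assumptions.
import copy

import torch.nn as nn


def linearize_(module: nn.Module) -> None:
    """Replace non-linear children with linear counterparts, in place."""
    for name, child in module.named_children():
        if isinstance(child, (nn.ReLU, nn.BatchNorm1d, nn.BatchNorm2d)):
            # Remove activations and batch normalization outright.
            setattr(module, name, nn.Identity())
        elif isinstance(child, nn.MaxPool2d):
            # Max pooling is non-linear; average pooling is a linear
            # stand-in with the same output shape.
            setattr(module, name, nn.AvgPool2d(child.kernel_size,
                                               child.stride, child.padding))
        else:
            linearize_(child)  # recurse into Sequential blocks, etc.


model = nn.Sequential(  # stand-in for any trained network
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.MaxPool2d(2), nn.Flatten(), nn.Linear(16 * 16 * 16, 10))
linear_model = copy.deepcopy(model)
linearize_(linear_model)  # now a composition of purely linear (affine) maps
```

The abstract does not specify what is clustered; as one hedged illustration, a fully linearized model computes a single affine map, so each class's row of that map can be recovered via input gradients and handed to off-the-shelf hierarchical clustering (reusing `linear_model` from the sketch above):

```python
# Hypothetical probe of the hierarchical clustering effect; the quantity
# the paper actually clusters is not stated in the abstract above.
import numpy as np
import torch
from scipy.cluster.hierarchy import linkage


def class_rows(model, input_shape=(1, 3, 32, 32)):
    """Recover each class's row of the end-to-end affine map via gradients."""
    x = torch.zeros(input_shape, requires_grad=True)
    logits = model(x)  # the Jacobian of an affine map is input-independent
    rows = []
    for c in range(logits.shape[1]):
        g, = torch.autograd.grad(logits[0, c], x, retain_graph=True)
        rows.append(g.flatten().detach().numpy())
    return np.stack(rows)


Z = linkage(class_rows(linear_model), method="average")  # dendrogram input
```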