Structural Extensions of Basis Pursuit: Guarantees on Adversarial
Robustness
- URL: http://arxiv.org/abs/2205.08955v1
- Date: Thu, 5 May 2022 09:12:07 GMT
- Title: Structural Extensions of Basis Pursuit: Guarantees on Adversarial
Robustness
- Authors: Dávid Szeghy, Mahmoud Aslan, Áron Fóthi, Balázs Mészáros,
Zoltán Ádám Milacski, András Lőrincz
- Abstract summary: We prove that the stability of BP holds upon the following generalizations.
We introduce classification based on the $\ell_2$ norms of the groups and show numerically that it can be accurate and offers considerable speedups.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep neural networks are sensitive to adversarial noise, sparse coding
using the Basis Pursuit (BP) method is robust against such attacks, including
its multi-layer extensions. We prove that the stability theorem of BP holds
upon the following generalizations: (i) the regularization procedure can be
separated into disjoint groups with different weights, (ii) neurons or full
layers may form groups, and (iii) the regularizer takes various generalized
forms of the $\ell_1$ norm. This result provides the proof for the
architectural generalizations of Cazenavette et al. (2021), including (iv) an
approximation of the complete architecture as a shallow sparse coding network.
Due to this approximation, we restricted our experiments to shallow networks
and studied their robustness against the Iterative Fast Gradient Sign Method on
a synthetic dataset and MNIST. We introduce classification based on the
$\ell_2$ norms of the groups and show numerically that it can be accurate and
offers considerable speedups. In this family, the linear transformer shows the best
performance. Based on the theoretical results and the numerical simulations, we
highlight numerical matters that may improve performance further.
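A minimal sketch of the classification idea from the abstract, not the authors' exact pipeline: Basis Pursuit is approximated here by its Lagrangian (LASSO) form solved with ISTA, and an input is assigned to the class whose group of dictionary atoms carries the largest $\ell_2$ norm in the sparse code. The dictionary, group layout, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(D, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||Dx - y||^2 + lam*||x||_1 via ISTA (a BP surrogate)."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)               # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def group_norm_classify(x, groups):
    """Predict the class whose group of coefficients has the largest l2 norm."""
    return int(np.argmax([np.linalg.norm(x[g]) for g in groups]))

# Toy setup (assumed): 2 classes, 4 atoms per class, 8-dimensional signals.
D = rng.standard_normal((8, 8))
D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
groups = [slice(0, 4), slice(4, 8)]

# A signal built from group-1 atoms should be assigned to class 1.
y = D[:, 5] + 0.5 * D[:, 6]
x = ista(D, y)
print(group_norm_classify(x, groups))
```

Because the decision only needs group-wise norms of the code, this rule avoids a separate classifier head, which is the source of the speedups mentioned above.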
Related papers
- Learning Identifiable Structures Helps Avoid Bias in DNN-based Supervised Causal Learning [56.22841701016295]
Supervised Causal Learning (SCL) is an emerging paradigm in this field.
Existing Deep Neural Network (DNN)-based methods commonly adopt the "Node-Edge approach"
arXiv Detail & Related papers (2025-02-15T19:10:35Z) - Low coordinate degree algorithms II: Categorical signals and generalized stochastic block models [2.4889993472438383]
We study when low coordinate degree functions can test for the presence of categorical structure in high-dimensional data.
This complements the first paper of this series, which studied the power of LCDF in testing for continuous structure.
arXiv Detail & Related papers (2024-12-30T18:34:36Z) - On the Power of Adaptive Weighted Aggregation in Heterogeneous Federated Learning and Beyond [37.894835756324454]
Federated averaging (FedAvg) is the most fundamental algorithm in Federated learning (FL)
Recent empirical results show that FedAvg can perform well in many real-world heterogeneous tasks.
We present a simple and effective FedAvg variant termed FedAWARE.
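For context, plain FedAvg (not the FedAWARE variant the paper proposes) aggregates client models by a sample-size-weighted mean; a minimal sketch with assumed toy parameters:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate client parameter vectors, weighting each by its sample count."""
    sizes = np.asarray(client_sizes, dtype=float)
    W = np.stack(client_weights)            # shape (n_clients, n_params)
    return (sizes[:, None] * W).sum(axis=0) / sizes.sum()

# Two clients with 30 and 10 local samples respectively.
w = fedavg([np.array([1.0, 0.0]), np.array([0.0, 1.0])], [30, 10])
print(w)                                    # → [0.75 0.25]
```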
arXiv Detail & Related papers (2023-10-04T10:15:57Z) - An Intermediate-level Attack Framework on The Basis of Linear Regression [89.85593878754571]
This paper substantially extends our work published at ECCV, in which an intermediate-level attack was proposed to improve the transferability of some baseline adversarial examples.
We advocate to establish a direct linear mapping from the intermediate-level discrepancies (between adversarial features and benign features) to classification prediction loss of the adversarial example.
We show that 1) a variety of linear regression models can all be considered in order to establish the mapping, 2) the magnitude of the finally obtained intermediate-level discrepancy is linearly correlated with adversarial transferability, and 3) further boost of the performance can be achieved by performing multiple runs of the baseline attack with ...
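The linear mapping described above can be sketched with synthetic data (the features and losses here are random stand-ins, not outputs of a real attack): ordinary least squares maps intermediate-level feature discrepancies to the adversarial prediction loss.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 100 adversarial examples, 16-dim feature discrepancies.
disc = rng.standard_normal((100, 16))        # intermediate-level discrepancies
loss = disc @ rng.standard_normal(16) + 0.1 * rng.standard_normal(100)

# Least-squares fit: w maps a discrepancy vector to a predicted loss.
w, *_ = np.linalg.lstsq(disc, loss, rcond=None)
pred = disc @ w
print(round(float(np.corrcoef(pred, loss)[0, 1]), 2))  # near-perfect correlation
```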
arXiv Detail & Related papers (2022-03-21T03:54:53Z) - The Sample Complexity of One-Hidden-Layer Neural Networks [57.6421258363243]
We study a class of scalar-valued one-hidden-layer networks, and inputs bounded in Euclidean norm.
We prove that controlling the spectral norm of the hidden layer weight matrix is insufficient to get uniform convergence guarantees.
We analyze two important settings where a mere spectral norm control turns out to be sufficient.
arXiv Detail & Related papers (2022-02-13T07:12:02Z) - Controlling the Complexity and Lipschitz Constant improves polynomial
nets [55.121200972539114]
We derive new complexity bounds for the set of Coupled CP-Decomposition (CCP) and Nested Coupled CP-decomposition (NCP) models of Polynomial Nets.
We propose a principled regularization scheme that we evaluate experimentally in six datasets and show that it improves the accuracy as well as the robustness of the models to adversarial perturbations.
arXiv Detail & Related papers (2022-02-10T14:54:29Z) - Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method optimizes a rigorous lower bound on the evidence with lower-variance gradient estimates.
arXiv Detail & Related papers (2021-01-15T11:39:09Z) - DessiLBI: Exploring Structural Sparsity of Deep Networks via
Differential Inclusion Paths [45.947140164621096]
We propose a new approach based on differential inclusions of inverse scale spaces.
We show that DessiLBI unveils "winning tickets" in early epochs.
arXiv Detail & Related papers (2020-07-04T04:40:16Z) - Understanding Generalization in Deep Learning via Tensor Methods [53.808840694241]
We advance the understanding of the relations between the network's architecture and its generalizability from the compression perspective.
We propose a series of intuitive, data-dependent and easily-measurable properties that tightly characterize the compressibility and generalizability of neural networks.
arXiv Detail & Related papers (2020-01-14T22:26:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.