Robust Adversarial Defense by Tensor Factorization
- URL: http://arxiv.org/abs/2309.01077v1
- Date: Sun, 3 Sep 2023 04:51:44 GMT
- Title: Robust Adversarial Defense by Tensor Factorization
- Authors: Manish Bhattarai, Mehmet Cagri Kaymak, Ryan Barron, Ben Nebgen, Kim
Rasmussen, Boian Alexandrov
- Abstract summary: This study integrates the tensorization of input data with low-rank decomposition and tensorization of NN parameters to enhance adversarial defense.
The proposed approach demonstrates significant defense capabilities, maintaining robust accuracy even when subjected to the strongest known auto-attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning techniques become increasingly prevalent in data
analysis, the threat of adversarial attacks has surged, necessitating robust
defense mechanisms. Among these defenses, methods exploiting low-rank
approximations for input data preprocessing and neural network (NN) parameter
factorization have shown potential. Our work advances this field further by
integrating the tensorization of input data with low-rank decomposition and
tensorization of NN parameters to enhance adversarial defense. The proposed
approach demonstrates significant defense capabilities, maintaining robust
accuracy even when subjected to the strongest known auto-attacks. Evaluations
against leading-edge robust performance benchmarks reveal that our results not
only hold their ground against the best defensive methods available but also
exceed all current defense strategies that rely on tensor factorizations. This
study underscores the potential of integrating tensorization and low-rank
decomposition as a robust defense against adversarial attacks in machine
learning.
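As a rough matrix-level illustration of the preprocessing idea (a simplified analogue only; the paper's actual method tensorizes both the input data and the NN parameters), projecting an input onto a low-rank approximation via truncated SVD discards the small singular components where adversarial perturbations tend to concentrate:

```python
import numpy as np

def low_rank_denoise(image: np.ndarray, rank: int) -> np.ndarray:
    """Project a 2-D input onto its best rank-`rank` approximation.

    Truncated SVD keeps only the top singular components, suppressing
    the low-energy directions that adversarial noise typically occupies.
    This is a sketch, not the paper's tensor factorization pipeline.
    """
    u, s, vt = np.linalg.svd(image, full_matrices=False)
    return u[:, :rank] @ np.diag(s[:rank]) @ vt[:rank, :]

# Toy check: a rank-2 signal with small additive noise is mostly recovered.
rng = np.random.default_rng(0)
signal = (np.outer(rng.normal(size=32), rng.normal(size=32))
          + np.outer(rng.normal(size=32), rng.normal(size=32)))
noisy = signal + 0.01 * rng.normal(size=(32, 32))
denoised = low_rank_denoise(noisy, rank=2)
```

The rank-2 projection retains the signal while dropping most of the noise energy spread across the remaining singular directions.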
Related papers
- Evaluating Adversarial Robustness: A Comparison Of FGSM, Carlini-Wagner Attacks, And The Role of Distillation as Defense Mechanism [0.0]
The study explores adversarial attacks specifically targeted at Deep Neural Networks (DNNs) utilized for image classification.
The research focuses on comprehending the ramifications of two prominent attack methodologies: the Fast Gradient Sign Method (FGSM) and the Carlini-Wagner (CW) approach.
The study evaluates the robustness of defensive distillation as a defense mechanism against FGSM and CW attacks.
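For reference, FGSM perturbs an input one step in the direction of the sign of the loss gradient, x_adv = x + eps * sign(dL/dx). A minimal sketch on a logistic-regression model (the weights and inputs below are hypothetical, not from any of the papers listed):

```python
import numpy as np

def fgsm_perturb(x: np.ndarray, w: np.ndarray, b: float,
                 y: float, eps: float) -> np.ndarray:
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dL/dx).

    For a logistic model p = sigmoid(w.x + b) with binary cross-entropy
    loss, the input gradient has the closed form dL/dx = (p - y) * w.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: the perturbation pushes the logit away from the true label y=1.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.1)
```

Because only the sign of the gradient is used, the attack is a single cheap step bounded by eps in the L-infinity norm, which is what makes FGSM a standard baseline.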
arXiv Detail & Related papers (2024-04-05T17:51:58Z) - Improving the Robustness of Object Detection and Classification AI models against Adversarial Patch Attacks [2.963101656293054]
We analyze attack techniques and propose a robust defense approach.
We successfully reduce model confidence by over 20% using adversarial patch attacks that exploit object shape, texture and position.
Our inpainting defense approach significantly enhances model resilience, achieving high accuracy and reliable localization despite the adversarial attacks.
arXiv Detail & Related papers (2024-03-04T13:32:48Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - Enhancing Adversarial Robustness via Score-Based Optimization [22.87882885963586]
Adversarial attacks have the potential to mislead deep neural network classifiers by introducing slight perturbations.
We introduce a novel adversarial defense scheme named ScoreOpt, which optimizes adversarial samples at test time.
Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness performance and inference speed.
arXiv Detail & Related papers (2023-07-10T03:59:42Z) - Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z) - Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial
Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z) - Searching for an Effective Defender: Benchmarking Defense against
Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.