Focused Adversarial Attacks
- URL: http://arxiv.org/abs/2205.09624v1
- Date: Thu, 19 May 2022 15:38:23 GMT
- Title: Focused Adversarial Attacks
- Authors: Thomas Cilloni and Charles Walter and Charles Fleming
- Abstract summary: Recent advances in machine learning show that neural models are vulnerable to minimally perturbed inputs, or adversarial examples.
We propose to use a very limited subset of a model's learned manifold to compute adversarial examples.
Our Focused Adversarial Attacks (FA) algorithm identifies a small subset of sensitive regions to perform gradient-based adversarial attacks.
- Score: 1.607104211283248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in machine learning show that neural models are vulnerable to
minimally perturbed inputs, or adversarial examples. Adversarial algorithms are
optimization problems that minimize the accuracy of ML models by perturbing
inputs, often using a model's loss function to craft such perturbations.
State-of-the-art object detection models are characterized by very large output
manifolds due to the number of possible locations and sizes of objects in an
image. This leads to sparse outputs, and optimization problems that use them
incur a lot of unnecessary computation.
We propose to use a very limited subset of a model's learned manifold to
compute adversarial examples. Our Focused Adversarial Attacks (FA)
algorithm identifies a small subset of sensitive regions to perform
gradient-based adversarial attacks. FA is significantly faster than other
gradient-based attacks when a model's manifold is sparsely activated. Also, its
perturbations are more efficient than other methods under the same perturbation
constraints. We evaluate FA on the COCO 2017 and Pascal VOC 2007 detection
datasets.
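The abstract's core idea, restricting a gradient-based attack to a small set of sensitive output regions instead of the full, sparse output manifold, can be sketched as follows. This is a minimal illustrative sketch assuming a torchvision Faster R-CNN detector; the top-k score selection heuristic, step size, budget, and PGD-style projection are assumptions of this sketch, not the paper's exact FA procedure.

```python
# Illustrative sketch: a gradient-based attack whose loss is "focused" on a
# small subset of high-confidence detections rather than the full output
# manifold. Selection heuristic and hyperparameters are assumptions.
import torch
import torchvision

def focused_attack(model, image, k=10, eps=8 / 255, alpha=2 / 255, steps=10):
    """Perturb `image` (a [C,H,W] float tensor in [0,1]) so that the scores of
    its k most confident detections are suppressed, under an L-inf budget."""
    model.eval()
    x_orig = image.detach()
    x_adv = x_orig.clone()

    for _ in range(steps):
        x_adv.requires_grad_(True)
        outputs = model([x_adv])[0]          # dict with 'boxes', 'labels', 'scores'
        scores = outputs["scores"]
        if scores.numel() == 0:              # nothing left to attack
            break
        # Focus the loss on a small set of sensitive regions:
        # here, the k highest-confidence detections.
        topk = scores.topk(min(k, scores.numel())).values
        loss = topk.sum()
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()                  # lower the focused scores
            x_adv = x_orig + (x_adv - x_orig).clamp(-eps, eps)   # L-inf projection
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

if __name__ == "__main__":
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    image = torch.rand(3, 480, 640)          # stand-in for a COCO / VOC image
    adv = focused_attack(model, image)
    print((adv - image).abs().max())         # stays within the eps budget
```

Focusing the loss on a handful of detections keeps the backward pass small compared with back-propagating through every candidate location and size, which is the efficiency argument the abstract makes.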
Related papers
- Feature Attenuation of Defective Representation Can Resolve Incomplete Masking on Anomaly Detection [1.0358639819750703]
In unsupervised anomaly detection (UAD) research, it is necessary to develop a computationally efficient and scalable solution.
We revisit the reconstruction-by-inpainting approach and rethink how to improve it by analyzing its strengths and weaknesses.
We propose Feature Attenuation of Defective Representation (FADeR), which employs only two layers to attenuate the feature information of anomaly reconstruction.
arXiv Detail & Related papers (2024-07-05T15:44:53Z)
- AnomalyLLM: Few-shot Anomaly Edge Detection for Dynamic Graphs using Large Language Models [19.36513465638031]
AnomalyLLM is an in-context learning framework that integrates the information of a few labeled samples to achieve few-shot anomaly detection.
Experiments on four datasets reveal that AnomalyLLM can not only significantly improve the performance of few-shot anomaly detection, but also achieve superior results on new anomalies without any update of model parameters.
arXiv Detail & Related papers (2024-05-13T10:37:50Z)
- Deep Networks as Denoising Algorithms: Sample-Efficient Learning of Diffusion Models in High-Dimensional Graphical Models [22.353510613540564]
We investigate the approximation efficiency of score functions by deep neural networks in generative modeling.
We observe score functions can often be well-approximated in graphical models through variational inference denoising algorithms.
We provide an efficient sample complexity bound for diffusion-based generative modeling when the score function is learned by deep neural networks.
arXiv Detail & Related papers (2023-09-20T15:51:10Z)
- Dynamic Tiling: A Model-Agnostic, Adaptive, Scalable, and Inference-Data-Centric Approach for Efficient and Accurate Small Object Detection [3.8332251841430423]
Dynamic Tiling is a model-agnostic, adaptive, and scalable approach for small object detection.
Our method effectively resolves fragmented objects, improves detection accuracy, and minimizes computational overhead.
Overall, Dynamic Tiling outperforms existing model-agnostic uniform cropping methods.
arXiv Detail & Related papers (2023-09-20T05:25:12Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP).
A MAP causes natural images to be misclassified with high probability after only a single gradient ascent update (a minimal sketch of such an update appears after this list).
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models [3.9962751777898955]
Deep learning algorithms have been recently targeted by attackers due to their vulnerability.
Non-continuous deep models are still not robust against adversarial attacks.
We propose a novel objective/loss function, which enforces the features to lie under a specified margin to facilitate their prediction.
arXiv Detail & Related papers (2020-12-08T20:51:43Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analysis on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
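As referenced in the Meta Adversarial Perturbations entry above, the one-step gradient ascent mechanism can be sketched minimally: a single shared perturbation is adapted with one gradient step on a batch of images and then applied to them. The classifier, step size, and L-inf budget below are assumptions of this sketch, not the paper's exact meta-training setup.

```python
# Illustrative sketch: adapt a shared (universal) perturbation with a single
# gradient ascent step on the classification loss, then apply it to the batch.
import torch
import torch.nn.functional as F
import torchvision

def one_step_update(model, delta, images, labels, alpha=2 / 255, eps=8 / 255):
    """Adapt the shared perturbation `delta` with one gradient ascent step and
    return the perturbed images, projected to an L-inf ball of radius eps."""
    delta = delta.clone().requires_grad_(True)
    loss = F.cross_entropy(model(images + delta), labels)
    grad, = torch.autograd.grad(loss, delta)
    with torch.no_grad():
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)  # ascend and project
        return (images + delta).clamp(0.0, 1.0)

if __name__ == "__main__":
    model = torchvision.models.resnet18(weights="DEFAULT").eval()
    images = torch.rand(4, 3, 224, 224)          # stand-in for natural images
    labels = torch.randint(0, 1000, (4,))
    delta = torch.zeros(1, 3, 224, 224)          # shared perturbation, broadcast over the batch
    adv = one_step_update(model, delta, images, labels)
    print((adv - images).abs().max())            # bounded by eps
```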
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.