Detection of Uncertainty in Exceedance of Threshold (DUET): An
Adversarial Patch Localizer
- URL: http://arxiv.org/abs/2303.10291v1
- Date: Sat, 18 Mar 2023 00:07:26 GMT
- Authors: Terence Jie Chua, Wenhan Yu, Jun Zhao
- Abstract summary: Development of defenses against physical world attacks such as adversarial patches is gaining traction within the research community.
We contribute to the field of adversarial patch detection by introducing an uncertainty-based adversarial patch localizer.
This algorithm provides a framework to ascertain confidence in the adversarial patch localization.
- Score: 8.513938423514636
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Development of defenses against physical world attacks such as adversarial
patches is gaining traction within the research community. We contribute to the
field of adversarial patch detection by introducing an uncertainty-based
adversarial patch localizer which localizes adversarial patch on an image,
permitting post-processing patch-avoidance or patch-reconstruction. We quantify
our prediction uncertainties via our Detection of Uncertainties in the
Exceedance of Threshold (DUET) algorithm. This algorithm provides a framework to ascertain confidence
in the adversarial patch localization, which is essential for safety-sensitive
applications such as self-driving cars and medical imaging. We conducted
experiments on localizing adversarial patches and found our proposed DUET model
outperforms baseline models. We then conduct further analyses on our choice of
model priors and the adoption of Bayesian Neural Networks in different layers
within our model architecture. We found that isometric Gaussian priors in
Bayesian Neural Networks are suitable for patch localization tasks, and that
Bayesian layers in the earlier neural network blocks facilitate top-end
localization performance, while Bayesian layers added in the later neural
network blocks contribute to better model generalization. We then
propose two different well-performing models to tackle different use cases.
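As a loose illustration (not the authors' implementation), the "exceedance of threshold" idea can be sketched with Monte Carlo sampling over an assumed Gaussian predictive distribution of per-pixel patch logits: a pixel is localized as part of a patch only when its patch probability exceeds a threshold in a sufficiently large fraction of samples. All names, shapes, and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_patch_localization(logit_mean, logit_std, threshold=0.5,
                          confidence=0.95, n_samples=100):
    """Hypothetical DUET-style check: flag a pixel as part of an
    adversarial patch only if, across Monte Carlo draws from an assumed
    Gaussian predictive distribution over logits, its patch probability
    exceeds `threshold` in at least `confidence` of the samples."""
    # Draw MC samples of per-pixel logits: shape (n_samples, H, W).
    samples = rng.normal(logit_mean, logit_std,
                         size=(n_samples,) + logit_mean.shape)
    probs = 1.0 / (1.0 + np.exp(-samples))          # sigmoid
    exceed_frac = (probs > threshold).mean(axis=0)  # fraction of samples exceeding
    mask = exceed_frac >= confidence                # confident exceedance only
    return mask, exceed_frac

# Toy 4x4 "image": one confidently patched pixel and one uncertain pixel.
mean = np.full((4, 4), -3.0)   # background: low patch logit
std = np.full((4, 4), 0.5)
mean[1, 1] = 4.0               # strong patch evidence, low uncertainty
mean[2, 2] = 1.0               # weak evidence ...
std[2, 2] = 3.0                # ... with high uncertainty -> not flagged
mask, frac = mc_patch_localization(mean, std)
```

Under this sketch, only the confident pixel at (1, 1) survives; the high-variance pixel at (2, 2) exceeds the threshold in some draws but not enough of them, which is the kind of abstention a safety-sensitive pipeline would want.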
Related papers
- Multi-Layer Confidence Scoring for Detection of Out-of-Distribution Samples, Adversarial Attacks, and In-Distribution Misclassifications [2.4219039094115034]
We introduce Multi-Layer Analysis for Confidence Scoring (MACS). We derive a score applicable to confidence estimation, detecting distributional shifts and adversarial attacks. We achieve performance that surpasses state-of-the-art approaches in our experiments with the VGG16 and ViTb16 models.
arXiv Detail & Related papers (2025-12-22T15:25:10Z) - Lightweight CNN Model Hashing with Higher-Order Statistics and Chaotic Mapping for Piracy Detection and Tamper Localization [9.859893936091813]
Perceptual hashing has emerged as an effective approach for identifying pirated models. We propose a lightweight CNN model hashing technique that integrates higher-order statistics (HOS) features with a chaotic mapping mechanism.
arXiv Detail & Related papers (2025-10-31T03:04:10Z) - Certified Neural Approximations of Nonlinear Dynamics [51.01318247729693]
In safety-critical contexts, the use of neural approximations requires formal bounds on their closeness to the underlying system. We propose a novel, adaptive, and parallelizable verification method based on certified first-order models.
arXiv Detail & Related papers (2025-05-21T13:22:20Z) - Variational Bayesian Bow tie Neural Networks with Shrinkage [0.276240219662896]
We build a relaxed version of the standard feed-forward rectified neural network.
We employ Polya-Gamma data augmentation tricks to render a conditionally linear and Gaussian model.
We derive a variational inference algorithm that avoids distributional assumptions and independence across layers.
arXiv Detail & Related papers (2024-11-17T17:36:30Z) - Compositional Curvature Bounds for Deep Neural Networks [7.373617024876726]
A key challenge that threatens the widespread use of neural networks in safety-critical applications is their vulnerability to adversarial attacks.
We study the second-order behavior of continuously differentiable deep neural networks, focusing on robustness against adversarial perturbations.
We introduce a novel algorithm to analytically compute provable upper bounds on the second derivative of neural networks.
arXiv Detail & Related papers (2024-06-07T17:50:15Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - Defensive Tensorization [113.96183766922393]
We propose defensive tensorization, an adversarial defence technique that leverages a latent high-order factorization of the network.
We empirically demonstrate the effectiveness of our approach on standard image classification benchmarks.
We validate the versatility of our approach across domains and low-precision architectures by considering an audio task and binary networks.
arXiv Detail & Related papers (2021-10-26T17:00:16Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Evaluating the Robustness of Bayesian Neural Networks Against Different
Types of Attacks [2.599882743586164]
We show that a Bayesian neural network achieves significantly higher robustness against adversarial attacks generated against a deterministic neural network model.
The posterior can act as the safety precursor of ongoing malicious activities.
This advises utilizing Bayesian layers when building decision-making pipelines within a safety-critical domain.
arXiv Detail & Related papers (2021-06-17T03:18:59Z) - Identifying Untrustworthy Predictions in Neural Networks by Geometric
Gradient Analysis [4.148327474831389]
We propose a geometric gradient analysis (GGA) to improve the identification of untrustworthy predictions without retraining of a given model.
We demonstrate that the proposed method outperforms prior approaches in detecting OOD data and adversarial attacks.
arXiv Detail & Related papers (2021-02-24T10:49:02Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Towards Trustworthy Predictions from Deep Neural Networks with Fast
Adversarial Calibration [2.8935588665357077]
We propose an efficient yet general modelling approach for obtaining well-calibrated, trustworthy probabilities for samples obtained after a domain shift.
We introduce a new training strategy combining an entropy-encouraging loss term with an adversarial calibration loss term and demonstrate that this results in well-calibrated and technically trustworthy predictions.
arXiv Detail & Related papers (2020-12-20T13:39:29Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize exposure of the classifier to the attributes-space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z) - PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and
Localization [64.39761523935613]
We present a new framework for Patch Distribution Modeling, PaDiM, to concurrently detect and localize anomalies in images.
PaDiM makes use of a pretrained convolutional neural network (CNN) for patch embedding.
It also exploits correlations between the different semantic levels of CNN to better localize anomalies.
arXiv Detail & Related papers (2020-11-17T17:29:18Z) - Ramifications of Approximate Posterior Inference for Bayesian Deep
Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.