Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial
Attacks
- URL: http://arxiv.org/abs/2106.06235v1
- Date: Fri, 11 Jun 2021 08:37:53 GMT
- Title: Knowledge Enhanced Machine Learning Pipeline against Diverse Adversarial
Attacks
- Authors: Nezihe Merve Gürel, Xiangyu Qi, Luka Rimanic, Ce Zhang, Bo Li
- Abstract summary: We propose a Knowledge Enhanced Machine Learning Pipeline (KEMLP) to integrate domain knowledge into a graphical model.
In particular, we develop KEMLP by integrating a diverse set of weak auxiliary models based on their logical relationships to the main DNN model.
We show that compared with adversarial training and other baselines, KEMLP achieves higher robustness against physical attacks, $\mathcal{L}_p$ bounded attacks, unforeseen attacks, and natural corruptions.
- Score: 10.913817907524454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the great successes achieved by deep neural networks (DNNs), recent
studies show that they are vulnerable to adversarial examples, which aim
to mislead DNNs by adding small adversarial perturbations. Several defenses
have been proposed against such attacks, but many of them have since been
adaptively attacked. In this work, we aim to enhance ML robustness from a
different perspective by leveraging domain knowledge: We propose a Knowledge
Enhanced Machine Learning Pipeline (KEMLP) to integrate domain knowledge (i.e.,
logic relationships among different predictions) into a probabilistic graphical
model via first-order logic rules. In particular, we develop KEMLP by
integrating a diverse set of weak auxiliary models based on their logical
relationships to the main DNN model that performs the target task.
Theoretically, we provide convergence results and prove that, under mild
conditions, the prediction of KEMLP is more robust than that of the main DNN
model. Empirically, we take road sign recognition as an example and leverage
the relationships between road signs and their shapes and contents as domain
knowledge. We show that compared with adversarial training and other baselines,
KEMLP achieves higher robustness against physical attacks, $\mathcal{L}_p$
bounded attacks, unforeseen attacks, and natural corruptions under both
whitebox and blackbox settings, while still maintaining high clean accuracy.
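The abstract describes the pipeline only at a high level. The toy sketch below illustrates the general idea of fusing a main model with weak auxiliary models through weighted logic factors on the road-sign task; the sign classes, rule weights, and function names are hypothetical illustrations rather than the authors' implementation, and the actual KEMLP learns its factor weights and performs inference in a probabilistic graphical model instead of using the fixed weights shown here.

```python
import numpy as np

# Toy label space for the road-sign example: the main DNN predicts the sign
# class, while weak auxiliary models report attributes (shape, content).
SIGNS = ["stop", "yield", "speed_limit"]                       # hypothetical classes
SHAPE_OF = {"stop": "octagon", "yield": "triangle", "speed_limit": "circle"}

def kemlp_predict(main_logits, shape_pred, content_text,
                  w_main=2.0, w_shape=1.0, w_content=1.0):
    """Fuse the main DNN with auxiliary models via weighted logic factors.

    Each factor adds its weight to the score of a candidate label y when the
    corresponding rule (e.g. "stop sign => octagonal shape") is consistent
    with the auxiliary model's output. Weights are fixed here for brevity.
    """
    scores = np.zeros(len(SIGNS))
    for i, y in enumerate(SIGNS):
        # Factor 1: the main model's own (soft) vote for label y.
        scores[i] += w_main * main_logits[i]
        # Factor 2: shape rule -- reward labels whose expected shape matches
        # the shape detector's output.
        if SHAPE_OF[y] == shape_pred:
            scores[i] += w_shape
        # Factor 3: content rules -- reward labels consistent with recognized text.
        if y == "stop" and content_text == "STOP":
            scores[i] += w_content
        if y == "speed_limit" and content_text.isdigit():
            scores[i] += w_content
    # Normalize the factor scores into a posterior over labels (softmax).
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return SIGNS[int(np.argmax(probs))], probs

# Example: an adversarial patch fools the main DNN into "yield", but the shape
# ("octagon") and content ("STOP") factors pull the fused prediction back.
label, probs = kemlp_predict(main_logits=np.array([0.1, 0.8, 0.1]),
                             shape_pred="octagon", content_text="STOP")
print(label, probs.round(3))
```

The intuition this sketch captures is the one stated in the abstract: an attacker who only fools the main model still has to defeat the (independently trained, harder-to-attack-simultaneously) auxiliary models whose outputs are logically tied to the target label.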
Related papers
- Improving the Robustness of Quantized Deep Neural Networks to White-Box
Attacks using Stochastic Quantization and Information-Theoretic Ensemble
Training [1.6098666134798774]
Most real-world applications that employ deep neural networks (DNNs) quantize them to low precision to reduce compute requirements.
We present a method to improve the robustness of quantized DNNs to white-box adversarial attacks.
arXiv Detail & Related papers (2023-11-30T17:15:58Z) - Exploring the Vulnerabilities of Machine Learning and Quantum Machine
Learning to Adversarial Attacks using a Malware Dataset: A Comparative
Analysis [0.0]
Machine learning (ML) and quantum machine learning (QML) have shown remarkable potential in tackling complex problems.
Their susceptibility to adversarial attacks raises concerns when deploying these systems in security-sensitive applications.
We present a comparative analysis of the vulnerability of ML and QNN models to adversarial attacks using a malware dataset.
arXiv Detail & Related papers (2023-05-31T06:31:42Z) - CARE: Certifiably Robust Learning with Reasoning via Variational
Inference [26.210129662748862]
We propose a certifiably robust learning with reasoning pipeline (CARE).
CARE achieves significantly higher certified robustness compared with state-of-the-art baselines.
We additionally conduct ablation studies to demonstrate the empirical robustness of CARE and the effectiveness of different knowledge integration schemes.
arXiv Detail & Related papers (2022-09-12T07:15:52Z) - Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, has proven to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z) - Exploring Architectural Ingredients of Adversarially Robust Deep Neural
Networks [98.21130211336964]
Deep neural networks (DNNs) are known to be vulnerable to adversarial attacks.
In this paper, we investigate the impact of network width and depth on the robustness of adversarially trained DNNs.
arXiv Detail & Related papers (2021-10-07T23:13:33Z) - KNN-BERT: Fine-Tuning Pre-Trained Models with KNN Classifier [61.063988689601416]
Pre-trained models are widely fine-tuned on downstream tasks with linear classifiers optimized by the cross-entropy loss.
These problems can be mitigated by learning representations that emphasize similarities within the same class and contrasts across classes when making predictions.
This paper introduces a K-Nearest Neighbors (KNN) classifier into pre-trained model fine-tuning; a minimal sketch of the idea appears after this list.
arXiv Detail & Related papers (2021-10-06T06:17:05Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
Under a minimal computational overhead, the dilated architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Learning and Certification under Instance-targeted Poisoning [49.55596073963654]
We study PAC learnability and certification under instance-targeted poisoning attacks.
We show that when the budget of the adversary scales sublinearly with the sample complexity, PAC learnability and certification are achievable.
We empirically study the robustness of K nearest neighbour, logistic regression, multi-layer perceptron, and convolutional neural network on real data sets.
arXiv Detail & Related papers (2021-05-18T17:48:15Z) - Robustness of Bayesian Neural Networks to Gradient-Based Attacks [9.966113038850946]
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
We show that vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution.
We demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks.
arXiv Detail & Related papers (2020-02-11T13:03:57Z)
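As referenced in the KNN-BERT entry above, the following minimal sketch illustrates classification by a k-nearest-neighbour vote over pre-trained representations instead of a learned linear head. The toy 4-dimensional embeddings stand in for pooled BERT features and are hypothetical; this is an illustration of the general KNN-over-embeddings idea, not the paper's implementation.

```python
import numpy as np

def knn_predict(query_emb, train_embs, train_labels, k=3):
    """Classify by majority vote among the k nearest training embeddings
    (cosine similarity), rather than with a linear classifier."""
    # Cosine similarity between the query and every training embedding.
    sims = train_embs @ query_emb / (
        np.linalg.norm(train_embs, axis=1) * np.linalg.norm(query_emb) + 1e-12)
    nearest = np.argsort(-sims)[:k]          # indices of the k most similar examples
    votes = np.bincount(train_labels[nearest])
    return int(np.argmax(votes))

# Hypothetical 4-dim "sentence embeddings" standing in for pooled BERT features.
train_embs = np.array([[1.0, 0.1, 0.0, 0.0],
                       [0.9, 0.2, 0.1, 0.0],
                       [0.0, 0.1, 1.0, 0.9],
                       [0.1, 0.0, 0.8, 1.0]])
train_labels = np.array([0, 0, 1, 1])
print(knn_predict(np.array([0.95, 0.15, 0.05, 0.0]), train_embs, train_labels))  # -> 0
```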
This list is automatically generated from the titles and abstracts of the papers on this site.