Improving Machine Learning Robustness via Adversarial Training
- URL: http://arxiv.org/abs/2309.12593v1
- Date: Fri, 22 Sep 2023 02:43:04 GMT
- Title: Improving Machine Learning Robustness via Adversarial Training
- Authors: Long Dang, Thushari Hapuarachchi, Kaiqi Xiong, Jing Lin
- Abstract summary: We investigate ML robustness using adversarial training in centralized and decentralized environments.
In the centralized environment, we achieve a test accuracy of 65.41% and 83.0% when classifying adversarial examples generated by the Fast Gradient Sign Method (FGSM) and DeepFool, respectively.
In the decentralized environment, we study Federated Learning (FL) robustness using adversarial training with independent and identically distributed (IID) and non-IID data.
- Score: 3.7942983866014073
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As Machine Learning (ML) is increasingly used in solving various tasks in
real-world applications, it is crucial to ensure, at design time, that ML algorithms are robust to worst-case noise, adversarial attacks, and highly unusual situations. Studying ML robustness will significantly
help in the design of ML algorithms. In this paper, we investigate ML
robustness using adversarial training in centralized and decentralized
environments, where ML training and testing are conducted on one or multiple computers. In the centralized environment, we achieve a test accuracy of 65.41% and 83.0% when classifying adversarial examples generated by the Fast Gradient Sign Method (FGSM) and DeepFool, respectively. Compared to existing studies, these results represent improvements of 18.41% for FGSM and 47% for DeepFool. In the decentralized environment, we study Federated Learning (FL) robustness using adversarial training with independent and identically distributed (IID) and non-IID data; CIFAR-10 is used in this research. In the IID data case, our experimental results demonstrate a robust accuracy comparable to that obtained in the centralized environment. Moreover, in the non-IID data case, the natural accuracy drops from 66.23% to 57.82%, and the robust accuracy decreases by 25% under C&W attacks and by 23.4% under Projected Gradient Descent (PGD) attacks, compared to the IID data case. We further propose an IID data-sharing approach, which increases the natural accuracy to 85.04% and the robust accuracy from 57% to 72% under C&W attacks and from 59% to 67% under PGD attacks.
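As a concrete illustration of the centralized setting, the sketch below shows one epoch of adversarial training against FGSM perturbations in PyTorch. It is a minimal, hypothetical example of the general technique; the model, epsilon, and data-loader names are assumptions and do not reproduce the paper's exact configuration.

```python
# Minimal FGSM adversarial-training sketch (PyTorch). The model, epsilon, and
# loader used here are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Craft an FGSM adversarial example: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=8 / 255, device="cpu"):
    """One epoch of adversarial training: fit the model on FGSM-perturbed inputs."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_example(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Robust accuracy can then be reported by evaluating the trained model on adversarial examples crafted with attacks such as FGSM, DeepFool, PGD, or C&W.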
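The decentralized experiments follow a FedAvg-style loop; the sketch below gives one plausible reading of the IID data-sharing idea, in which each client augments its non-IID shard with a small globally shared IID subset before local adversarial training. Function names, the shard/subset split, and hyperparameters are assumptions for illustration only.

```python
# Hypothetical FedAvg-style round with IID data sharing (PyTorch). Reuses
# adversarial_training_epoch() from the previous sketch; all names are assumed.
import copy
import torch
from torch.utils.data import ConcatDataset, DataLoader

def client_update(global_model, local_shard, shared_iid_subset, epochs=1, lr=0.01):
    """Local training on the client's non-IID shard plus the shared IID subset."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loader = DataLoader(ConcatDataset([local_shard, shared_iid_subset]),
                        batch_size=64, shuffle=True)
    for _ in range(epochs):
        adversarial_training_epoch(model, loader, optimizer)
    return model.state_dict()

def federated_average(client_state_dicts):
    """Unweighted FedAvg: average each parameter tensor across clients."""
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = torch.stack(
            [sd[key].float() for sd in client_state_dicts]
        ).mean(dim=0).to(avg[key].dtype)
    return avg
```

A server would broadcast the averaged weights back to the clients and repeat this round for several iterations.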
Related papers
- Uncertainty Aware Human-machine Collaboration in Camouflaged Object Detection [12.2304109417748]
A key step toward developing trustworthy COD systems is the estimation and effective utilization of uncertainty.
In this work, we propose a human-machine collaboration framework for classifying the presence of camouflaged objects.
Our approach introduces a multiview backbone to estimate uncertainty in CV model predictions, utilizes this uncertainty during training to improve efficiency, and defers low-confidence cases to human evaluation.
arXiv Detail & Related papers (2025-02-12T13:05:24Z)
- INTACT: Inducing Noise Tolerance through Adversarial Curriculum Training for LiDAR-based Safety-Critical Perception and Autonomy [0.4124847249415279]
We present a novel framework designed to enhance the robustness of deep neural networks (DNNs) against noisy LiDAR data.
INTACT combines meta-learning with adversarial curriculum training (ACT) to address challenges posed by data corruption and sparsity in 3D point clouds.
INTACT's effectiveness is demonstrated through comprehensive evaluations on object detection, tracking, and classification benchmarks.
arXiv Detail & Related papers (2025-02-04T00:02:16Z)
- On the Robustness of Distributed Machine Learning against Transfer Attacks [1.0787328610467801]
No prior work has examined the combined robustness stemming from distributing both the learning and the inference process.
We show that properly distributed ML instantiations achieve across-the-board improvements in accuracy-robustness tradeoffs against state-of-the-art transfer-based attacks.
arXiv Detail & Related papers (2024-12-18T17:27:17Z)
- Uncertainty Aware Learning for Language Model Alignment [97.36361196793929]
We propose uncertainty-aware learning (UAL) to improve model alignment across different task scenarios.
We implement UAL in a simple fashion -- adaptively setting the label smoothing value of training according to the uncertainty of individual samples.
Experiments on widely used benchmarks demonstrate that our UAL significantly and consistently outperforms standard supervised fine-tuning.
arXiv Detail & Related papers (2024-06-07T11:37:45Z)
- Data-Free Hard-Label Robustness Stealing Attack [67.41281050467889]
We introduce a novel Data-Free Hard-Label Robustness Stealing (DFHL-RS) attack in this paper.
It enables the stealing of both model accuracy and robustness by simply querying hard labels of the target model.
Our method achieves a clean accuracy of 77.86% and a robust accuracy of 39.51% against AutoAttack.
arXiv Detail & Related papers (2023-12-10T16:14:02Z)
- Boosting Facial Expression Recognition by A Semi-Supervised Progressive Teacher [54.50747989860957]
We propose a semi-supervised learning algorithm named Progressive Teacher (PT) to utilize reliable FER datasets as well as large-scale unlabeled expression images for effective training.
Experiments on widely-used databases RAF-DB and FERPlus validate the effectiveness of our method, which achieves state-of-the-art performance with accuracy of 89.57% on RAF-DB.
arXiv Detail & Related papers (2022-05-28T07:47:53Z)
- Precision-Weighted Federated Learning [1.8160945635344528]
We propose a novel algorithm that takes into account the variance of the gradients when computing the weighted average of the parameters of models trained in a Federated Learning setting.
Our method was evaluated using standard image classification datasets with two different data partitioning strategies (IID/non-IID) to measure the performance and speed of our method in resource-constrained environments.
arXiv Detail & Related papers (2021-07-20T17:17:10Z)
- Towards Reducing Labeling Cost in Deep Object Detection [61.010693873330446]
We propose a unified framework for active learning, that considers both the uncertainty and the robustness of the detector.
Our method is able to pseudo-label the very confident predictions, suppressing a potential distribution drift.
arXiv Detail & Related papers (2021-06-22T16:53:09Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- To be Robust or to be Fair: Towards Fairness in Adversarial Training [83.42241071662897]
We find that adversarial training algorithms tend to introduce severe disparity of accuracy and robustness between different groups of data.
We propose a Fair-Robust-Learning (FRL) framework to mitigate this unfairness problem when doing adversarial defenses.
arXiv Detail & Related papers (2020-10-13T02:21:54Z)
- Stable Adversarial Learning under Distributional Shifts [46.98655899839784]
Machine learning algorithms with empirical risk minimization are vulnerable under distributional shifts.
We propose Stable Adversarial Learning (SAL) algorithm that leverages heterogeneous data sources to construct a more practical uncertainty set.
arXiv Detail & Related papers (2020-06-08T08:42:34Z)