On the Security Risks of AutoML
- URL: http://arxiv.org/abs/2110.06018v1
- Date: Tue, 12 Oct 2021 14:04:15 GMT
- Title: On the Security Risks of AutoML
- Authors: Ren Pang, Zhaohan Xi, Shouling Ji, Xiapu Luo, Ting Wang
- Abstract summary: Neural Architecture Search (NAS) is an emerging machine learning paradigm that automatically searches for models tailored to given tasks.
We show that compared with their manually designed counterparts, NAS-generated models tend to suffer greater vulnerability to various malicious attacks.
We discuss potential remedies to mitigate such drawbacks, including increasing cell depth and suppressing skip connects.
- Score: 38.03918108363182
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Architecture Search (NAS) represents an emerging machine learning (ML)
paradigm that automatically searches for models tailored to given tasks, which
greatly simplifies the development of ML systems and propels the trend of ML
democratization. Yet, little is known about the potential security risks
incurred by NAS, which is concerning given the increasing use of NAS-generated
models in critical domains.
This work represents a solid initial step towards bridging the gap. Through
an extensive empirical study of 10 popular NAS methods, we show that compared
with their manually designed counterparts, NAS-generated models tend to suffer
greater vulnerability to various malicious attacks (e.g., adversarial evasion,
model poisoning, and functionality stealing). Further, with both empirical and
analytical evidence, we provide possible explanations for such phenomena: given
the prohibitive search space and training cost, most NAS methods favor models
that converge fast at early training stages; this preference results in
architectural properties associated with attack vulnerability (e.g., high loss
smoothness and low gradient variance). Our findings not only reveal the
relationships between model characteristics and attack vulnerability but also
suggest the inherent connections underlying different attacks. Finally, we
discuss potential remedies to mitigate such drawbacks, including increasing
cell depth and suppressing skip connects, which lead to several promising
research directions.
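The mechanism described above (fast early convergence leading to high loss smoothness and low gradient variance) can be probed empirically. The snippet below is a minimal, hedged sketch of one such probe, estimating the variance of mini-batch gradients for a trained classifier; it is not the authors' measurement code, and `model`, `loader`, and the cross-entropy objective are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def gradient_variance(model, loader, n_batches=8, device="cpu"):
    """Estimate the mean per-parameter variance of mini-batch gradients.

    Lower values correspond to the 'low gradient variance' property the paper
    associates with NAS-generated models (illustrative proxy only).
    """
    model.to(device).train()
    grads = []  # one flattened gradient vector per mini-batch
    for i, (x, y) in enumerate(loader):
        if i >= n_batches:
            break
        model.zero_grad()
        loss = F.cross_entropy(model(x.to(device)), y.to(device))
        loss.backward()
        g = torch.cat([p.grad.detach().flatten()
                       for p in model.parameters() if p.grad is not None])
        grads.append(g)
    stacked = torch.stack(grads)             # shape: (n_batches, n_params)
    return stacked.var(dim=0).mean().item()  # average variance across batches
```

Comparing this statistic between a NAS-generated model and a manually designed model of similar accuracy is one way to check the trend the paper reports.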
Related papers
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarially attacking various downstream models fine-tuned from the Segment Anything Model (SAM).
To enhance the effectiveness of the adversarial attack against models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- CALoR: Towards Comprehensive Model Inversion Defense [43.2642796582236]
Model Inversion Attacks (MIAs) aim at recovering privacy-sensitive training data from the knowledge encoded in released machine learning models.
Recent advances in the MIA field have significantly enhanced the attack performance under multiple scenarios.
We propose a robust defense mechanism, integrating Confidence Adaptation and Low-Rank compression.
arXiv Detail & Related papers (2024-10-08T08:44:01Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
However, backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- Adversarial Evasion Attacks Practicality in Networks: Testing the Impact of Dynamic Learning [1.6574413179773757]
Adversarial attacks aim to trick ML models into producing faulty predictions (a minimal evasion sketch appears after this list).
Adversarial attacks can compromise ML-based network intrusion detection systems (NIDSs).
Our experiments indicate that continuous re-training, even without adversarial training, can reduce the effectiveness of adversarial attacks.
arXiv Detail & Related papers (2023-06-08T18:32:08Z)
- The Dark Side of AutoML: Towards Architectural Backdoor Search [49.16544351888333]
EVAS is a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers.
EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary's design spectrum.
This work raises concerns about the current practice of NAS and points to potential directions to develop effective countermeasures.
arXiv Detail & Related papers (2022-10-21T18:13:23Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on ML-Doctor, a modular and reusable software framework that enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models [3.9962751777898955]
Deep learning models have recently been targeted by attackers due to their vulnerability to adversarial perturbations.
Non-continuous deep models are still not robust against adversarial attacks.
We propose a novel objective/loss function that constrains features to lie within a specified margin to facilitate their prediction.
arXiv Detail & Related papers (2020-12-08T20:51:43Z)
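Adversarial evasion, which several of the related papers above share with the main work, is illustrated below with a generic FGSM-style sketch in PyTorch. This is a textbook single-step attack, not the procedure of any specific paper listed here; the [0, 1] input range is an image-domain assumption that would not hold for, e.g., NIDS feature vectors.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Return inputs perturbed by one signed-gradient step to raise the loss."""
    model.eval()
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each input in the direction that increases the loss, then clamp to
    # a valid range (the [0, 1] clamp assumes image-like inputs).
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```

Evaluating a NAS-generated model and a manually designed baseline under the same perturbation budget epsilon is the kind of comparison the main paper performs for evasion attacks.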
This list is automatically generated from the titles and abstracts of the papers on this site.