Adversarial Robustness in Unsupervised Machine Learning: A Systematic
Review
- URL: http://arxiv.org/abs/2306.00687v1
- Date: Thu, 1 Jun 2023 13:59:32 GMT
- Title: Adversarial Robustness in Unsupervised Machine Learning: A Systematic
Review
- Authors: Mathias Lundteigen Mohus and Jinyue Li
- Abstract summary: This paper conducts a systematic literature review on the robustness of unsupervised learning.
Based on the results, we formulate a model on the properties of an attack on unsupervised learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the adoption of machine learning models increases, ensuring robust models
against adversarial attacks is increasingly important. With unsupervised
machine learning gaining more attention, ensuring it is robust against attacks
is vital. This paper conducts a systematic literature review on the robustness
of unsupervised learning, collecting 86 papers. Our results show that most
research focuses on privacy attacks, which have effective defenses; however,
many attacks lack effective and general defensive measures. Based on the
results, we formulate a model on the properties of an attack on unsupervised
learning, contributing to future research by providing a model to use.
Related papers
- Designing an attack-defense game: how to increase robustness of
financial transaction models via a competition [69.08339915577206]
Given the escalating risks of malicious attacks in the finance sector, understanding adversarial strategies and robust defense mechanisms for machine learning models is critical.
We aim to investigate the current state and dynamics of adversarial attacks and defenses for neural network models that use sequential financial data as the input.
We have designed a competition that allows realistic and detailed investigation of problems in modern financial transaction data.
The participants compete directly against each other, so possible attacks and defenses are examined in close-to-real-life conditions.
arXiv Detail & Related papers (2023-08-22T12:53:09Z) - The Space of Adversarial Strategies [6.295859509997257]
Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade.
We propose a systematic approach to characterize worst-case (i.e., optimal) adversaries.
arXiv Detail & Related papers (2022-09-09T20:53:11Z) - Membership-Doctor: Comprehensive Assessment of Membership Inference
Against Machine Learning Models [11.842337448801066]
We present a large-scale measurement of different membership inference attacks and defenses.
We find that some assumptions of the threat model, such as same-architecture and same-distribution between shadow and target models, are unnecessary.
We are also the first to execute attacks on real-world data collected from the Internet, rather than laboratory datasets.
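As a concrete illustration of the shadow-model setting discussed above, here is a minimal membership inference sketch, assuming synthetic data and scikit-learn models; all names and choices here are ours for illustration and are not taken from the paper.

```python
# Minimal shadow-model membership inference sketch (illustrative only):
# train a shadow model on data we control, use its confidence on member
# vs. non-member examples to train an attack classifier, then apply that
# classifier to the target model's outputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Split data between the (simulated) target model and the shadow model.
X_tgt, X_shadow, y_tgt, y_shadow = train_test_split(X, y, test_size=0.5, random_state=0)
X_tgt_in, X_tgt_out, y_tgt_in, y_tgt_out = train_test_split(X_tgt, y_tgt, test_size=0.5, random_state=1)
X_sh_in, X_sh_out, y_sh_in, y_sh_out = train_test_split(X_shadow, y_shadow, test_size=0.5, random_state=2)

target = RandomForestClassifier(random_state=0).fit(X_tgt_in, y_tgt_in)
shadow = RandomForestClassifier(random_state=0).fit(X_sh_in, y_sh_in)

def attack_features(model, X):
    # Use the sorted class-probability vector as the attack feature.
    return np.sort(model.predict_proba(X), axis=1)

# Label shadow outputs: 1 = member (in shadow training set), 0 = non-member.
feats = np.vstack([attack_features(shadow, X_sh_in), attack_features(shadow, X_sh_out)])
labels = np.concatenate([np.ones(len(X_sh_in)), np.zeros(len(X_sh_out))])
attack = LogisticRegression().fit(feats, labels)

# Evaluate the attack classifier against the target model's outputs.
tgt_feats = np.vstack([attack_features(target, X_tgt_in), attack_features(target, X_tgt_out)])
tgt_labels = np.concatenate([np.ones(len(X_tgt_in)), np.zeros(len(X_tgt_out))])
print("membership inference accuracy:", attack.score(tgt_feats, tgt_labels))
```

The sorted class-probability vector is a common choice of attack feature, since membership inference exploits the tendency of models to be more confident on examples they were trained on.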
arXiv Detail & Related papers (2022-08-22T17:00:53Z) - "Why do so?" -- A Practical Perspective on Machine Learning Security [21.538956161215555]
We analyze attack occurrence and concern in a survey of 139 industrial practitioners.
Our results shed light on real-world attacks on deployed machine learning.
Our work paves the way for more research about adversarial machine learning in practice.
arXiv Detail & Related papers (2022-07-11T19:58:56Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
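As a rough sketch of that pipeline (perturb inputs, then train a detector on the crafted data), the following PyTorch snippet uses a simple gradient-sign perturbation; the stand-in models, feature sizes, and epsilon are assumptions for illustration, not the authors' setup.

```python
# Illustrative gradient-sign perturbation of input features, plus a toy
# detector trained on the crafted data (assumed setup, not the authors' code).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # untrained stand-in scorer
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.05):
    """Add a small gradient-sign perturbation to the input features."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Craft adversarial examples, then train a binary detector (clean vs. perturbed).
x_clean = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))
x_adv = fgsm_perturb(x_clean, y)

detector = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
inputs = torch.cat([x_clean, x_adv])
labels = torch.cat([torch.zeros(256, dtype=torch.long), torch.ones(256, dtype=torch.long)])
for _ in range(50):                      # toy training loop
    opt.zero_grad()
    loss_fn(detector(inputs), labels).backward()
    opt.step()
```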
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Adversarial defense for automatic speaker verification by cascaded
self-supervised learning models [101.42920161993455]
More and more malicious attackers are attempting to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, reusable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - Learning to Attack: Towards Textual Adversarial Attacking in Real-world
Situations [81.82518920087175]
Adversarial attacks aim to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z) - Improving Robustness to Model Inversion Attacks via Mutual Information
Regularization [12.079281416410227]
This paper studies defense mechanisms against model inversion (MI) attacks.
MI is a type of privacy attack aimed at inferring information about the training data distribution given access to a target machine learning model.
We propose the Mutual Information Regularization based Defense (MID) against MI attacks.
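At a high level, the idea is to augment the usual training loss with a penalty on the dependence between the model's input and its output; a hedged sketch of such a regularized objective (notation ours, with \lambda the regularization weight and I(\cdot;\cdot) mutual information):

    \min_f \; \mathbb{E}_{(x,y)}\big[\ell(f(x), y)\big] + \lambda \, I\big(X; f(X)\big)

Since the mutual information term is generally intractable to compute exactly, defenses of this kind rely on tractable approximations or bounds in practice.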
arXiv Detail & Related papers (2020-09-11T06:02:44Z) - Adversarial Robustness for Machine Learning Cyber Defenses Using Log
Data [0.0]
We develop a testing framework to evaluate adversarial robustness of machine learning cyber defenses.
We validate our framework using a publicly available dataset and demonstrate that our adversarial attack does succeed against the target systems.
We apply our framework to analyze the influence of different levels of dropout regularization and find that higher dropout levels increase robustness.
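That finding can be illustrated with a toy sweep over dropout rates; the sketch below assumes PyTorch, random stand-in data, and a gradient-sign perturbation as the attack, none of which reflects the paper's actual log-data setup or threat model.

```python
# Illustrative sweep over dropout rates, measuring accuracy under a simple
# gradient-sign perturbation (assumed setup, not the paper's framework).
import torch
import torch.nn as nn

def make_model(p_drop):
    return nn.Sequential(
        nn.Linear(64, 128), nn.ReLU(), nn.Dropout(p=p_drop),
        nn.Linear(128, 2),
    )

loss_fn = nn.CrossEntropyLoss()

def adversarial_accuracy(model, x, y, epsilon=0.1):
    """Accuracy after a small gradient-sign perturbation of the inputs."""
    model.eval()  # dropout disabled at evaluation time
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()
    return (model(x_adv).argmax(dim=1) == y).float().mean().item()

x = torch.randn(512, 64)
y = torch.randint(0, 2, (512,))
for p in (0.0, 0.25, 0.5):
    model = make_model(p)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()
    for _ in range(100):                 # toy training loop
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    print(f"dropout={p:.2f}  adversarial accuracy={adversarial_accuracy(model, x, y):.3f}")
```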
arXiv Detail & Related papers (2020-07-29T17:51:29Z) - Defense for Black-box Attacks on Anti-spoofing Models by Self-Supervised
Learning [71.17774313301753]
We explore the robustness of self-supervised learned high-level representations by using them in the defense against adversarial attacks.
Experimental results on the ASVspoof 2019 dataset demonstrate that high-level representations extracted by Mockingjay can prevent the transferability of adversarial examples.
arXiv Detail & Related papers (2020-06-05T03:03:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.