Balancing detectability and performance of attacks on the control
channel of Markov Decision Processes
- URL: http://arxiv.org/abs/2109.07171v1
- Date: Wed, 15 Sep 2021 09:13:10 GMT
- Title: Balancing detectability and performance of attacks on the control
channel of Markov Decision Processes
- Authors: Alessio Russo, Alexandre Proutiere
- Abstract summary: We investigate the problem of designing optimal stealthy poisoning attacks on the control channel of Markov decision processes (MDPs).
This research is motivated by the recent interest of the research community in adversarial and poisoning attacks applied to MDPs and reinforcement learning (RL) methods.
- Score: 77.66954176188426
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the problem of designing optimal stealthy poisoning attacks on
the control channel of Markov decision processes (MDPs). This research is
motivated by the recent interest of the research community in adversarial and
poisoning attacks applied to MDPs and reinforcement learning (RL) methods. The
policies resulting from these methods have been shown to be vulnerable to
attacks perturbing the observations of the decision-maker. In such an attack,
drawing inspiration from adversarial examples used in supervised learning, the
amplitude of the adversarial perturbation is limited according to some norm,
with the hope that this constraint will make the attack imperceptible. However,
such constraints do not grant any level of undetectability and do not take into
account the dynamic nature of the underlying Markov process. In this paper, we
propose a new attack formulation, based on information-theoretical quantities,
that considers the objective of minimizing the detectability of the attack as
well as the performance of the controlled process. We analyze the trade-off
between the efficiency of the attack and its detectability. We conclude with
examples and numerical simulations illustrating this trade-off.
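The trade-off at the heart of the abstract, penalizing an attack's detectability with an information-theoretic divergence while degrading control performance, can be pictured with a toy single-state example. The sketch below is our own illustration, not the paper's formulation; the reward vector, nominal policy, trade-off weight lambda, and random search over the simplex are all assumptions.

```python
import numpy as np

# Toy illustration of the detectability/performance trade-off (not the paper's
# exact formulation): the attacker picks a perturbed action distribution at a
# single state, trading off the victim's expected reward against detectability,
# measured here as the KL divergence from the nominal distribution.

rng = np.random.default_rng(0)

nominal = np.array([0.55, 0.25, 0.15, 0.05])  # victim's nominal action distribution
rewards = np.array([1.0, 0.6, 0.2, -0.5])     # expected reward per action

def kl_rows(P, q):
    # KL(P_i || q) for each row P_i of P.
    return np.sum(P * np.log(P / q), axis=1)

def best_attack(lam, n_samples=20000):
    # Crude random search over the simplex; the paper instead characterizes
    # optimal attacks analytically via information-theoretic quantities.
    P = rng.dirichlet(np.ones(4), size=n_samples)
    scores = P @ rewards + lam * kl_rows(P, nominal)  # attacker minimizes this
    return P[np.argmin(scores)]

for lam in [0.0, 0.5, 2.0, 10.0]:
    p = best_attack(lam)
    print(f"lambda={lam:5.1f}  victim reward={p @ rewards:+.3f}  "
          f"KL from nominal={kl_rows(p[None], nominal)[0]:.3f}")
```

As lambda grows, the optimal attack stays close to the nominal distribution (low KL, hard to detect) at the price of less damage to the victim's reward, which is the trade-off the paper analyzes.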
Related papers
- Indiscriminate Disruption of Conditional Inference on Multivariate Gaussians [60.22542847840578]
Despite advances in adversarial machine learning, inference for Gaussian models in the presence of an adversary is notably understudied.
We consider a self-interested attacker who wishes to disrupt a decision-maker's conditional inference and subsequent actions by corrupting a set of evidentiary variables.
To avoid detection, the attacker also desires the attack to appear plausible, where plausibility is determined by the density of the corrupted evidence.
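A minimal sketch of this corruption-versus-plausibility tension, with assumed toy numbers: a 2-D Gaussian over (evidence, target) where the decision-maker infers the target from the evidence, and the attacker scores plausibility by the log-density of the corrupted evidence under its marginal.

```python
import numpy as np

# Hypothetical 2-D Gaussian over x = (evidence e, target t). The decision-maker
# infers t | e; the attacker corrupts e but wants the corruption to remain
# plausible, i.e. to keep a high density under the evidence marginal.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8],
                  [0.8, 1.0]])

def inferred_target(e):
    # Gaussian conditioning: E[t | e] = mu_t + S_te / S_ee * (e - mu_e).
    return mu[1] + Sigma[1, 0] / Sigma[0, 0] * (e - mu[0])

def log_plausibility(e):
    # log N(e; mu_e, S_ee): the attacker's plausibility score for the evidence.
    var = Sigma[0, 0]
    return -0.5 * (np.log(2 * np.pi * var) + (e - mu[0]) ** 2 / var)

e_true = 0.3
for e_adv in [e_true, 1.0, 3.0]:
    print(f"evidence={e_adv:4.1f}  inferred target={inferred_target(e_adv):+.2f}  "
          f"log-plausibility={log_plausibility(e_adv):+.2f}")
```

Larger shifts to the evidence move the inferred target further but cost plausibility, mirroring the attacker's dilemma described above.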
arXiv Detail & Related papers (2024-11-21T17:46:55Z)
- Detecting Adversarial Attacks in Semantic Segmentation via Uncertainty Estimation: A Deep Analysis [12.133306321357999]
We propose an uncertainty-based method for detecting adversarial attacks on neural networks for semantic segmentation.
We conduct a detailed analysis of uncertainty-based detection of adversarial attacks and various state-of-the-art neural networks.
Our numerical experiments show the effectiveness of the proposed uncertainty-based detection method.
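A minimal sketch of the detection idea, with random stand-in softmax outputs and an illustrative threshold in place of a real segmentation network: flag an input when the mean per-pixel predictive entropy is unusually high.

```python
import numpy as np

def mean_pixel_entropy(probs):
    # probs: (num_classes, H, W) softmax output; entropy averaged over pixels.
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=0)
    return float(entropy.mean())

rng = np.random.default_rng(1)

# Peaked (clean-looking) predictions vs. diffuse (attacked-looking) ones.
clean = rng.dirichlet(np.full(5, 0.1), size=(8, 8)).transpose(2, 0, 1)
attacked = rng.dirichlet(np.full(5, 5.0), size=(8, 8)).transpose(2, 0, 1)

THRESHOLD = 1.0  # would be calibrated on held-out clean data
for name, probs in [("clean", clean), ("attacked", attacked)]:
    h = mean_pixel_entropy(probs)
    print(f"{name:8s} mean entropy={h:.3f}  flagged={h > THRESHOLD}")
```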
arXiv Detail & Related papers (2024-08-19T14:13:30Z)
- Simple Perturbations Subvert Ethereum Phishing Transactions Detection: An Empirical Analysis [12.607077453567594]
We investigate the impact of various adversarial attack strategies on model performance metrics, such as accuracy, precision, recall, and F1-score.
We examine the effectiveness of different mitigation strategies, including adversarial training and enhanced feature selection, in enhancing model robustness.
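A sketch of this style of robustness evaluation, with synthetic scores standing in for a real phishing classifier: shift the positive-class scores by an attack budget and recompute the four metrics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 1-D "suspiciousness" score; phishing transactions score higher.
benign = rng.normal(0.0, 1.0, 500)
phishing = rng.normal(2.0, 1.0, 500)

def metrics(pos_scores, neg_scores, threshold=1.0):
    tp = np.sum(pos_scores > threshold)
    fn = np.sum(pos_scores <= threshold)
    fp = np.sum(neg_scores > threshold)
    tn = np.sum(neg_scores <= threshold)
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Simple evasion: the attacker lowers phishing scores by a budget eps.
for eps in [0.0, 0.5, 1.5]:
    acc, prec, rec, f1 = metrics(phishing - eps, benign)
    print(f"eps={eps:.1f}  acc={acc:.2f}  precision={prec:.2f}  "
          f"recall={rec:.2f}  f1={f1:.2f}")
```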
arXiv Detail & Related papers (2024-08-06T20:40:20Z)
- Adversarial Purification for Data-Driven Power System Event Classifiers with Diffusion Models [0.8848340429852071]
Global deployment of phasor measurement units (PMUs) enables real-time monitoring of the power system.
Recent studies reveal that machine learning-based methods are vulnerable to adversarial attacks.
This paper proposes an effective adversarial purification method based on the diffusion model to counter adversarial attacks.
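A conceptual sketch of diffusion-style purification with a fake denoiser (a trained score network would replace the shrink-toward-clean step, which here cheats by knowing the clean signal): add enough Gaussian noise to wash out the perturbation, then denoise.

```python
import numpy as np

rng = np.random.default_rng(3)

clean = np.sin(np.linspace(0, 2 * np.pi, 64))           # stand-in PMU waveform
attacked = clean + 0.3 * np.sign(rng.normal(size=64))   # crude adversarial bump

def purify(x, noise_sigma=0.5, steps=10):
    # Forward process: overwhelm the adversarial perturbation with noise.
    x = x + noise_sigma * rng.normal(size=x.shape)
    # Reverse process: iteratively denoise; a real diffusion model would use a
    # learned score network here instead of shrinking toward `clean`.
    for _ in range(steps):
        x = x + 0.2 * (clean - x)
    return x

print("error before purification:", float(np.abs(attacked - clean).mean()))
print("error after  purification:", float(np.abs(purify(attacked) - clean).mean()))
```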
arXiv Detail & Related papers (2023-11-13T06:52:56Z)
- Confidence-driven Sampling for Backdoor Attacks [49.72680157684523]
Backdoor attacks aim to surreptitiously insert malicious triggers into DNN models, granting unauthorized control during testing scenarios.
Existing methods lack robustness against defense strategies and predominantly focus on enhancing trigger stealthiness while randomly selecting poisoned samples.
We introduce a straightforward yet highly effective sampling methodology that leverages confidence scores. Specifically, it selects samples with lower confidence scores, significantly increasing the challenge for defenders in identifying and countering these attacks.
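The selection rule itself is simple; a sketch with synthetic per-sample confidences (names and poisoning budget illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

n_train = 1000
confidences = rng.beta(5, 2, size=n_train)  # stand-in model confidence scores
poison_budget = 50

# Confidence-driven sampling: poison the lowest-confidence examples rather
# than a random subset, making the poisoned set blend in with hard examples.
poison_idx = np.argsort(confidences)[:poison_budget]
random_idx = rng.choice(n_train, size=poison_budget, replace=False)

print("mean confidence, confidence-driven:", confidences[poison_idx].mean())
print("mean confidence, random baseline:  ", confidences[random_idx].mean())
```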
arXiv Detail & Related papers (2023-10-08T18:57:36Z)
- Adversarial Attacks Against Uncertainty Quantification [10.655660123083607]
This work focuses on a different adversarial scenario in which the attacker is still interested in manipulating the uncertainty estimate.
In particular, the goal is to undermine the use of machine-learning models when their outputs are consumed by a downstream module or by a human operator.
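One way to picture such an attack, as a toy sketch (the linear model, step size, and greedy search are all assumptions): raise the predictive entropy of a softmax model without flipping its predicted label, which can trip a downstream abstain-when-uncertain rule.

```python
import numpy as np

W = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [-1.0, -1.0]])  # toy 3-class linear model

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    return float(-np.sum(p * np.log(p + 1e-12)))

rng = np.random.default_rng(8)
x = np.array([1.0, 0.2])
label = int(np.argmax(softmax(W @ x)))

# Greedy random search: accept small perturbations that raise entropy while
# preserving the predicted label (a crude stand-in for a gradient attack).
best = x.copy()
for _ in range(500):
    cand = best + 0.02 * rng.normal(size=2)
    q = softmax(W @ cand)
    if int(np.argmax(q)) == label and entropy(q) > entropy(softmax(W @ best)):
        best = cand

print(f"entropy before={entropy(softmax(W @ x)):.3f}  "
      f"after={entropy(softmax(W @ best)):.3f}  "
      f"label unchanged={int(np.argmax(softmax(W @ best))) == label}")
```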
arXiv Detail & Related papers (2023-09-19T12:54:09Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
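A minimal sketch of the timing channel with a deliberately naive NMS (box counts and thresholds illustrative): post-processing time grows with the number of surviving candidate boxes, so timing alone leaks information about the model's output.

```python
import time
import numpy as np

def slow_nms(boxes, scores, iou_thresh=0.5):
    # Naive O(n^2) NMS; runtime depends on how many boxes survive filtering.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        order = rest[inter / (area_i + areas - inter) < iou_thresh]
    return keep

rng = np.random.default_rng(5)
for n_boxes in [10, 1000]:
    xy = rng.uniform(0, 100, (n_boxes, 2))
    boxes = np.hstack([xy, xy + rng.uniform(1, 20, (n_boxes, 2))])
    scores = rng.uniform(size=n_boxes)
    t0 = time.perf_counter()
    slow_nms(boxes, scores)
    print(f"{n_boxes:5d} boxes -> NMS took {time.perf_counter() - t0:.4f}s")
```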
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under a norm-bounded adversarial perturbation of the input.
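A sketch of the smoothing step only, under an assumed toy policy (deriving the actual reward certificate is the paper's contribution and is not reproduced here): the smoothed policy acts by majority vote over Gaussian-noised copies of the observation.

```python
import numpy as np

rng = np.random.default_rng(6)

def base_policy(obs):
    # Hypothetical deterministic base policy with two actions.
    return int(obs.sum() > 0.0)

def smoothed_policy(obs, sigma=0.5, n_samples=1000):
    # Act on noised observations and take a majority vote over the draws.
    votes = [base_policy(obs + sigma * rng.normal(size=obs.shape))
             for _ in range(n_samples)]
    counts = np.bincount(votes, minlength=2)
    return int(np.argmax(counts)), counts / n_samples

obs = np.array([0.2, -0.1])
action, shares = smoothed_policy(obs)
print("smoothed action:", action, " vote shares:", shares)
# A large vote margin is the quantity a randomized-smoothing certificate
# converts into a provable bound against norm-bounded perturbations.
```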
arXiv Detail & Related papers (2021-06-21T21:42:08Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems have confirmed the superiority of the proposed algorithm.
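A loose conceptual sketch, not the authors' HMCAM algorithm: momentum-accumulating updates with periodic momentum refresh trace out a sequence of perturbations on a toy surrogate loss, rather than converging to a single attack point.

```python
import numpy as np

rng = np.random.default_rng(7)

w = np.array([1.0, -2.0, 0.5])   # toy linear surrogate model (assumed)
eps = 0.3                        # L-infinity perturbation budget

def grad(delta):
    # Gradient of the surrogate loss w @ delta that the attacker ascends.
    return w

delta = np.zeros(3)
momentum = np.zeros(3)
samples = []
for step in range(200):
    if step % 50 == 0:
        momentum = rng.normal(size=3)              # periodic momentum refresh
    momentum = 0.9 * momentum + 0.1 * grad(delta)  # accumulated momentum
    delta = np.clip(delta + 0.05 * momentum, -eps, eps)
    samples.append(delta.copy())

print("perturbations collected:", len(samples),
      " final surrogate loss:", float(w @ delta))
```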
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.