On Almost-Sure Intention Deception Planning that Exploits Imperfect Observers
- URL: http://arxiv.org/abs/2209.00573v1
- Date: Thu, 1 Sep 2022 16:38:03 GMT
- Title: On Almost-Sure Intention Deception Planning that Exploits Imperfect Observers
- Authors: Jie Fu
- Abstract summary: Intention deception involves computing a strategy which deceives the opponent into a wrong belief about the agent's intention or objective.
This paper studies a class of probabilistic planning problems with intention deception and investigates how a defender's limited sensing modality can be exploited.
- Score: 24.11353445650682
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Intention deception involves computing a strategy that deceives the opponent into a wrong belief about the agent's intention or objective. This paper studies a class of probabilistic planning problems with intention deception and investigates how a defender's limited sensing modality can be exploited by an attacker to achieve its attack objective almost surely (with probability one) while hiding its intention. In particular, we model the attack planning problem in a stochastic system described as a Markov decision process (MDP). The attacker aims to reach some target states while avoiding unsafe states in the system, and knows that its behavior is monitored by a defender with partial observations. Given the defender's partial state observations, we develop qualitative intention deception planning algorithms that construct attack strategies against an action-visible defender and an action-invisible defender, respectively. The synthesized attack strategy not only ensures that the attack objective is satisfied almost surely but also deceives the defender into believing that the observed behavior is generated by a normal/legitimate user, so that the defender fails to detect the presence of an attack. We show that the proposed algorithms are correct and complete, and illustrate the deceptive planning methods with examples.
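A note on the computation behind the "almost surely" guarantee: qualitative objectives of this kind are typically solved by the classical almost-sure reachability analysis of MDPs, which iteratively prunes states and actions until only those remain from which the targets are reached with probability one without ever entering an unsafe state. The sketch below is a minimal illustration of that standard computation, not the paper's implementation; the dictionary-based MDP encoding and the toy example are assumptions made for this sketch.

```python
from typing import Dict, List, Set, Tuple

# Assumed MDP encoding for this sketch:
# trans[state][action] = list of (next_state, probability) pairs.
Trans = Dict[str, Dict[str, List[Tuple[str, float]]]]

def almost_sure_reach(trans: Trans, targets: Set[str], unsafe: Set[str]) -> Set[str]:
    """States from which `targets` is reached with probability one
    while `unsafe` is never entered (classical qualitative MDP analysis)."""
    candidates = ({s for s in trans} | targets) - unsafe
    while True:
        def allowed(s):
            # Keep only actions whose entire support stays inside `candidates`.
            return [a for a, succ in trans.get(s, {}).items()
                    if all(t in candidates for t, _ in succ)]
        # Backward (positive-probability) reachability to the targets,
        # restricted to allowed actions.
        reach = targets & candidates
        changed = True
        while changed:
            changed = False
            for s in candidates - reach:
                if any(any(t in reach for t, _ in trans[s][a]) for a in allowed(s)):
                    reach.add(s)
                    changed = True
        if reach == candidates:
            return candidates  # Fixed point: the almost-sure winning region.
        candidates = reach     # Prune states that cannot safely reach the targets.

# Toy example (hypothetical): from s0, action "a" may loop or reach the goal,
# while action "b" surely enters the unsafe state and is therefore pruned.
mdp: Trans = {
    "s0": {"a": [("s0", 0.5), ("goal", 0.5)], "b": [("bad", 1.0)]},
    "goal": {},
    "bad": {},
}
print(almost_sure_reach(mdp, targets={"goal"}, unsafe={"bad"}))  # {'s0', 'goal'}
```

The outer loop terminates because each iteration removes at least one state. What the paper adds on top of this standard analysis is the deception constraint: the synthesized strategy must also remain consistent with a normal user's behavior under the defender's partial observations.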
Related papers
- A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving [23.08193005790747]
Existing attacks compromise the prediction model of a victim AV.
We propose a novel two-stage attack framework to realize the single-point attack.
Our attack causes a collision rate of up to 63% and various hazardous responses of the victim AV.
arXiv Detail & Related papers (2024-06-17T16:26:00Z)
- Planning for Attacker Entrapment in Adversarial Settings [16.085007590604327]
We propose a framework for generating a defense strategy against an attacker who operates in an environment where the defender can act without the attacker's knowledge.
Our formulation captures the problem as a much simpler infinite-horizon discounted MDP, in which the optimal policy gives the defender's strategy against the attacker's actions.
arXiv Detail & Related papers (2023-03-01T21:08:27Z)
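Since this entry reduces the defender's problem to an infinite-horizon discounted MDP, textbook value iteration is the natural way to recover the optimal policy. The following is a generic sketch under an assumed encoding, not the paper's code.

```python
from typing import Dict, List, Tuple

# Assumed encoding: model[state][action] = (reward, [(next_state, prob), ...]);
# every next_state must itself be a key of `model`.
Model = Dict[str, Dict[str, Tuple[float, List[Tuple[str, float]]]]]

def value_iteration(model: Model, gamma: float = 0.95, tol: float = 1e-8):
    """Textbook value iteration; returns optimal values and a greedy policy."""
    V = {s: 0.0 for s in model}
    while True:
        delta = 0.0
        for s, actions in model.items():
            if not actions:
                continue  # Terminal states keep their value.
            best = max(r + gamma * sum(p * V[t] for t, p in succ)
                       for r, succ in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break

    def q_value(s, a):
        r, succ = model[s][a]
        return r + gamma * sum(p * V[t] for t, p in succ)

    policy = {s: max(acts, key=lambda a: q_value(s, a))
              for s, acts in model.items() if acts}
    return V, policy
```

The discount factor gamma < 1 makes the Bellman operator a contraction, so the loop converges geometrically regardless of the initial values.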
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
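G-PGA is presented as a guided variant of projected gradient descent (PGD). The guidance-through-surrogate mechanism is not detailed in the summary above, so the sketch below shows only the plain L-infinity PGD baseline that such attacks build on (PyTorch; the epsilon and step-size values are conventional assumptions, not the paper's settings).

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Plain L-infinity PGD: ascend the loss, then project back into the
    eps-ball around the clean input and the valid pixel range."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0.0, 1.0)
    return x_adv
```

Plain PGD of this form is exactly what G-PGA claims to improve upon: without guidance it can need random restarts, many iterations, or a step-size search to escape poor local optima caused by gradient masking.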
- Object-fabrication Targeted Attack for Object Detection [54.10697546734503]
Adversarial attacks for object detection include targeted and untargeted attacks.
A new object-fabrication targeted attack mode can mislead detectors to fabricate extra false objects with specific target labels.
arXiv Detail & Related papers (2022-12-13T08:42:39Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
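As a concrete (and deliberately simplistic, hypothetical) illustration of a context-consistency check of the kind this attack evades: learn which object labels co-occur in training scenes and flag any detection result containing a label pair that was never seen together.

```python
from itertools import combinations

def build_cooccurrence(training_label_sets):
    """Record every label pair that appears together in some training scene."""
    seen = set()
    for labels in training_label_sets:
        seen.update(combinations(sorted(set(labels)), 2))
    return seen

def context_consistent(detected_labels, seen_pairs):
    """Accept a scene only if all its label pairs have been seen before."""
    return all(pair in seen_pairs
               for pair in combinations(sorted(set(detected_labels)), 2))

pairs = build_cooccurrence([["car", "stop sign"], ["car", "pedestrian"]])
print(context_consistent(["car", "stop sign"], pairs))  # True
print(context_consistent(["car", "toaster"], pairs))    # False: implausible pair
```

A zero-query attack must craft perturbations whose induced detections keep passing such a check, which is what makes context-aware detectors harder, though evidently not impossible, to fool.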
- Attack Prediction using Hidden Markov Model [2.2559617939136505]
We propose the use of a Hidden Markov Model (HMM) to predict the family of related attacks.
We have built an HMM-based prediction model and implemented our proposed approach using the Viterbi algorithm.
As a proof of concept and also to demonstrate the performance of the model, we have conducted a case study on predicting a family of attacks called Action Spoofing.
arXiv Detail & Related papers (2021-06-03T17:32:06Z)
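The entry above names the Viterbi algorithm, the standard dynamic program for recovering the most likely hidden-state sequence of an HMM. A compact generic implementation, with a hypothetical two-phase attack model as the demo (the state and observation names are illustrative, not the paper's), might look like this:

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for `obs` under the given HMM."""
    # V[t][s] = (probability of the best path ending in s at step t, predecessor)
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max((V[t - 1][p][0] * trans_p[p][s] * emit_p[s][obs[t]], p)
                             for p in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for t in range(len(obs) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

# Hypothetical two-state attack model: reconnaissance vs. exploitation phases.
states = ("recon", "exploit")
start = {"recon": 0.8, "exploit": 0.2}
trans = {"recon": {"recon": 0.6, "exploit": 0.4},
         "exploit": {"recon": 0.1, "exploit": 0.9}}
emit = {"recon": {"scan": 0.7, "payload": 0.3},
        "exploit": {"scan": 0.2, "payload": 0.8}}
print(viterbi(["scan", "scan", "payload"], states, start, trans, emit))
# -> ['recon', 'recon', 'exploit']
```

In an attack-prediction setting, the observations would be observed security events and the decoded states the stages of the attack family being executed.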
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences arising from its use.