Vision-based Perimeter Defense via Multiview Pose Estimation
- URL: http://arxiv.org/abs/2209.12136v1
- Date: Sun, 25 Sep 2022 03:41:45 GMT
- Title: Vision-based Perimeter Defense via Multiview Pose Estimation
- Authors: Elijah S. Lee, Giuseppe Loianno, Dinesh Jayaraman, Vijay Kumar
- Abstract summary: We study the perimeter defense game in a photo-realistic simulator and the real world.
We train a deep machine learning-based system for intruder pose detection with domain randomization.
We introduce new performance metrics to evaluate vision-based perimeter defense.
- Score: 23.62649649982264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous studies in the perimeter defense game have largely focused on the
fully observable setting where the true player states are known to all players.
However, this is unrealistic for practical implementation since defenders may
have to perceive the intruders and estimate their states. In this work, we
study the perimeter defense game in a photo-realistic simulator and the real
world, requiring defenders to estimate intruder states from vision. We train a
deep machine learning-based system for intruder pose detection with domain
randomization that aggregates multiple views to reduce state estimation errors,
and we adapt the defensive strategy to account for the resulting estimation
error. We introduce new performance metrics to evaluate vision-based perimeter
defense. Through extensive experiments, we show that our approach improves
state estimation and, ultimately, perimeter defense performance in both
1-defender-vs-1-intruder and 2-defenders-vs-1-intruder games.
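A minimal sketch of the aggregation idea at the heart of this abstract: fuse per-view pose estimates of the intruder into one lower-error estimate. The confidence-weighted averaging rule and array shapes below are illustrative assumptions, not the paper's actual estimator.
```python
import numpy as np

def fuse_multiview_estimates(positions, confidences):
    """Fuse per-view 3D position estimates of the intruder.

    positions:   (V, 3) array, one estimate per defender camera view
    confidences: (V,) per-view detection confidences in (0, 1]
    Hypothetical fusion rule: confidence-weighted average.
    """
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                                   # normalize weights
    return (w[:, None] * np.asarray(positions)).sum(axis=0)

# Three noisy views of an intruder near (5, 2, 1).
views = np.array([[5.2, 1.9, 1.1],
                  [4.8, 2.2, 0.9],
                  [5.1, 2.0, 1.0]])
conf = np.array([0.9, 0.6, 0.8])
print(fuse_multiview_estimates(views, conf))   # ~[5.06 2.01 1.01]
```
Averaging V comparably noisy, independent views shrinks the estimation error roughly by a factor of 1/sqrt(V), which is the intuition behind aggregating multiple views before committing to a defensive strategy.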
Related papers
- Better Prevent than Tackle: Valuing Defense in Soccer Based on Graph Neural Networks [22.27208191198993]
DEFCON (DEFensive CONtribution evaluator) is a framework that quantifies player-level defensive contributions for every attacking situation in soccer.
DEFCON estimates the success probability and expected value of each attacking option, along with each defender's responsibility for stopping it.
It assigns positive or negative credits to defenders according to whether they reduced or increased the opponent's Expected Possession Value.
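The credit rule in the last sentence can be made concrete in a few lines. The function and the EPV and responsibility values below are hypothetical illustrations, not taken from the paper.
```python
def defender_credit(responsibility, epv_before, epv_after):
    """Sketch of EPV-based credit assignment: positive credit when the
    opponent's Expected Possession Value (EPV) drops after the action,
    negative when it rises, scaled by this defender's responsibility
    share for the attacking option. Hypothetical, for illustration."""
    return responsibility * (epv_before - epv_after)

# Made-up numbers: a tackle drops the attack's EPV from 0.12 to 0.04,
# and this defender carried 70% of the responsibility for stopping it.
print(defender_credit(0.7, 0.12, 0.04))   # +0.056 -> positive credit
print(defender_credit(0.7, 0.05, 0.09))   # -0.028 -> negative credit
```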
arXiv Detail & Related papers (2025-12-11T07:12:23Z) - SoK: The Last Line of Defense: On Backdoor Defense Evaluation [21.126129826672894]
Backdoor attacks pose a significant threat to deep learning models by implanting hidden vulnerabilities that can be activated by malicious inputs.
This work presents a systematic (meta-)analysis of backdoor defenses through a comprehensive literature review and empirical evaluation.
We analyzed 183 backdoor defense papers published between 2018 and 2025 across major AI and security venues.
arXiv Detail & Related papers (2025-11-17T08:51:18Z) - The Attacker Moves Second: Stronger Adaptive Attacks Bypass Defenses Against LLM Jailbreaks and Prompt Injections [74.60337113759313]
Current defenses against jailbreaks and prompt injections are typically evaluated against a static set of harmful attack strings.
We argue that this evaluation process is flawed. Instead, we should evaluate defenses against adaptive attackers who explicitly modify their attack strategy to counter a defense's design.
arXiv Detail & Related papers (2025-10-10T05:51:04Z) - A Critical Evaluation of Defenses against Prompt Injection Attacks [95.81023801370073]
Large Language Models (LLMs) are vulnerable to prompt injection attacks.
Several defenses have recently been proposed, often claiming to mitigate these attacks successfully.
We argue that existing studies lack a principled approach to evaluating these defenses.
arXiv Detail & Related papers (2025-05-23T19:39:56Z) - Two Heads Are Better than One: Model-Weight and Latent-Space Analysis for Federated Learning on Non-iid Data against Poisoning Attacks [12.325216357472137]
Federated Learning (FL) is a popular paradigm that enables remote clients to jointly train a global model without sharing their raw data.
FL has been shown to be vulnerable to model poisoning attacks due to its distributed nature.
We propose GeminiGuard, designed to be lightweight, versatile, and unsupervised so that it aligns well with the practical requirements of deploying such defenses.
arXiv Detail & Related papers (2025-03-30T02:56:05Z) - Decoding FL Defenses: Systemization, Pitfalls, and Remedies [16.907513505608666]
There are no guidelines for evaluating Federated Learning (FL) defenses.
We design a comprehensive systemization of FL defenses along three dimensions.
We survey 50 top-tier defense papers and identify the commonly used components in their evaluation setups.
arXiv Detail & Related papers (2025-02-03T23:14:02Z) - The VLLM Safety Paradox: Dual Ease in Jailbreak Attack and Defense [56.32083100401117]
We investigate why Vision Large Language Models (VLLMs) are prone to jailbreak attacks.
We then make a key observation: existing defense mechanisms suffer from an over-prudence problem.
We find that the two representative evaluation methods for jailbreaks often exhibit chance agreement.
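Chance agreement between two evaluators is conventionally quantified with a chance-corrected statistic such as Cohen's kappa, where kappa near 0 means the evaluators agree no more than random labeling would. The sketch below illustrates the measure itself, not the paper's evaluation code.
```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """kappa = (p_o - p_e) / (1 - p_e): observed agreement p_o,
    corrected by the agreement p_e expected if both evaluators
    labeled at random with their own label frequencies."""
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e)

# Two evaluators labeling 8 responses as jailbroken (1) or safe (0).
a = [1, 1, 0, 0, 1, 0, 1, 0]
b = [1, 0, 0, 1, 1, 0, 0, 1]
print(cohens_kappa(a, b))   # 0.0: agreement no better than chance
```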
arXiv Detail & Related papers (2024-11-13T07:57:19Z) - Hindering Adversarial Attacks with Multiple Encrypted Patch Embeddings [13.604830818397629]
We propose a new key-based defense focusing on both efficiency and robustness.
We build upon the previous defense with two major improvements: (1) efficient training and (2) optional randomization.
Experiments were carried out on the ImageNet dataset, and the proposed defense was evaluated against an arsenal of state-of-the-art attacks.
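As a rough illustration of the key-based idea (this paper's exact construction is not reproduced here), a common ingredient of such defenses is a secret, key-derived transformation, for example a permutation, that the model is trained with and that an attacker without the key cannot replicate.
```python
import numpy as np

def keyed_patch_permutation(patch_embeddings, key):
    """Permute patch embeddings in a secret, key-derived order.
    patch_embeddings: (num_patches, dim). Hypothetical sketch of the
    general key-based mechanism, not the paper's method."""
    rng = np.random.default_rng(key)     # the key seeds the permutation
    perm = rng.permutation(patch_embeddings.shape[0])
    return patch_embeddings[perm]

emb = np.arange(12, dtype=float).reshape(4, 3)   # 4 patches, dim 3
print(keyed_patch_permutation(emb, key=42))      # key-dependent ordering
```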
arXiv Detail & Related papers (2023-09-04T14:08:34Z) - Randomness in ML Defenses Helps Persistent Attackers and Hinders Evaluators [49.52538232104449]
It is becoming increasingly imperative to design robust ML defenses.
Recent work has found that many defenses that initially resist state-of-the-art attacks can be broken by an adaptive adversary.
We take steps to simplify the design of defenses and argue that white-box defenses should eschew randomness when possible.
arXiv Detail & Related papers (2023-02-27T01:33:31Z) - Are Defenses for Graph Neural Networks Robust? [72.1389952286628]
We find that most Graph Neural Network (GNN) defenses provide no or only marginal improvement over an undefended baseline.
We advocate using custom adaptive attacks as a gold standard and we outline the lessons we learned from successfully designing such attacks.
Our diverse collection of perturbed graphs forms a (black-box) unit test offering a first glance at a model's robustness.
arXiv Detail & Related papers (2023-01-31T15:11:48Z) - Adversarial Classification of the Attacks on Smart Grids Using Game Theory and Deep Learning [27.69899235394942]
This paper proposes a game-theoretic approach to evaluate the variations an attacker causes in the power measurements.
A zero-sum game is used to model the interactions between the attacker and defender.
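A zero-sum attacker-defender model of this kind can be solved for the defender's optimal mixed strategy with a standard linear program: maximize the guaranteed game value v subject to the mixed strategy earning at least v against every attacker action. The 2x2 payoff matrix below is illustrative, not from the paper.
```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Optimal mixed strategy for the row player (defender) of a
    zero-sum game. payoff[i, j] is the defender's payoff when the
    defender plays i and the attacker plays j."""
    A = np.asarray(payoff, dtype=float)
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                 # minimize -v == maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])    # v <= sum_i x_i * A[i, j]
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0                            # strategy probs sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=np.ones(1),
                  bounds=[(0, 1)] * m + [(None, None)])
    return res.x[:m], res.x[-1]

# Illustrative payoffs: rows = defender monitors bus 1 or 2, columns =
# attacker tampers with bus 1 or 2 (made-up values).
strategy, value = solve_zero_sum([[1.0, -1.0],
                                  [-0.5, 0.5]])
print(strategy, value)   # ~[0.33 0.67], game value ~0.0
```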
arXiv Detail & Related papers (2021-06-06T18:43:28Z) - Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
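One plausible shape for the purification-plus-detection pipeline (a hypothetical sketch, not the paper's detector) is to flag inputs whose purified version differs too much from the original, on the assumption that purification removes most of an adversarial perturbation.
```python
import numpy as np

def detect_adversarial(x, purify, threshold):
    """Flag x as adversarial when purification changes it a lot:
    a large residual suggests a perturbation was stripped away.
    Hypothetical rule for illustration."""
    residual = np.linalg.norm(x - purify(x))
    return residual > threshold

# Toy purifier: clip extreme values (stands in for a learned model).
purify = lambda x: np.clip(x, -1.0, 1.0)
clean = np.array([0.2, -0.5, 0.9])
adv = clean + np.array([0.0, 0.0, 1.5])          # large perturbation
print(detect_adversarial(clean, purify, 0.5))    # False
print(detect_adversarial(adv, purify, 0.5))      # True
```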
arXiv Detail & Related papers (2021-06-01T07:10:54Z) - Game-Theoretic and Machine Learning-based Approaches for Defensive Deception: A Survey [13.624968742674143]
This paper focuses on defensive deception research centered on game theory and machine learning.
It closes with an outline of some research directions to tackle major gaps in current defensive deception research.
arXiv Detail & Related papers (2021-01-21T21:55:43Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z) - Harnessing adversarial examples with a surprisingly simple defense [47.64219291655723]
I introduce a very simple method to defend against adversarial examples.
The basic idea is to raise the slope of the ReLU function at test time.
Experiments over MNIST and CIFAR-10 datasets demonstrate the effectiveness of the proposed defense.
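The defense is simple enough to state directly: replace the test-time activation f(x) = max(0, x) with f(x) = alpha * max(0, x) for some alpha > 1, with no retraining. A minimal sketch, using an arbitrary slope rather than the paper's tuned setting:
```python
import numpy as np

def sloped_relu(x, alpha=2.0):
    """ReLU with an adjustable positive-side slope: alpha = 1 is the
    standard ReLU; the defense raises alpha at test time only."""
    return np.where(x > 0, alpha * x, 0.0)

x = np.array([-1.5, 0.0, 0.5, 2.0])
print(sloped_relu(x))              # [0.  0.  1.  4.]
print(sloped_relu(x, alpha=1.0))   # standard ReLU: [0.  0.  0.5 2.]
```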
arXiv Detail & Related papers (2020-04-26T03:09:42Z) - Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.