Among Us: Adversarially Robust Collaborative Perception by Consensus
- URL: http://arxiv.org/abs/2303.09495v3
- Date: Fri, 18 Aug 2023 02:40:18 GMT
- Title: Among Us: Adversarially Robust Collaborative Perception by Consensus
- Authors: Yiming Li and Qi Fang and Jiamu Bai and Siheng Chen and Felix
Juefei-Xu and Chen Feng
- Abstract summary: Multiple robots could perceive a scene (e.g., detect objects) collaboratively better than individuals.
We propose ROBOSAC, a novel sampling-based defense strategy generalizable to unseen attackers.
We validate our method on the task of collaborative 3D object detection in autonomous driving scenarios.
- Score: 50.73128191202585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiple robots could perceive a scene (e.g., detect objects) collaboratively
better than individuals, although they easily suffer from adversarial attacks when
using deep learning. This could be addressed by adversarial defense, but its
training requires the often-unknown attacking mechanism. Instead, we
propose ROBOSAC, a novel sampling-based defense strategy generalizable to
unseen attackers. Our key idea is that collaborative perception should lead to
consensus rather than dissensus in results compared to individual perception.
This leads to our hypothesize-and-verify framework: perception results with and
without collaboration from a random subset of teammates are compared until
reaching a consensus. In such a framework, more teammates in the sampled subset
often entail better perception performance but require longer sampling time to
reject potential attackers. Thus, we derive how many sampling trials are needed
to ensure the desired size of an attacker-free subset, or equivalently, the
maximum size of such a subset that we can successfully sample within a given
number of trials. We validate our method on the task of collaborative 3D object
detection in autonomous driving scenarios.
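To make the trial-count reasoning above concrete, here is a minimal sketch, assuming a RANSAC-style bound and a simplified hypothesize-and-verify loop; the function names, the consensus test, and the exact bound are illustrative assumptions, not the paper's reference implementation.
```python
import math
import random

def num_trials(subset_size, attacker_ratio, success_prob=0.99):
    """RANSAC-style bound (assumption): number of random subsets of size
    `subset_size` needed so that, with probability `success_prob`, at least
    one subset is attacker-free, given an estimated `attacker_ratio`."""
    p_clean = (1.0 - attacker_ratio) ** subset_size  # prob. one subset has no attackers
    if p_clean >= 1.0:
        return 1
    if p_clean <= 0.0:
        return math.inf
    return math.ceil(math.log(1.0 - success_prob) / math.log(1.0 - p_clean))

def hypothesize_and_verify(teammates, ego_result, collaborate, consensus, subset_size, trials):
    """Fuse messages from a random subset of teammates and keep the fused result
    only if it agrees with the ego agent's own (individual) perception."""
    for _ in range(trials):
        subset = random.sample(teammates, subset_size)
        fused = collaborate(ego_result, subset)   # hypothesize: perceive with this subset
        if consensus(ego_result, fused):          # verify: collaboration should not dissent
            return fused, subset
    return ego_result, []                         # fall back to individual perception
```
Under this bound, for example, an attacker ratio of 0.3 and subsets of size 3 would require about 11 trials to hit an attacker-free subset with 99% probability; how `collaborate` and `consensus` are actually realized (e.g., agreement between detection sets) is specific to the paper and not reproduced here.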
Related papers
- Benchmarking and Analyzing Robust Point Cloud Recognition: Bag of Tricks
for Defending Adversarial Examples [25.029854308139853]
Adversarial examples on 3D point clouds are more challenging to defend against than those on 2D images.
In this paper, we first establish a comprehensive and rigorous point cloud adversarial robustness benchmark.
We then perform extensive and systematic experiments to identify an effective combination of these tricks.
We construct a more robust defense framework achieving an average accuracy of 83.45% against various attacks.
arXiv Detail & Related papers (2023-07-31T01:34:24Z) - Measuring Equality in Machine Learning Security Defenses: A Case Study
in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many proposed methods can cause direct harm, such as false rejections and unequal benefits from robustness training.
We compare equality between two rejection-based defenses, randomized smoothing and neural rejection, and find randomized smoothing to be more equitable owing to its sampling mechanism for minority groups (a minimal smoothing sketch appears after this list).
arXiv Detail & Related papers (2023-02-17T16:19:26Z) - Adversarial Robustness of Deep Reinforcement Learning based Dynamic
Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z) - Towards A Conceptually Simple Defensive Approach for Few-shot
classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
arXiv Detail & Related papers (2021-10-24T05:46:03Z) - TREATED:Towards Universal Defense against Textual Adversarial Attacks [28.454310179377302]
We propose TREATED, a universal adversarial detection method that can defend against attacks of various perturbation levels without making any assumptions.
Extensive experiments on three competitive neural networks and two widely used datasets show that our method achieves better detection performance than baselines.
arXiv Detail & Related papers (2021-09-13T03:31:20Z) - Internal Wasserstein Distance for Adversarial Attack and Defense [40.27647699862274]
We propose an internal Wasserstein distance (IWD) to measure image similarity between a sample and its adversarial example.
We develop a novel attack method by capturing the distribution of patches in original samples.
We also build a new defense method that seeks to learn robust models to defend against unseen adversarial examples.
arXiv Detail & Related papers (2021-03-13T02:08:02Z) - Learning to Separate Clusters of Adversarial Representations for Robust
Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z) - Defensive Few-shot Learning [77.82113573388133]
This paper investigates a new challenging problem called defensive few-shot learning.
It aims to learn a robust few-shot model against adversarial attacks.
The proposed framework can effectively make the existing few-shot models robust against adversarial attacks.
arXiv Detail & Related papers (2019-11-16T05:57:16Z)
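For context on the smoothing defense referenced in the speech-recognition comparison above, below is a minimal sketch of randomized-smoothing-style prediction with abstention; the classifier interface, noise level, and vote threshold are illustrative assumptions rather than that paper's setup.
```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n_samples=100, min_vote_share=0.6):
    """Classify many Gaussian-noised copies of an input and return the majority
    label, abstaining (None) when no label wins a clear share of the votes."""
    votes = {}
    for _ in range(n_samples):
        noisy = x + sigma * np.random.randn(*x.shape)  # add isotropic Gaussian noise
        label = classifier(noisy)                      # hard label from the base model
        votes[label] = votes.get(label, 0) + 1
    top_label, top_count = max(votes.items(), key=lambda kv: kv[1])
    if top_count / n_samples < min_vote_share:
        return None                                    # abstain / reject the input
    return top_label
```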