Defending Black-box Skeleton-based Human Activity Classifiers
- URL: http://arxiv.org/abs/2203.04713v1
- Date: Wed, 9 Mar 2022 13:46:10 GMT
- Title: Defending Black-box Skeleton-based Human Activity Classifiers
- Authors: He Wang, Yunfeng Diao, Zichang Tan, Guodong Guo
- Abstract summary: In this paper, we investigate skeleton-based Human Activity Recognition, an important time-series task whose defense against adversarial attacks remains under-explored.
We name our framework Bayesian Energy-based Adversarial Training, or BEAT. BEAT is straightforward yet elegant: it turns vulnerable black-box classifiers into robust ones without sacrificing accuracy.
- Score: 38.95979614080714
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning has been regarded as the `go to' solution for many tasks today,
but its intrinsic vulnerability to malicious attacks has become a major
concern. The vulnerability is affected by a variety of factors including
models, tasks, data, and attackers. Consequently, methods such as Adversarial
Training and Randomized Smoothing have been proposed to tackle the problem in a
wide range of applications. In this paper, we investigate skeleton-based Human
Activity Recognition, an important time-series task whose defense against
attacks remains under-explored. Our method features (1) a new
Bayesian Energy-based formulation of robust discriminative classifiers, (2) a
new parameterization of the adversarial sample manifold of actions, and (3) a
new post-train Bayesian treatment on both the adversarial samples and the
classifier. We name our framework Bayesian Energy-based Adversarial Training or
BEAT. BEAT is straightforward yet elegant: it turns vulnerable black-box
classifiers into robust ones without sacrificing accuracy. It demonstrates
surprising and universal effectiveness across a wide range of action
classifiers and datasets, under various attacks.
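To make the post-train idea above more concrete, here is a minimal, hedged sketch. It is NOT BEAT's actual algorithm; it only illustrates the generic recipe the abstract describes: keep a pre-trained skeleton classifier frozen, append a small trainable component, and tune only that component on adversarial skeleton sequences (standard PGD stands in for the paper's adversarial-manifold parameterization, and a JEM-style energy reading of the logits is shown only as a reference point). All names, shapes, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch only -- NOT BEAT's actual algorithm; see lead-in above.
import torch
import torch.nn as nn
import torch.nn.functional as F

def logits_energy(logits):
    """Generic energy-based reading of a discriminative classifier (JEM-style):
    lower energy ~ higher unnormalized data likelihood. Shown only to illustrate
    the 'energy-based formulation' idea; BEAT's actual formulation differs."""
    return -torch.logsumexp(logits, dim=1)

def pgd_attack(model, x, y, eps=0.01, alpha=0.002, steps=10):
    """Standard L_inf PGD on skeleton sequences, e.g. shape (batch, frames, joints, 3)."""
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)   # project back into the eps-ball
    return x_adv.detach()

class PostTrainDefense(nn.Module):
    """Frozen victim classifier plus a small trainable correction head (hypothetical)."""
    def __init__(self, victim, num_classes):
        super().__init__()
        self.victim = victim.eval()
        for p in self.victim.parameters():          # victim stays intact, no re-training
            p.requires_grad_(False)
        self.head = nn.Linear(num_classes, num_classes)

    def forward(self, x):
        return self.head(self.victim(x))            # assumes the victim outputs logits

def post_train(defended, loader, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(defended.head.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x_adv = pgd_attack(defended, x, y)       # adversarial samples near the data
            loss = F.cross_entropy(defended(x), y) + F.cross_entropy(defended(x_adv), y)
            opt.zero_grad(); loss.backward(); opt.step()
```

The design choice mirrored here is that only the appended component is optimized, so the pre-trained classifier and its clean accuracy are left untouched.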
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Post-train Black-box Defense via Bayesian Boundary Correction [9.769249984262958]
We propose a new post-train black-box defense framework for deep neural networks.
It can turn any pre-trained classifier into a resilient one with little knowledge of the model specifics.
It is further equipped with a new post-train strategy that keeps the victim model intact, avoiding re-training.
arXiv Detail & Related papers (2023-06-29T14:33:20Z)
- Understanding the Vulnerability of Skeleton-based Human Activity Recognition via Black-box Attack [53.032801921915436]
Human Activity Recognition (HAR) has been employed in a wide range of applications, e.g. self-driving cars.
Recently, the robustness of skeleton-based HAR methods has been questioned due to their vulnerability to adversarial attacks.
We show such threats exist, even when the attacker only has access to the input/output of the model.
We propose the very first black-box adversarial attack approach in skeleton-based HAR called BASAR.
arXiv Detail & Related papers (2022-11-21T09:51:28Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment the recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
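A minimal sketch of the detection step described above, under stated assumptions: the noise-based perturbation and the 128-dimensional feature vectors below are placeholders, not the paper's causal intervention or its actual recommender representation.

```python
# Hedged sketch of the detection step only: a binary classifier trained to
# separate clean inputs from crafted adversarial ones.
import torch
import torch.nn as nn

def make_detector_dataset(clean_batch, eps=0.05):
    perturbed = clean_batch + eps * torch.randn_like(clean_batch)  # placeholder perturbation
    x = torch.cat([clean_batch, perturbed], dim=0)
    y = torch.cat([torch.zeros(len(clean_batch)), torch.ones(len(perturbed))]).long()
    return x, y

detector = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

clean_batch = torch.randn(32, 128)          # hypothetical feature vectors
x, y = make_detector_dataset(clean_batch)
loss = loss_fn(detector(x), y)               # label 0 = clean, 1 = crafted/adversarial
opt.zero_grad(); loss.backward(); opt.step()
```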
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Towards A Conceptually Simple Defensive Approach for Few-shot classifiers Against Adversarial Support Samples [107.38834819682315]
We study a conceptually simple approach to defend few-shot classifiers against adversarial attacks.
We propose a simple attack-agnostic detection method, using the concept of self-similarity and filtering.
Our evaluation on the miniImagenet (MI) and CUB datasets exhibits good attack detection performance.
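One reading of the self-similarity idea, sketched here with hedged assumptions: a support sample is flagged when its embedding is unusually dissimilar from the other support samples of its class. The threshold and the specific similarity measure are illustrative, not the paper's exact procedure.

```python
# Hedged sketch of a self-similarity style filter for few-shot support sets.
import torch
import torch.nn.functional as F

def flag_suspicious_supports(embeddings, threshold=0.5):
    """embeddings: (n_support, d) features of one class's support set."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t()                               # pairwise cosine similarity
    n = sim.size(0)
    mean_sim = (sim.sum(dim=1) - 1.0) / (n - 1)   # exclude each sample's self-similarity
    return mean_sim < threshold                   # True = possibly adversarial support

suspicious = flag_suspicious_supports(torch.randn(5, 64))
```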
arXiv Detail & Related papers (2021-10-24T05:46:03Z)
- ATRO: Adversarial Training with a Rejection Option [10.36668157679368]
This paper proposes a classification framework with a rejection option to mitigate the performance deterioration caused by adversarial examples.
Applying the adversarial training objective to both a classifier and a rejection function simultaneously, we can choose to abstain from classification when it has insufficient confidence to classify a test data point.
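A minimal sketch of the rejection mechanism, under simplifying assumptions: ATRO learns the rejection function jointly with the adversarial training objective, whereas the stand-in below simply abstains when softmax confidence falls under a fixed threshold.

```python
# Hedged sketch of classification with a rejection option (simplified stand-in).
import torch
import torch.nn.functional as F

REJECT = -1  # sentinel label meaning "abstain"

def predict_with_rejection(model, x, threshold=0.7):
    probs = F.softmax(model(x), dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = REJECT   # abstain on low-confidence inputs
    return pred

# Usage: preds = predict_with_rejection(model, batch); answered = preds != REJECT
```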
arXiv Detail & Related papers (2020-10-24T14:05:03Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
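A hedged sketch of one plausible label-only membership signal consistent with the description above: query the victim repeatedly on perturbed copies of a sample and score membership by how often the predicted label stays unchanged. The noise model, query budget, and scoring rule are assumptions, not the paper's exact attack.

```python
# Hedged sketch of a label-only membership signal via repeated queries.
import torch

def sampling_membership_score(query_labels, x, n_queries=50, sigma=0.05):
    """query_labels(x) -> predicted labels only; no confidence scores are ever used."""
    base = query_labels(x)
    agree = 0.0
    for _ in range(n_queries):
        noisy = x + sigma * torch.randn_like(x)
        agree += (query_labels(noisy) == base).float().mean().item()
    return agree / n_queries   # higher -> more likely a training member
```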
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Adversarial Feature Desensitization [12.401175943131268]
We propose a novel approach to adversarial robustness, which builds upon insights from the domain adaptation field.
Our method, called Adversarial Feature Desensitization (AFD), aims at learning features that are invariant towards adversarial perturbations of the inputs.
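A hedged sketch of the domain-adaptation flavour suggested above: a discriminator tries to tell clean features from adversarial features, while the feature extractor is trained to classify correctly and to fool that discriminator. The architecture, loss weighting, and negated-loss approximation of gradient reversal are illustrative assumptions, not AFD's published training procedure.

```python
# Hedged sketch of adversarial feature invariance via a clean-vs-adversarial discriminator.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Linear(128, 64), nn.ReLU())   # hypothetical feature extractor
clf = nn.Linear(64, 10)                                # task classifier
disc = nn.Linear(64, 2)                                # clean-vs-adversarial discriminator
opt_f = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

def afd_style_step(x_clean, x_adv, y):
    f_clean, f_adv = feat(x_clean), feat(x_adv)
    # 1) discriminator learns to separate clean vs adversarial features
    d_in = torch.cat([f_clean, f_adv]).detach()
    d_y = torch.cat([torch.zeros(len(x_clean)), torch.ones(len(x_adv))]).long()
    d_loss = F.cross_entropy(disc(d_in), d_y)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) feature extractor: classify correctly while making features indistinguishable
    task_loss = F.cross_entropy(clf(f_clean), y) + F.cross_entropy(clf(f_adv), y)
    inv_loss = -F.cross_entropy(disc(torch.cat([f_clean, f_adv])), d_y)
    loss = task_loss + 0.1 * inv_loss
    opt_f.zero_grad(); loss.backward(); opt_f.step()
```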
arXiv Detail & Related papers (2020-06-08T14:20:02Z)