Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile
Edge
- URL: http://arxiv.org/abs/2204.08189v1
- Date: Mon, 18 Apr 2022 06:54:48 GMT
- Title: Sardino: Ultra-Fast Dynamic Ensemble for Secure Visual Sensing at Mobile
Edge
- Authors: Qun Song, Zhenyu Yan, Wenjie Luo, and Rui Tan
- Abstract summary: Adversarial example attacks endanger mobile edge systems, such as vehicles and drones, that adopt deep neural networks for visual sensing.
This paper presents Sardino, an active and dynamic defense approach that renews the inference ensemble at run time to maintain security against an adaptive adversary.
- Score: 7.85758401939372
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial example attacks endanger mobile edge systems, such as
vehicles and drones, that adopt deep neural networks for visual sensing. This
paper presents Sardino, an active and dynamic defense approach that renews the
inference ensemble at run time to maintain security against an adaptive
adversary who tries to exfiltrate the ensemble and construct the corresponding
effective adversarial examples. By applying consistency check and data fusion
on the ensemble's predictions, Sardino can detect and thwart adversarial
inputs. Compared with training-based ensemble renewal, we use a HyperNet to
achieve one-million-times acceleration and per-frame ensemble renewal, which
presents the highest level of difficulty to the prerequisite exfiltration
attacks. Moreover, the robustness of the renewed ensembles against adversarial
examples is enhanced with adversarial learning for the HyperNet. We design a
run-time planner that maximizes the ensemble size in favor of security while
maintaining the processing frame rate. Beyond adversarial examples, Sardino can
also address the issue of out-of-distribution inputs effectively. This paper
presents extensive evaluation of Sardino's performance in counteracting
adversarial examples and applies it to build a real-time car-borne traffic sign
recognition system. Live on-road tests show the built system's effectiveness in
maintaining frame rate and detecting out-of-distribution inputs due to the
false positives of a preceding YOLO-based traffic sign detector.
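The defense described above fuses the ensemble members' predictions and flags inputs on which the members disagree. The following is a minimal illustrative sketch of such a consistency check and fusion step; the majority-vote rule and the agreement threshold are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def ensemble_check_and_fuse(probs, agreement_threshold=0.8):
    """Consistency check and data fusion over an ensemble's softmax outputs.

    probs: (n_models, n_classes) array of per-model class probabilities.
    Returns (fused_label, is_suspicious): the label from averaged probabilities,
    and a flag raised when too few ensemble members agree on the majority vote.
    """
    votes = probs.argmax(axis=1)                   # each model's predicted class
    labels, counts = np.unique(votes, return_counts=True)
    agreement = counts.max() / len(votes)          # fraction agreeing with majority
    fused = int(probs.mean(axis=0).argmax())       # fuse by averaging probabilities
    suspicious = bool(agreement < agreement_threshold)  # low consensus -> flag input
    return fused, suspicious

# Clean-looking input: all five members agree.
clean = np.array([[0.9, 0.1]] * 5)
# Adversarial-looking input: members split 3/2 across the two classes.
attacked = np.array([[0.6, 0.4], [0.4, 0.6], [0.55, 0.45], [0.3, 0.7], [0.8, 0.2]])
```

A renewed ensemble would simply supply a fresh `probs` matrix each frame; the check itself is unchanged.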
Related papers
- Hiding in Plain Sight: An IoT Traffic Camouflage Framework for Enhanced Privacy [2.0257616108612373]
Existing single-technique obfuscation methods, such as packet padding, often fall short in dynamic environments like smart homes.
This paper introduces a multi-technique obfuscation framework designed to enhance privacy by disrupting traffic analysis.
arXiv Detail & Related papers (2025-01-26T04:33:44Z)
- EdgeShield: A Universal and Efficient Edge Computing Framework for Robust AI [8.688432179052441]
We propose an edge framework design to enable universal and efficient detection of adversarial attacks.
This framework incorporates an attention-based adversarial detection methodology and a lightweight detection network formation.
The results indicate an impressive 97.43% F-score can be achieved, demonstrating the framework's proficiency in detecting adversarial attacks.
arXiv Detail & Related papers (2024-08-08T02:57:55Z)
- Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks [1.7907721703063868]
Adversarial attacks in visual object tracking have significantly degraded the performance of advanced trackers.
We propose an effective auxiliary pre-processing defense network, AADN, which performs defensive transformations on the input images before feeding them into the tracker.
arXiv Detail & Related papers (2024-02-28T01:42:31Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- RobustSense: Defending Adversarial Attack for Secure Device-Free Human Activity Recognition [37.387265457439476]
We propose a novel learning framework, RobustSense, to defend common adversarial attacks.
Our method works well on wireless human activity recognition and person identification systems.
arXiv Detail & Related papers (2022-04-04T15:06:03Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Modelling Adversarial Noise for Adversarial Defense [96.56200586800219]
Adversarial defenses typically focus on exploiting adversarial examples to remove adversarial noise or to train an adversarially robust target model.
Motivated by the observation that the relationship between adversarial data and natural data can help infer clean data from adversarial data and obtain the final correct prediction, we model adversarial noise to learn the transition relationship in the label space and use adversarial labels to improve adversarial accuracy.
arXiv Detail & Related papers (2021-09-21T01:13:26Z)
- Combating Adversaries with Anti-Adversaries [118.70141983415445]
In particular, our layer generates an input perturbation in the opposite direction of the adversarial one.
We verify the effectiveness of our approach by combining our layer with both nominally and robustly trained models.
Our anti-adversary layer significantly enhances model robustness while coming at no cost on clean accuracy.
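The anti-adversary idea summarized above (perturbing the input in the opposite direction of an attack) can be sketched for a toy linear softmax classifier. The model, the step size, and the single sign-gradient step below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def anti_adversary_step(W, x, epsilon=0.1):
    """Perturb x to *increase* confidence in the currently predicted class.

    This is the opposite direction of an FGSM-style attack, which would
    move x to decrease that confidence.
    """
    p = softmax(W @ x)
    k = int(p.argmax())                  # current prediction
    onehot = np.eye(len(p))[k]
    grad = W.T @ (onehot - p)            # d log p_k / dx for a linear softmax model
    return x + epsilon * np.sign(grad)   # ascend confidence instead of descending

W = np.array([[1.0, 0.0], [0.0, 1.0]])  # toy 2-class linear model
x = np.array([0.2, 0.1])
x_def = anti_adversary_step(W, x)
```

For a deep model one would backpropagate to obtain the input gradient instead of using the closed-form linear expression.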
arXiv Detail & Related papers (2021-03-26T09:36:59Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in deep neural network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.