Underload: Defending against Latency Attacks for Object Detectors on Edge Devices
- URL: http://arxiv.org/abs/2412.02171v1
- Date: Tue, 03 Dec 2024 05:00:26 GMT
- Title: Underload: Defending against Latency Attacks for Object Detectors on Edge Devices
- Authors: Tianyi Wang, Zichen Wang, Cong Wang, Yuanchao Shu, Ruilong Deng, Peng Cheng, Jiming Chen
- Abstract summary: A new class of latency attacks targeting the real-time processing capability of object detectors has recently been reported.
We make an initial attempt to defend against this attack via background-attentive adversarial training.
Experiments demonstrate the effectiveness of the defense, restoring real-time processing capability from $13$ FPS to $43$ FPS.
- Score: 21.359326502877487
- License:
- Abstract: Object detection is a fundamental enabler for many real-time downstream applications such as autonomous driving, augmented reality and supply chain management. However, the neural-network backbone of these algorithms is brittle to imperceptible perturbations of the system inputs, which are generally known as misclassification attacks. A new class of latency attacks, which target the real-time processing capability, has recently been reported. These attacks exploit new attack surfaces in object detectors by creating a computational bottleneck in the post-processing module, which leads to cascading failures and puts real-time downstream tasks at risk. In this work, we make an initial attempt to defend against this attack via background-attentive adversarial training that is also cognizant of the underlying hardware capabilities. We first draw system-level connections between latency attacks and hardware capacity across heterogeneous GPU devices. Based on the particular adversarial behaviors, we use objectness loss as a proxy and build background attention into the adversarial training pipeline, achieving a reasonable balance between clean and robust accuracy. Extensive experiments demonstrate the effectiveness of the defense, restoring real-time processing capability from $13$ FPS to $43$ FPS on a Jetson Orin NX with a better trade-off between clean and robust accuracy.
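For intuition, here is a minimal sketch of the core idea: a background-attentive objectness proxy inside an adversarial training loop, assuming a YOLO-style detector whose head exposes per-anchor objectness logits. The model interface, mask construction, and all hyperparameters are illustrative assumptions, not the authors' implementation; a real pipeline would also keep the standard detection losses.

```python
import torch
import torch.nn.functional as F

def background_objectness_loss(obj_logits, gt_mask):
    # obj_logits: (N, A) per-anchor objectness logits from the detector head.
    # gt_mask:    (N, A) floats in {0, 1}; 1 where an anchor matches a
    #             ground-truth object, else 0.
    # Latency attacks work by raising objectness on *background* anchors,
    # so the proxy loss is weighted toward those cells.
    bg = 1.0 - gt_mask
    loss = F.binary_cross_entropy_with_logits(obj_logits, gt_mask,
                                              reduction="none")
    return (loss * bg).sum() / bg.sum().clamp(min=1.0)

def adv_train_step(model, images, gt_mask, optimizer,
                   eps=8 / 255, alpha=2 / 255, steps=3):
    # Inner maximization: PGD on the proxy loss crafts a perturbation that
    # pushes background objectness up, mimicking a latency attack.
    delta = torch.empty_like(images).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = background_objectness_loss(model(images + delta), gt_mask)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()  # ascend: strengthen the attack
            delta.clamp_(-eps, eps)
    # Outer minimization: train the detector to suppress background
    # objectness on the perturbed batch.
    optimizer.zero_grad()
    background_objectness_loss(model(images + delta.detach()), gt_mask).backward()
    optimizer.step()
```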
Related papers
- The Inherent Adversarial Robustness of Analog In-Memory Computing [2.435021773579434]
A key challenge for Deep Neural Network (DNN) algorithms is their vulnerability to adversarial attacks.
In this paper, we experimentally validate a conjecture for the first time on an AIMC chip based on Phase Change Memory (PCM) devices.
Additional robustness is also observed when performing hardware-in-the-loop attacks.
arXiv Detail & Related papers (2024-11-11T14:29:59Z)
- Real-Time Zero-Day Intrusion Detection System for Automotive Controller Area Network on FPGAs [13.581341206178525]
This paper presents an unsupervised-learning-based convolutional autoencoder architecture for detecting zero-day attacks.
We quantise the model using Vitis-AI tools from AMD/Xilinx targeting a resource-constrained Zynq Ultrascale platform.
The proposed model successfully achieves equal or higher classification accuracy (> 99.5%) on unseen DoS, fuzzing, and spoofing attacks.
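As a rough illustration of this approach, below is a minimal convolutional autoencoder over windows of CAN frame payloads: it is trained only on benign traffic, and a reconstruction error above a benign-calibrated threshold flags a (possibly zero-day) attack. The window size, channel widths, and threshold handling are assumptions, not the paper's quantized Vitis-AI design.

```python
import torch
import torch.nn as nn

class CANAutoencoder(nn.Module):
    """1-D convolutional autoencoder over windows of CAN frame payloads."""
    def __init__(self, n_bytes=8, window=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_bytes, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(8, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(16, n_bytes, kernel_size=3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):  # x: (batch, n_bytes, window), bytes scaled to [0, 1]
        return self.decoder(self.encoder(x))

def is_attack(model, window, threshold):
    # Threshold is calibrated on held-out benign traffic; a high
    # reconstruction error means the window looks unlike benign data.
    with torch.no_grad():
        err = torch.mean((model(window) - window) ** 2)
    return err.item() > threshold
```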
arXiv Detail & Related papers (2024-01-19T14:36:01Z)
- Overload: Latency Attacks on Object Detection for Edge Devices [47.9744734181236]
This paper investigates latency attacks on deep learning applications.
Unlike common adversarial attacks for misclassification, the goal of latency attacks is to increase the inference time.
We use object detection to demonstrate how such attacks work.
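To see why these attacks are potent, the benchmark sketch below times torchvision's NMS as the number of candidate boxes grows; an attack only needs to push more boxes past the confidence threshold to blow up this post-processing step. The box counts and sizes are arbitrary illustrations.

```python
import time
import torch
from torchvision.ops import nms

# Post-processing cost grows quickly with the number of candidate boxes
# surviving the confidence threshold, which is what a latency attack inflates.
for n in (100, 1_000, 10_000, 100_000):
    xy = torch.rand(n, 2) * 600           # random top-left corners
    wh = torch.rand(n, 2) * 50 + 1        # random widths/heights
    boxes = torch.cat([xy, xy + wh], dim=1)  # (x1, y1, x2, y2)
    scores = torch.rand(n)
    t0 = time.perf_counter()
    nms(boxes, scores, iou_threshold=0.5)
    print(f"{n:>7} boxes: {time.perf_counter() - t0:.4f}s")
```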
arXiv Detail & Related papers (2023-04-11T17:24:31Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study black-box adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
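As a toy illustration of an attacker-defender fixed point, the sketch below iterates pure-strategy best responses in a random general-sum matrix game until neither side wants to deviate or the dynamics cycle. The payoffs and dimensions are synthetic; the paper's evasion-attack setting is far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))  # attacker payoff for (attack i, defense j)
D = rng.normal(size=(4, 4))  # defender payoff (general-sum: not -A)

i, j, seen = 0, 0, set()
while (i, j) not in seen:
    seen.add((i, j))
    i = int(np.argmax(A[:, j]))  # attacker best-responds to current defense
    j = int(np.argmax(D[i, :]))  # defender best-responds to new attack

if i == int(np.argmax(A[:, j])):
    print(f"fixed point reached: attack={i}, defense={j}")
else:
    print("best-response dynamics cycle; no pure-strategy fixed point")
```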
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
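A simplified sketch of this monitoring idea follows, with a diagonal Gaussian standing in for the normalizing flow: record a hidden layer's activations on clean data, fit a density, and flag inputs whose activations score a low log-likelihood. The monitored layer and the Gaussian substitution are assumptions for illustration.

```python
import math
import torch

class ActivationDensity:
    """Score inputs by the density of a hidden layer's activations."""
    def __init__(self, model, layer):
        self.model, self.acts = model, None
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        self.acts = output.flatten(1).detach()

    def fit(self, clean_loader):
        # Collect activations on clean, in-distribution data only.
        feats = []
        with torch.no_grad():
            for x, _ in clean_loader:
                self.model(x)
                feats.append(self.acts)
        feats = torch.cat(feats)
        self.mean, self.var = feats.mean(0), feats.var(0) + 1e-6

    def log_likelihood(self, x):
        # Per-sample diagonal-Gaussian log-likelihood of the activations.
        with torch.no_grad():
            self.model(x)
        z = (self.acts - self.mean) ** 2 / self.var
        return -0.5 * (z + torch.log(2 * math.pi * self.var)).sum(1)
```

Inputs scoring below a threshold calibrated on clean data would be flagged as OOD or adversarial.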
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
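A minimal decision-based sketch of the idea is below: random-search for a bounded perturbation that lowers the tracker's IoU against its clean prediction, querying only the tracker's output box. The `tracker(frame) -> box` interface, the search procedure, and all budgets are simplifying assumptions; the actual IoU attack uses a more structured search.

```python
import torch

def box_iou(a, b):
    # IoU of two boxes given as (x1, y1, x2, y2) floats.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def iou_attack_step(tracker, frame, clean_box, noise,
                    trials=10, sigma=4.0, budget=16.0):
    # Decision-based step: only the tracker's output box is observed,
    # no gradients or scores.
    best_iou = box_iou(tracker(frame + noise), clean_box)
    for _ in range(trials):
        cand = (noise + sigma * torch.randn_like(noise)).clamp(-budget, budget)
        iou = box_iou(tracker(frame + cand), clean_box)
        if iou < best_iou:  # keep noise that degrades tracking the most
            noise, best_iou = cand, iou
    return noise, best_iou
```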
arXiv Detail & Related papers (2021-03-27T16:20:32Z)
- Hardware Accelerator for Adversarial Attacks on Deep Learning Neural Networks [7.20382137043754]
A class of adversarial attack network algorithms has been proposed to generate robust physical perturbations.
In this paper, we propose the first hardware accelerator for adversarial attacks based on memristor crossbar arrays.
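For context, the kind of algorithm such an accelerator targets looks roughly like the expectation-over-transformation sketch below, which averages the attack gradient over random transforms so the perturbation survives physical distortions; the loop is dominated by matrix multiplications, which is what crossbar arrays accelerate. The interface and settings are illustrative, not the paper's design.

```python
import torch

def eot_perturbation(model, x, label, transforms,
                     steps=40, alpha=1 / 255, eps=8 / 255):
    # transforms: list of differentiable callables (e.g., brightness or
    # affine jitter) approximating physical-world variation.
    loss_fn = torch.nn.CrossEntropyLoss()
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Average the attack objective over random transformations.
        loss = sum(loss_fn(model(t(x + delta)), label) for t in transforms)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()  # ascend: robust misclassification
            delta.clamp_(-eps, eps)
    return delta.detach()
```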
arXiv Detail & Related papers (2020-08-03T21:55:41Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires less computational resources and can be trained in parallel.
The overall output of the system, obtained by aggregating the outputs of the individual networks, can be tuned by the user to trade off false positives and false negatives.
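Below is a minimal sketch of such a tunable aggregation, assuming each ensemble member emits a per-pixel firearm probability map; the voting rule and thresholds are illustrative stand-ins for the paper's aggregation.

```python
import torch

def ensemble_verdict(mask_probs, pixel_thresh=0.5, votes_needed=2):
    # mask_probs: list of (H, W) per-network firearm probability maps.
    # Lowering votes_needed trades false negatives for false positives,
    # which is the user-tunable knob described above.
    votes = torch.stack([(m > pixel_thresh).int() for m in mask_probs]).sum(0)
    detection_mask = votes >= votes_needed  # pixels enough members agree on
    return bool(detection_mask.any()), detection_mask
```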
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
- Pelican: A Deep Residual Network for Network Intrusion Detection [7.562843347215287]
We propose a deep neural network, Pelican, that is built upon specially-designed residual blocks.
Pelican achieves high attack detection performance while keeping a very low false alarm rate.
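For reference, a generic residual block of the kind such a network stacks is sketched below; Pelican's blocks are specially designed, so treat this as a plain baseline, and the 1-D convolutions are an assumption about the input encoding.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Plain residual block over 1-D feature maps."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        # The skip connection lets gradients bypass the conv stack,
        # enabling deeper networks without degradation.
        return self.act(self.body(x) + x)
```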
arXiv Detail & Related papers (2020-01-19T05:07:48Z)