Attack on Scene Flow using Point Clouds
- URL: http://arxiv.org/abs/2404.13621v3
- Date: Tue, 18 Jun 2024 01:40:23 GMT
- Title: Attack on Scene Flow using Point Clouds
- Authors: Haniyeh Ehsani Oskouie, Mohammad-Shahram Moin, Shohreh Kasaei
- Abstract summary: This paper introduces adversarial white-box attacks specifically tailored for scene flow networks.
Experimental results show that the generated adversarial examples obtain up to 33.7 relative degradation in average end-point error.
The study also shows that attacks targeting point clouds in only one dimension or color channel have a significant impact on the average end-point error.
- Score: 9.115508086522887
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks have made significant advancements in accurately estimating scene flow using point clouds, which is vital for many applications like video analysis, action recognition, and navigation. The robustness of these techniques, however, remains a concern, particularly in the face of adversarial attacks that have been proven to deceive state-of-the-art deep neural networks in many domains. Surprisingly, the robustness of scene flow networks against such attacks has not been thoroughly investigated. To address this problem, the proposed approach aims to bridge this gap by introducing adversarial white-box attacks specifically tailored for scene flow networks. Experimental results show that the generated adversarial examples obtain up to 33.7 relative degradation in average end-point error on the KITTI and FlyingThings3D datasets. The study also reveals the significant impact that attacks targeting point clouds in only one dimension or color channel have on average end-point error. Analyzing the success and failure of these attacks on the scene flow networks and their 2D optical flow network variants shows a higher vulnerability for the optical flow networks.
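This summary contains no code; as a rough, hedged sketch of how such a white-box attack is commonly implemented, the PyTorch snippet below applies a PGD-style perturbation to the source point cloud so as to maximize the average end-point error (EPE) of a scene flow network. The `model` interface, data shapes, hyperparameters, and the optional `dim_mask` (used to mimic the single-dimension attacks mentioned above) are assumptions, not the authors' implementation.

```python
import torch

def epe(pred_flow, gt_flow):
    """Average end-point error: mean L2 distance between predicted and true flow."""
    return torch.norm(pred_flow - gt_flow, dim=-1).mean()

def white_box_attack(model, pc1, pc2, gt_flow, eps=0.05, alpha=0.01, steps=20, dim_mask=None):
    """PGD-style perturbation of the source cloud pc1 that maximizes the EPE.

    pc1, pc2: (N, 3) source/target point clouds; gt_flow: (N, 3) ground-truth flow.
    dim_mask: optional (3,) tensor, e.g. torch.tensor([1., 0., 0.]), to perturb
    only one coordinate dimension (single-dimension attack).
    """
    delta = torch.zeros_like(pc1, requires_grad=True)
    for _ in range(steps):
        pred = model(pc1 + delta, pc2)       # forward pass with perturbed source cloud
        loss = epe(pred, gt_flow)            # the attacker maximizes end-point error
        loss.backward()
        with torch.no_grad():
            step = alpha * delta.grad.sign()
            if dim_mask is not None:
                step = step * dim_mask       # restrict the attack to the chosen dimension(s)
            delta += step
            delta.clamp_(-eps, eps)          # stay inside the L_inf perturbation budget
        delta.grad.zero_()
    return (pc1 + delta).detach()
```

A single-dimension attack, for example, would pass `dim_mask=torch.tensor([1., 0., 0.])` so that only the x coordinate of each point is perturbed.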
Related papers
- MirrorAttack: Backdoor Attack on 3D Point Cloud with a Distorting Mirror [5.627919459380763]
MirrorAttack is a novel and effective 3D backdoor attack method.
It implants the trigger by simply reconstructing a clean point cloud with an auto-encoder.
It achieves state-of-the-art attack success rate (ASR) on different types of victim models, even when defensive techniques intervene.
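As a rough illustration of the trigger-implantation step described above (not the authors' code): a clean point cloud is passed through a pre-trained auto-encoder, and the reconstruction, relabeled with the attacker's target class, is added to the training set. The `autoencoder` and the dataset handling are hypothetical.

```python
import torch

def poison_sample(autoencoder, points, target_label):
    """Implant the reconstruction-based trigger: the auto-encoder's reconstruction
    of a clean point cloud becomes the poisoned sample, relabeled to the target class."""
    with torch.no_grad():
        triggered = autoencoder(points.unsqueeze(0)).squeeze(0)  # (N, 3) -> (N, 3)
    return triggered, target_label

# hypothetical usage: poison a small fraction of the training set
# poisoned = [poison_sample(ae, pts, target_cls) for pts, _ in clean_subset]
```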
arXiv Detail & Related papers (2024-03-09T09:15:37Z)
- Exploring Geometry of Blind Spots in Vision Models [56.47644447201878]
We study the phenomenon of under-sensitivity in vision models such as CNNs and Transformers.
We propose a Level Set Traversal algorithm that iteratively explores regions of high confidence with respect to the input space.
We estimate the extent of these connected higher-dimensional regions over which the model maintains a high degree of confidence.
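A heavily simplified sketch of the general idea (not the authors' Level Set Traversal algorithm): starting from a source input, move toward an unrelated target input in small steps and, whenever the model's confidence in the source class drops, nudge the point back with a few gradient-ascent corrections. The `model` interface, step sizes, and confidence threshold are all assumptions.

```python
import torch
import torch.nn.functional as F

def level_set_walk(model, x_src, x_tgt, cls, steps=100, step_size=0.01,
                   correct_iters=5, conf_floor=0.9):
    """Walk from x_src toward x_tgt while keeping the model's confidence in `cls` high."""
    x = x_src.clone()
    direction = x_tgt - x_src
    for _ in range(steps):
        x = x + step_size * direction                     # move toward the target input
        for _ in range(correct_iters):
            x = x.detach().requires_grad_(True)
            conf = F.softmax(model(x.unsqueeze(0)), dim=-1)[0, cls]
            if conf.item() >= conf_floor:                 # already inside the level set
                break
            conf.backward()
            with torch.no_grad():
                x = x + 0.5 * step_size * x.grad.sign()   # nudge back toward high confidence
        x = x.detach()
    return x  # far from x_src, yet the model remains confident it is class `cls`
```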
arXiv Detail & Related papers (2023-10-30T18:00:33Z)
- Adversarial Attacks on Leakage Detectors in Water Distribution Networks [6.125017875330933]
We propose a taxonomy for adversarial attacks against machine learning based leakage detectors in water distribution networks.
Based on a mathematical formalization of the least sensitive point problem, we use three different algorithmic approaches to find a solution.
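As a hedged illustration of the least-sensitive-point problem (the paper compares three different algorithmic approaches; none of this is their code), a naive exhaustive search over network nodes could look as follows, with `simulate_leak` and `detector_score` standing in for a hydraulic simulator and a trained leakage detector.

```python
def least_sensitive_point(nodes, simulate_leak, detector_score, leak_size=5.0):
    """Brute-force search for the node at which a fixed-size leak produces the
    weakest detector response, i.e. the least sensitive point."""
    best_node, best_score = None, float("inf")
    for node in nodes:
        pressures = simulate_leak(node, leak_size)  # hypothetical hydraulic simulation
        score = detector_score(pressures)           # hypothetical detector response
        if score < best_score:
            best_node, best_score = node, score
    return best_node, best_score
```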
arXiv Detail & Related papers (2023-05-25T12:05:18Z)
- Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z)
- PointCA: Evaluating the Robustness of 3D Point Cloud Completion Models Against Adversarial Examples [63.84378007819262]
We propose PointCA, the first adversarial attack against 3D point cloud completion models.
PointCA can generate adversarial point clouds that maintain high similarity with the original ones.
We show that PointCA can cause a performance degradation from 77.9% to 16.7%, with the structure chamfer distance kept below 0.01.
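A minimal sketch of the Chamfer-distance budget mentioned above, using a common symmetric definition (the paper's "structure chamfer distance" may differ in detail); this is not the authors' code.

```python
import torch

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N, 3) and q (M, 3):
    mean nearest-neighbour squared distance, averaged over both directions."""
    d = torch.cdist(p, q) ** 2                # (N, M) pairwise squared distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# e.g. accept an adversarial partial cloud only if it stays within the reported budget:
# assert chamfer_distance(adv_partial, clean_partial) < 0.01
```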
arXiv Detail & Related papers (2022-11-22T14:15:41Z)
- Dynamics-aware Adversarial Attack of 3D Sparse Convolution Network [75.1236305913734]
We investigate the dynamics-aware adversarial attack problem in deep neural networks.
Most existing adversarial attack algorithms are designed under a basic assumption -- the network architecture is fixed throughout the attack process.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
arXiv Detail & Related papers (2021-12-17T10:53:35Z)
- Spatially Focused Attack against Spatiotemporal Graph Neural Networks [8.665638585791235]
Deep spatiotemporal graph neural networks (GNNs) have achieved great success in traffic forecasting applications.
If GNNs are vulnerable in real-world prediction applications, a hacker can easily manipulate the results and cause serious traffic congestion and even a city-scale breakdown.
arXiv Detail & Related papers (2021-09-10T01:31:53Z)
- LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks [123.5839352227726]
This paper proposes a novel label guided adversarial network (LG-GAN) for real-time flexible targeted point cloud attack.
To the best of our knowledge, this is the first generation-based 3D point cloud attack method.
arXiv Detail & Related papers (2020-11-01T17:17:10Z)
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
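As a hedged sketch of the optimization described above (not the released IF-Defense code): the attacked point coordinates are optimized directly so that points lie on the surface predicted by a learned implicit function (geometry-aware term) while staying evenly spread (distribution-aware term). `implicit_fn` is a hypothetical pre-trained implicit network returning per-point signed distances, and the loss weights are illustrative.

```python
import torch

def if_defense_restore(points, implicit_fn, steps=200, lr=0.01, lam=0.5):
    """Directly optimize attacked point coordinates with a geometry-aware loss
    (points should lie on the zero level set of the implicit function) and a
    distribution-aware loss (points should not collapse onto each other)."""
    pts = points.clone().requires_grad_(True)
    opt = torch.optim.Adam([pts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        geometry = implicit_fn(pts).abs().mean()           # pull points onto the implicit surface
        nn_dist = torch.cdist(pts, pts) + torch.eye(len(pts), device=pts.device) * 1e6
        distribution = -nn_dist.min(dim=1).values.mean()   # repulsion term for even spacing
        loss = geometry + lam * distribution
        loss.backward()
        opt.step()
    return pts.detach()
```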
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
- Minimal Adversarial Examples for Deep Learning on 3D Point Clouds [25.569519066857705]
In this work, we explore adversarial attacks for point cloud-based neural networks.
We propose a unified formulation for adversarial point cloud generation that can generalise two different attack strategies.
Our method achieves the state-of-the-art performance with higher than 89% and 90% of attack success rate on synthetic and real-world data respectively.
arXiv Detail & Related papers (2020-08-27T11:50:45Z)
- Monocular Depth Estimators: Vulnerabilities and Attacks [6.821598757786515]
Recent advancements of neural networks lead to reliable monocular depth estimation.
Deep neural networks are highly vulnerable to adversarial samples for tasks like classification, detection and segmentation.
In this paper, we investigate how state-of-the-art monocular depth estimation networks can be annihilated by adversarial attacks.
arXiv Detail & Related papers (2020-05-28T21:25:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.