Adversarial Patch Attacks on Vision-Based Cargo Occupancy Estimation via Differentiable 3D Simulation
- URL: http://arxiv.org/abs/2511.19254v1
- Date: Mon, 24 Nov 2025 16:05:40 GMT
- Title: Adversarial Patch Attacks on Vision-Based Cargo Occupancy Estimation via Differentiable 3D Simulation
- Authors: Mohamed Rissal Hedna, Sesugh Samuel Nder,
- Abstract summary: We study the feasibility of such attacks on a convolutional cargo-occupancy classifier using fully simulated 3D environments. Our experiments demonstrate that 3D-optimized patches achieve high attack success rates, especially in a denial-of-service scenario. This is the first study to investigate adversarial patch attacks for cargo-occupancy estimation in physically realistic, fully simulated 3D scenes.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer vision systems are increasingly adopted in modern logistics operations, including the estimation of trailer occupancy for planning, routing, and billing. Although effective, such systems may be vulnerable to physical adversarial attacks, particularly adversarial patches that can be printed and placed on interior surfaces. In this work, we study the feasibility of such attacks on a convolutional cargo-occupancy classifier using fully simulated 3D environments. Using Mitsuba 3 for differentiable rendering, we optimize patch textures across variations in geometry, lighting, and viewpoint, and compare their effectiveness to a 2D compositing baseline. Our experiments demonstrate that 3D-optimized patches achieve high attack success rates, especially in a denial-of-service scenario (empty to full), where success reaches 84.94 percent. Concealment attacks (full to empty) prove more challenging but still reach 30.32 percent. We analyze the factors influencing attack success, discuss implications for the security of automated logistics pipelines, and highlight directions for strengthening physical robustness. To our knowledge, this is the first study to investigate adversarial patch attacks for cargo-occupancy estimation in physically realistic, fully simulated 3D scenes.
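The abstract describes optimizing patch textures through a differentiable renderer while averaging over variations in geometry, lighting, and viewpoint, i.e. expectation-over-transformations patch optimization. The paper's own pipeline uses Mitsuba 3 and a convolutional classifier; the sketch below is only an illustrative stand-in: a toy linear "occupancy" classifier, a 2D compositing step in place of 3D rendering, and random brightness scaling as the only transformation. All names, sizes, and hyperparameters here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, P = 16, 16, 4                  # toy image and patch sizes
w = rng.normal(size=(H, W))          # weights of a toy linear "occupancy" classifier
y0, x0 = 6, 6                        # fixed patch placement on the trailer wall

def full_prob(img):
    """Probability the toy classifier predicts 'full' (logistic score)."""
    z = np.sum(img * w)
    return 1.0 / (1.0 + np.exp(-z))

def composite(img, patch):
    """2D compositing baseline: paste the patch onto the scene image."""
    out = img.copy()
    out[y0:y0 + P, x0:x0 + P] = patch
    return out

empty = np.zeros((H, W))             # stand-in for an empty-trailer image
patch = np.full((P, P), 0.5)         # gray initialization
p_init = full_prob(composite(empty, patch))

# Denial-of-service objective: make the 'empty' image classify as 'full'.
# Expectation over transformations: average the gradient of -log P(full)
# over random brightness scales (a crude proxy for lighting variation).
for _ in range(100):
    grad = np.zeros_like(patch)
    for _ in range(8):
        a = rng.uniform(0.7, 1.3)    # sampled lighting scale
        p = full_prob(a * composite(empty, patch))
        # Analytic gradient: d(-log p)/d(patch) = -(1 - p) * a * w on the patch region
        grad += -(1.0 - p) * a * w[y0:y0 + P, x0:x0 + P]
    # Signed-gradient step, keeping the texture printable in [0, 1]
    patch = np.clip(patch - 0.05 * np.sign(grad), 0.0, 1.0)

p_final = full_prob(composite(empty, patch))
print(f"P(full) before: {p_init:.3f}, after: {p_final:.3f}")
```

In the paper's setting the compositing step is replaced by Mitsuba 3's differentiable rendering, so the gradient flows through geometry, lighting, and viewpoint rather than a fixed paste location; the structure of the loop (sample transformations, average gradients, project back to a printable range) stays the same.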
Related papers
- 3DGAA: Realistic and Robust 3D Gaussian-based Adversarial Attack for Autonomous Driving [17.849564138275845]
We propose 3D Gaussian-based Adversarial Attack (3DGAA), a novel adversarial object generation framework. Unlike prior works that rely on patches or texture optimization, 3DGAA jointly perturbs both geometric and appearance attributes. We show that 3DGAA reduces the detection mAP from 87.21% to 7.38%, significantly outperforming existing 3D physical attacks.
arXiv Detail & Related papers (2025-07-14T07:27:52Z)
- 3D Gaussian Splatting Driven Multi-View Robust Physical Adversarial Camouflage Generation [50.03578546845548]
Physical adversarial attack methods expose the vulnerabilities of deep neural networks and pose a significant threat to safety-critical scenarios such as autonomous driving. Camouflage-based physical attacks are more promising than patch-based attacks, offering stronger adversarial effectiveness in complex physical environments. We propose a physical attack framework based on 3D Gaussian Splatting (3DGS), named PGA, which provides rapid and precise reconstruction with few images.
arXiv Detail & Related papers (2025-07-02T05:10:16Z) - AdvReal: Physical Adversarial Patch Generation Framework for Security Evaluation of Object Detection Systems [13.653653250544004]
We propose a unified joint adversarial training framework for both 2D and 3D domains. We develop a realism enhancement mechanism that incorporates non-rigid deformation modeling and texture remapping. Our method achieves an average attack success rate (ASR) of 70.13% on YOLOv12 in physical scenarios.
arXiv Detail & Related papers (2025-05-22T08:54:03Z) - Poison-splat: Computation Cost Attack on 3D Gaussian Splatting [90.88713193520917]
We reveal a significant security vulnerability that has been largely overlooked in 3DGS. The adversary can poison the input images to drastically increase the memory and time needed for 3DGS training. Such a computation cost attack is achieved by addressing a bi-level optimization problem through three tailored strategies.
arXiv Detail & Related papers (2024-10-10T17:57:29Z) - Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? [7.717537870226507]
We develop a new metric called mean Attack Success over Transformations (mAST) to evaluate patch attack robustness and invariance.
We conduct a sensitivity analysis which provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera.
We provide new insights into the existence of a fundamental cutoff limit in patch attack effectiveness that depends on the extent of out-of-plane rotation angles.
arXiv Detail & Related papers (2021-08-16T17:02:38Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.