Adversarial Attacks Using Differentiable Rendering: A Survey
- URL: http://arxiv.org/abs/2411.09749v1
- Date: Thu, 14 Nov 2024 19:03:11 GMT
- Title: Adversarial Attacks Using Differentiable Rendering: A Survey
- Authors: Matthew Hull, Chao Zhang, Zsolt Kira, Duen Horng Chau
- Abstract summary: Differentiable rendering methods have emerged as a promising means for generating photo-realistic and physically plausible adversarial attacks.
Our survey helps researchers and practitioners better understand the vulnerabilities of computer vision systems against adversarial attacks.
- Score: 41.133107951792496
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentiable rendering methods have emerged as a promising means for generating photo-realistic and physically plausible adversarial attacks by manipulating 3D objects and scenes that can deceive deep neural networks (DNNs). Recently, differentiable rendering capabilities have evolved into a diverse landscape of libraries, such as Mitsuba and PyTorch3D, and methods, such as Neural Radiance Fields and 3D Gaussian Splatting, for solving inverse rendering problems; these share properties commonly used to attack DNNs, such as back-propagation and optimization. However, the adversarial machine learning research community has not yet fully explored or understood such capabilities for generating attacks. Some key reasons are that researchers often have different attack goals, such as misclassification or misdetection, and use different tasks to accomplish these goals by manipulating different representations in a scene, such as the mesh or texture of an object. This survey adopts a task-oriented unifying framework that systematically summarizes common tasks, such as manipulating textures, altering illumination, and modifying 3D meshes to exploit vulnerabilities in DNNs. Our framework enables easy comparison of existing works, reveals research gaps, and spotlights exciting future research directions in this rapidly evolving field. By focusing on how these tasks enable attacks on various DNN tasks such as image classification, facial recognition, object detection, optical flow, and depth estimation, our survey helps researchers and practitioners better understand the vulnerabilities of computer vision systems against photorealistic adversarial attacks that could threaten real-world applications.
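As a concrete illustration of the shared machinery the abstract refers to, below is a minimal sketch of a gradient-based texture attack driven through a differentiable renderer. The `render` and `model` callables are hypothetical stand-ins for any differentiable renderer (e.g. one built with Mitsuba or PyTorch3D) and any image classifier; the loop is a generic illustration of the attack pattern, not a specific method from the survey.

```python
import torch

def adversarial_texture_attack(render, model, texture, camera, target_class,
                               steps=200, lr=0.01, eps=0.1):
    """Optimize a bounded texture perturbation so the rendered object is
    classified as `target_class`. `render(texture, camera)` is assumed to
    return a differentiable image tensor of shape (C, H, W)."""
    texture_orig = texture.detach().clone()
    delta = torch.zeros_like(texture_orig, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        image = render((texture_orig + delta).clamp(0.0, 1.0), camera)
        logits = model(image.unsqueeze(0))
        # Targeted attack: push the prediction toward the attacker's class.
        loss = torch.nn.functional.cross_entropy(logits, target)
        optimizer.zero_grad()
        loss.backward()  # gradients flow through the renderer into delta
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the perturbation physically subtle
    return (texture_orig + delta).clamp(0.0, 1.0).detach()
```

Swapping the optimized variable (mesh vertices, lighting parameters) or the loss (misdetection, depth error) yields the other task families the survey catalogs.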
Related papers
- AdvReal: Adversarial Patch Generation Framework with Application to Adversarial Safety Evaluation of Object Detection Systems [3.1980492662961515]
In this study, we propose a unified joint adversarial training framework for both 2D and 3D samples. We introduce an adversarial sample reality enhancement approach that incorporates non-rigid surface modeling and a realistic 3D matching mechanism.
arXiv Detail & Related papers (2025-05-22T08:54:03Z) - MetaScenes: Towards Automated Replica Creation for Real-world 3D Scans [76.39726619818896]
Embodied AI (EAI) research requires high-quality, diverse 3D scenes to support skill acquisition, sim-to-real transfer, and generalization. Existing datasets demonstrate that this process heavily relies on artist-driven designs. We present MetaScenes, a large-scale, simulatable 3D scene dataset constructed from real-world scans.
arXiv Detail & Related papers (2025-05-05T06:13:25Z) - SIGHT: Synthesizing Image-Text Conditioned and Geometry-Guided 3D Hand-Object Trajectories [124.24041272390954]
Modeling hand-object interaction priors holds significant potential to advance robotic and embodied AI systems. We introduce SIGHT, a novel task focused on generating realistic and physically plausible 3D hand-object interaction trajectories from a single image. We propose SIGHT-Fusion, a novel diffusion-based image-text conditioned generative model that tackles this task by retrieving the most similar 3D object mesh from a database.
arXiv Detail & Related papers (2025-03-28T20:53:20Z) - Targeted View-Invariant Adversarial Perturbations for 3D Object Recognition [1.7205106391379021]
Adversarial attacks pose significant challenges in 3D object recognition.
This paper introduces View-Invariant Adversarial Perturbations (VIAP), a novel method for crafting robust adversarial examples.
We demonstrate the effectiveness of VIAP in both targeted and untargeted settings.
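Read as an expectation-over-transformations objective (an assumption; this summary does not spell out VIAP's exact formulation), view invariance amounts to averaging the attack loss over many camera poses so that a single perturbation works from every viewpoint. In the sketch below, `render`, `cameras`, and `model` are the same hypothetical stand-ins as in the sketch after the abstract above.

```python
import torch

def view_invariant_attack(render, model, texture, cameras, target_class,
                          steps=300, lr=0.01, eps=0.1):
    """One shared texture perturbation, optimized against all viewpoints."""
    delta = torch.zeros_like(texture, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        # Averaging over poses keeps the perturbation from overfitting
        # to any single camera; this is what makes it view-invariant.
        loss = sum(
            torch.nn.functional.cross_entropy(
                model(render((texture + delta).clamp(0.0, 1.0), cam).unsqueeze(0)),
                target)
            for cam in cameras) / len(cameras)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return delta.detach()
```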
arXiv Detail & Related papers (2024-12-17T23:23:25Z) - Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z) - ZoomNeXt: A Unified Collaborative Pyramid Network for Camouflaged Object Detection [70.11264880907652]
Recent camouflaged object detection (COD) methods attempt to segment objects visually blended into their surroundings, which is extremely complex and difficult in real-world scenarios.
We propose an effective unified collaborative pyramid network that mimics the human behavior of zooming in and out when observing vague images containing camouflaged objects.
Our framework consistently outperforms existing state-of-the-art methods in image and video COD benchmarks.
arXiv Detail & Related papers (2023-10-31T06:11:23Z) - A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples.
arXiv Detail & Related papers (2023-10-26T17:45:26Z) - 3D objects and scenes classification, recognition, segmentation, and reconstruction using 3D point cloud data: A review [5.85206759397617]
Three-dimensional (3D) point cloud analysis has become one of the most attractive subjects in realistic imaging and machine vision.
A significant effort has recently been devoted to developing novel strategies, using different techniques such as deep learning models.
Various tasks performed on 3D point cloud data are investigated, including object and scene detection, recognition, segmentation, and reconstruction.
arXiv Detail & Related papers (2023-06-09T15:45:23Z) - ArK: Augmented Reality with Knowledge Interactive Emergent Ability [115.72679420999535]
We develop an infinite agent that learns to transfer knowledge memory from general foundation models to novel domains.
The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK).
We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes.
arXiv Detail & Related papers (2023-05-01T17:57:01Z) - Mitigating Adversarial Attacks in Deepfake Detection: An Exploration of Perturbation and AI Techniques [1.0718756132502771]
Deepfakes have emerged as a potent tool to manipulate public opinion and tarnish the reputations of public figures.
Adversarial examples are subtle perturbations artfully injected into clean images or videos.
This article delves into the multifaceted world of adversarial examples, elucidating the underlying principles behind their capacity to deceive deep learning algorithms.
arXiv Detail & Related papers (2023-02-22T23:48:19Z) - Deviations in Representations Induced by Adversarial Attacks [0.0]
Research has shown that deep learning models are vulnerable to adversarial attacks.
This finding brought about a new direction in research, whereby algorithms were developed to attack and defend vulnerable networks.
We present a method for measuring and analyzing the deviations in representations induced by adversarial attacks.
arXiv Detail & Related papers (2022-11-07T17:40:08Z) - Differential Evolution based Dual Adversarial Camouflage: Fooling Human Eyes and Object Detectors [0.190365714903665]
We propose a differential evolution based dual adversarial camouflage (DE_DAC) method, composed of two stages, to fool human eyes and object detectors simultaneously.
In the first stage, we optimize the global texture to minimize the discrepancy between the rendered object and the scene images.
In the second stage, we design three loss functions to optimize the local texture, making object detectors ineffective.
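The two-stage structure lends itself to a compact sketch, given below under assumptions: `render`, `scene_image`, and `detector` are hypothetical stand-ins, and the paper's three stage-two losses are collapsed into a single detector-confidence term as a placeholder.

```python
import torch

def dual_camouflage_sketch(render, detector, texture, scene_image,
                           steps=100, lr=0.01):
    tex = texture.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([tex], lr=lr)
    # Stage 1 -- fool human eyes: match the rendered object to the scene.
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(render(tex), scene_image)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    # Stage 2 -- fool the detector: suppress its strongest objectness score
    # (a placeholder for the paper's three local-texture losses).
    for _ in range(steps):
        scores = detector(render(tex))  # assumed: per-box confidence scores
        loss = scores.max()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return tex.detach()
```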
arXiv Detail & Related papers (2022-10-17T09:07:52Z) - Uncertainty Guided Policy for Active Robotic 3D Reconstruction using Neural Radiance Fields [82.21033337949757]
This paper introduces a ray-based volumetric uncertainty estimator, which computes the entropy of the weight distribution of the color samples along each ray of the object's implicit neural representation.
We show that it is possible to infer the uncertainty of the underlying 3D geometry given a novel view with the proposed estimator.
We present a next-best-view selection policy guided by the ray-based volumetric uncertainty in neural radiance fields-based representations.
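The estimator's core quantity is easy to state in code. A minimal sketch, assuming `weights` holds the standard NeRF compositing weights (alpha times accumulated transmittance) for the color samples along each ray:

```python
import torch

def ray_weight_entropy(weights, eps=1e-10):
    """Entropy of the per-ray weight distribution along each ray.
    `weights` has shape (num_rays, num_samples); the result has shape
    (num_rays,). Diffuse weights (uncertain geometry) give high entropy;
    a single sharp surface hit gives entropy near zero."""
    p = weights / (weights.sum(dim=-1, keepdim=True) + eps)  # normalize per ray
    return -(p * torch.log(p + eps)).sum(dim=-1)
```

A next-best-view policy can then rank candidate poses by the mean entropy of the rays they would cast.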
arXiv Detail & Related papers (2022-09-17T21:28:57Z) - Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
arXiv Detail & Related papers (2022-06-14T15:40:47Z) - Beyond RGB: Scene-Property Synthesis with Neural Radiance Fields [32.200557554874784]
This paper provides a new approach to scene understanding, by leveraging the recent progress on implicit 3D representation and neural rendering.
Building upon the great success of Neural Radiance Fields (NeRFs), we introduce Scene-Property Synthesis with NeRF.
We facilitate addressing a variety of scene understanding tasks under a unified framework, including semantic segmentation, surface normal estimation, reshading, keypoint detection, and edge detection.
arXiv Detail & Related papers (2022-06-09T17:59:50Z) - MetaGraspNet: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis [78.26022688167133]
We present a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis.
The proposed dataset contains 100,000 images and 25 different object types.
We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance.
arXiv Detail & Related papers (2021-12-29T17:23:24Z) - Messing Up 3D Virtual Environments: Transferable Adversarial 3D Objects [21.86544028303682]
We study how to craft adversarial 3D objects by altering their textures, using a tool chain composed of easily accessible elements.
We show that it is possible, and indeed simple, to create adversarial objects using off-the-shelf limited surrogates.
We propose a saliency-based attack that intersects the two classes of adversarial examples in order to focus the alteration on those texture elements that are estimated to be effective in the target engine.
arXiv Detail & Related papers (2021-09-17T11:06:23Z) - Physical world assistive signals for deep neural network classifiers -- neither defense nor attack [23.138996515998347]
We introduce the concept of Assistive Signals, which are optimized to improve a model's confidence score regardless of whether it is under attack.
Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models.
We discuss how these insights can be exploited to re-think, or avoid, some patterns that might contribute to, or degrade, the detectability of objects in the real world.
arXiv Detail & Related papers (2021-05-03T04:02:48Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
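For reference, the defense named here combines a standard adversarial-training loop with feature denoising. The sketch below shows only the generic PGD-based training step (a well-known recipe, not this paper's exact setup), with the feature-denoising component omitted.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y,
                              eps=8 / 255, alpha=2 / 255, pgd_steps=7):
    """One outer step of adversarial training: inner PGD maximization,
    then a gradient step on the resulting worst-case examples."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(pgd_steps):
        loss = F.cross_entropy(model((x + delta).clamp(0.0, 1.0)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back into the eps-ball.
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    optimizer.zero_grad()
    F.cross_entropy(model((x + delta.detach()).clamp(0.0, 1.0)), y).backward()
    optimizer.step()
```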
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.