Targeted Adversarial Attacks on Generalizable Neural Radiance Fields
- URL: http://arxiv.org/abs/2310.03578v1
- Date: Thu, 5 Oct 2023 14:59:18 GMT
- Title: Targeted Adversarial Attacks on Generalizable Neural Radiance Fields
- Authors: Andras Horvath, Csaba M. Jozsa
- Abstract summary: We show how generalizable NeRFs can be attacked by both low-intensity adversarial attacks and adversarial patches.
We also demonstrate successful targeted attacks, in which the attack forces the model to render a specific, predefined output scene.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural Radiance Fields (NeRFs) have recently emerged as a powerful tool for
3D scene representation and rendering. These data-driven models can learn to
synthesize high-quality images from sparse 2D observations, enabling realistic
and interactive scene reconstructions. However, the growing usage of NeRFs in
critical applications such as augmented reality, robotics, and virtual
environments could be threatened by adversarial attacks.
In this paper we show how generalizable NeRFs can be attacked by both
low-intensity adversarial attacks and adversarial patches, the latter of which
can be robust enough to be deployed in real-world applications. We also
demonstrate successful targeted attacks, in which the attack forces the model
to render a specific, predefined output scene.
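The targeted attack described above optimizes a bounded perturbation of the input views so that the model renders an attacker-chosen scene. The loop below is a minimal sketch of such a targeted, L-infinity-bounded (PGD-style) attack. A toy linear map stands in for the generalizable NeRF renderer, and all names and hyperparameters (`render`, `targeted_pgd`, `eps`, `alpha`) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Toy stand-in for a generalizable NeRF: a fixed linear "renderer" mapping a
# flattened source view to a flattened rendered view. With a real model the
# loop is the same, except the gradient comes from autodiff instead of the
# closed form used here. (All names here are illustrative, not the paper's.)
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) / 4.0  # hypothetical renderer weights

def render(x):
    return W @ x

def targeted_pgd(x, y_target, eps=0.03, alpha=0.005, steps=200):
    """Targeted L-infinity attack: push render(x + delta) toward y_target
    while keeping the perturbation low-intensity (|delta| <= eps)."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = render(x + delta) - y_target   # error vs. attacker's scene
        grad = 2.0 * W.T @ residual               # d(squared error)/d(delta)
        delta -= alpha * np.sign(grad)            # signed gradient step
        delta = np.clip(delta, -eps, eps)         # project onto the eps-ball
    return delta

x = rng.standard_normal(16)         # clean input view (flattened)
y_target = rng.standard_normal(16)  # attacker-chosen output scene

delta = targeted_pgd(x, y_target)
before = np.sum((render(x) - y_target) ** 2)
after = np.sum((render(x + delta) - y_target) ** 2)
```

The projection step is what keeps the attack "low-intensity": no pixel of the input view changes by more than `eps`, yet the rendered output moves measurably toward the target scene.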
Related papers
- AdvIRL: Reinforcement Learning-Based Adversarial Attacks on 3D NeRF Models [1.7205106391379021]
AdvIRL generates adversarial noise that remains robust under diverse 3D transformations.
Our approach is validated across a wide range of scenes, from small objects (e.g., bananas) to large environments (e.g., lighthouses)
arXiv Detail & Related papers (2024-12-18T01:01:30Z) - NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z) - Closing the Visual Sim-to-Real Gap with Object-Composable NeRFs [59.12526668734703]
We introduce Composable Object Volume NeRF (COV-NeRF), an object-composable NeRF model that is the centerpiece of a real-to-sim pipeline.
COV-NeRF extracts objects from real images and composes them into new scenes, generating photorealistic renderings and many types of 2D and 3D supervision.
arXiv Detail & Related papers (2024-03-07T00:00:02Z) - Generating Visually Realistic Adversarial Patch [5.41648734119775]
A high-quality adversarial patch should be realistic, position irrelevant, and printable to be deployed in the physical world.
We propose an effective attack called VRAP, to generate visually realistic adversarial patches.
VRAP constrains the patch to the neighborhood of a real image to ensure visual realism, optimizes the patch at its worst-performing position for position irrelevance, and adopts a Total Variation loss as well as a gamma transformation to make the generated patch printable without losing information.
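The Total Variation loss and gamma transformation mentioned above are standard image-processing terms. A minimal sketch of both follows, assuming the common anisotropic TV formulation; VRAP's exact weighting and pipeline are not specified here.

```python
import numpy as np

def total_variation(patch):
    """Anisotropic total-variation loss for an H x W patch: the sum of
    absolute differences between vertically and horizontally adjacent
    pixels. Penalizing it smooths the patch so it survives printing."""
    dv = np.abs(patch[1:, :] - patch[:-1, :]).sum()  # vertical neighbors
    dh = np.abs(patch[:, 1:] - patch[:, :-1]).sum()  # horizontal neighbors
    return dv + dh

def gamma_transform(patch, gamma):
    """Gamma transformation approximating printer/display nonlinearity;
    applying it during optimization keeps the patch adversarial after
    printing. (Illustrative; VRAP's gamma schedule is not given here.)"""
    return np.clip(patch, 0.0, 1.0) ** gamma

smooth = np.full((4, 4), 0.5)                    # constant patch: TV = 0
noisy = np.tile([[0.0, 1.0], [1.0, 0.0]], (2, 2))  # checkerboard: high TV
```

A constant patch has zero TV, while a 4x4 checkerboard has 24 unit-sized neighbor differences, which is why minimizing TV drives the optimizer away from high-frequency, unprintable noise.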
arXiv Detail & Related papers (2023-12-05T11:07:39Z) - Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z) - Adv3D: Generating 3D Adversarial Examples for 3D Object Detection in Driving Scenarios with NeRF [19.55666600076762]
Adv3D is the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs).
NeRFs provide photorealistic appearances and 3D accurate generation, yielding a more realistic and realizable adversarial example.
We propose primitive-aware sampling and semantic-guided regularization that enable 3D patch attacks with camouflage adversarial texture.
arXiv Detail & Related papers (2023-09-04T04:29:01Z) - Enhance-NeRF: Multiple Performance Evaluation for Neural Radiance Fields [2.5432277893532116]
Neural Radiance Fields (NeRF) can generate realistic images from any viewpoint.
NeRF-based models are susceptible to interference issues caused by colored "fog" noise.
Our approach, coined Enhance-NeRF, adopts a joint color term to balance the display of low- and high-reflectivity objects.
arXiv Detail & Related papers (2023-06-08T15:49:30Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z) - BARF: Bundle-Adjusting Neural Radiance Fields [104.97810696435766]
We propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect camera poses.
BARF can effectively optimize the neural scene representations and resolve large camera pose misalignment at the same time.
This enables view synthesis and localization of video sequences from unknown camera poses, opening up new avenues for visual localization systems.
arXiv Detail & Related papers (2021-04-13T17:59:51Z) - Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.