Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation
- URL: http://arxiv.org/abs/2310.03125v1
- Date: Wed, 4 Oct 2023 19:35:56 GMT
- Title: Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation
- Authors: Yihan Wu, Brandon Y. Feng, Heng Huang
- Abstract summary: We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce an innovative method of safeguarding user privacy
against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are
imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to
accurately reconstruct a 3D scene. To achieve this, we devise a bi-level
optimization algorithm incorporating a Projected Gradient Descent (PGD)-based
spatial deformation. We extensively test our approach on two common NeRF
benchmark datasets consisting of 29 real-world scenes with high-quality images.
Our results compellingly demonstrate that our privacy-preserving method
significantly impairs NeRF's performance across these benchmark datasets.
Additionally, we show that our method is adaptable and versatile, functioning
across various perturbation strengths and NeRF architectures. This work offers
valuable insights into NeRF's vulnerabilities and emphasizes the need to
account for such potential privacy risks when developing robust 3D scene
reconstruction algorithms. Our study contributes to the larger conversation
surrounding responsible AI and generative machine learning, aiming to protect
user privacy and respect creative ownership in the digital age.
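To make the method above concrete, the sketch below illustrates one plausible reading of the PGD-based spatial-deformation attack in PyTorch: a per-pixel displacement field warps each observed view, and PGD ascends on a surrogate NeRF's loss while projecting the displacement back onto a small eps-ball to keep it imperceptible. The surrogate's `reconstruction_loss` method is a hypothetical stand-in, and the paper's full bi-level algorithm (which retrains the NeRF on the poisoned views in the inner loop) is collapsed into a fixed surrogate loss for brevity.

```python
# Minimal, illustrative sketch of PGD-based spatial-deformation poisoning.
# The surrogate NeRF and its reconstruction_loss are hypothetical stand-ins;
# the paper's actual bi-level algorithm may differ in detail.
import torch
import torch.nn.functional as F

def deform(images, flow):
    """Warp images (N, C, H, W) with a per-pixel displacement field (N, H, W, 2)."""
    n, _, h, w = images.shape
    # Base sampling grid in [-1, 1] x [-1, 1], as expected by grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij"
    )
    base = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    # Add the learned displacement and resample (differentiable warp).
    return F.grid_sample(images, base + flow, align_corners=True)

def pgd_poison(images, surrogate_nerf, eps=0.01, alpha=0.002, steps=10):
    """Outer loop of the (simplified) bi-level attack: PGD on the flow field."""
    n, _, h, w = images.shape
    flow = torch.zeros(n, h, w, 2, requires_grad=True)  # spatial perturbation
    for _ in range(steps):
        # Hypothetical API: how well the surrogate NeRF fits the poisoned views.
        loss = surrogate_nerf.reconstruction_loss(deform(images, flow))
        grad, = torch.autograd.grad(loss, flow)
        with torch.no_grad():
            flow += alpha * grad.sign()  # gradient ascent: make NeRF's fit worse
            flow.clamp_(-eps, eps)       # project onto the imperceptibility ball
    return deform(images, flow).detach()
```

Under these assumptions, the eps-ball projection plays the role of the imperceptibility constraint from the abstract: the displacement of any pixel is bounded, so the warped views look unchanged to a human while the geometry NeRF must explain becomes inconsistent across views.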
Related papers
- SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization [16.460851701725392]
We present a novel approach that optimizes radiance fields with scene graphs to mitigate the influence of outlier poses.
Our method incorporates an adaptive inlier-outlier confidence estimation scheme based on scene graphs.
We also introduce an effective intersection-over-union (IoU) loss to optimize the camera pose and surface geometry.
arXiv Detail & Related papers (2024-07-17T15:50:17Z) - SCINeRF: Neural Radiance Fields from a Snapshot Compressive Image [19.58894449169074]
Snapshot Compressive Imaging (SCI) technique for recovering the underlying 3D scene representation from a single temporal compressed image.
We formulate the physical imaging process of SCI as part of the training of neural radiance fields (NeRF)
Our proposed approach surpasses the state-of-the-art methods in terms of image reconstruction and novel view image synthesis.
arXiv Detail & Related papers (2024-03-29T07:14:14Z) - Self-Evolving Neural Radiance Fields [31.124406548504794]
We propose a novel framework, dubbed Self-Evolving Neural Radiance Fields (SE-NeRF), that applies a self-training framework to neural radiance field (NeRF)
We formulate few-shot NeRF into a teacher-student framework to guide the network to learn a more robust representation of the scene.
We show that applying our self-training framework to existing models improves the quality of the rendered images and achieves state-of-the-art performance in multiple settings.
arXiv Detail & Related papers (2023-12-02T02:28:07Z) - Leveraging Neural Radiance Fields for Uncertainty-Aware Visual
Localization [56.95046107046027]
We propose to leverage Neural Radiance Fields (NeRF) to generate training samples for scene coordinate regression.
Despite NeRF's efficiency in rendering, much of the rendered data is polluted by artifacts or contains only minimal information gain.
arXiv Detail & Related papers (2023-10-10T20:11:13Z) - Clean-NeRF: Reformulating NeRF to account for View-Dependent
Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z) - Benchmarking Robustness in Neural Radiance Fields [22.631924719238963]
We analyze the robustness of NeRF-based novel view synthesis algorithms in the presence of different types of corruptions.
We find that NeRF-based models are significantly degraded in the presence of corruption, and are more sensitive to a different set of corruptions than image recognition models.
arXiv Detail & Related papers (2023-01-10T17:01:12Z) - NeRFInvertor: High Fidelity NeRF-GAN Inversion for Single-shot Real
Image Animation [66.0838349951456]
NeRF-based generative models have shown an impressive capacity for generating high-quality images with consistent 3D geometry.
We propose a universal method to surgically fine-tune these NeRF-GAN models in order to achieve high-fidelity animation of real subjects from only a single image.
arXiv Detail & Related papers (2022-11-30T18:36:45Z) - AE-NeRF: Auto-Encoding Neural Radiance Fields for 3D-Aware Object
Manipulation [24.65896451569795]
We propose a novel framework for 3D-aware object manipulation, called Auto-Encoding Neural Radiance Fields (AE-NeRF).
Our model is formulated in an auto-encoder architecture and extracts disentangled 3D attributes such as 3D shape, appearance, and camera pose from an image.
A high-quality image is rendered from the attributes through disentangled generative Neural Radiance Fields (NeRF).
arXiv Detail & Related papers (2022-04-28T11:50:18Z) - On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z) - Attribute-Guided Adversarial Training for Robustness to Natural
Perturbations [64.35805267250682]
We propose an adversarial training approach that learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)