NeRF-RPN: A general framework for object detection in NeRFs
- URL: http://arxiv.org/abs/2211.11646v3
- Date: Mon, 27 Mar 2023 16:40:30 GMT
- Title: NeRF-RPN: A general framework for object detection in NeRFs
- Authors: Benran Hu, Junkai Huang, Yichen Liu, Yu-Wing Tai, Chi-Keung Tang
- Abstract summary: NeRF-RPN aims to detect all bounding boxes of objects in a scene.
NeRF-RPN is a general framework and can be applied to detect objects without class labels.
- Score: 54.54613914831599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents the first significant object detection framework,
NeRF-RPN, which directly operates on NeRF. Given a pre-trained NeRF model,
NeRF-RPN aims to detect all bounding boxes of objects in a scene. By exploiting
a novel voxel representation that incorporates multi-scale 3D neural volumetric
features, we demonstrate it is possible to regress the 3D bounding boxes of
objects in NeRF directly without rendering the NeRF at any viewpoint. NeRF-RPN
is a general framework and can be applied to detect objects without class
labels. We experimented with NeRF-RPN using various backbone architectures, RPN
head designs, and loss functions, all of which can be trained end-to-end to
estimate high-quality 3D bounding boxes. To facilitate future research in
object detection for NeRF, we built a new benchmark dataset which consists of
both synthetic and real-world data with careful labeling and cleanup. Code and
dataset are available at https://github.com/lyclyc52/NeRF_RPN.
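The core idea of the abstract (query the pre-trained NeRF on a voxel grid, then regress class-agnostic 3D boxes from the resulting feature volume without rendering any viewpoint) can be illustrated with a minimal sketch. This is not the official NeRF_RPN code: the `pretrained_nerf` callable, the grid resolution, and the single-convolution head below are assumptions made for illustration only.

```python
# Minimal sketch (assumed names, not the official NeRF_RPN API) of the pipeline
# described above: sample a pre-trained NeRF on a regular voxel grid, run a small
# 3D convolutional head over the feature volume, and predict per-voxel objectness
# plus class-agnostic 3D box offsets -- no image rendering involved.
import torch
import torch.nn as nn

def nerf_to_voxel_grid(pretrained_nerf, resolution=64, bound=1.0):
    """Sample the NeRF's density/latent features on a dense grid.
    `pretrained_nerf` is assumed to map (N, 3) points to (N, C) features."""
    axis = torch.linspace(-bound, bound, resolution)
    xyz = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    with torch.no_grad():
        feats = pretrained_nerf(xyz.reshape(-1, 3))
    # Reshape to a (1, C, D, H, W) volume so it can be fed to 3D convolutions.
    vol = feats.reshape(resolution, resolution, resolution, -1)
    return vol.permute(3, 0, 1, 2).unsqueeze(0)

class RPNHead3D(nn.Module):
    """Class-agnostic 3D RPN head: objectness logits and box deltas per anchor."""
    def __init__(self, in_channels, num_anchors=1):
        super().__init__()
        self.conv = nn.Conv3d(in_channels, 256, kernel_size=3, padding=1)
        self.cls = nn.Conv3d(256, num_anchors, kernel_size=1)      # objectness
        self.reg = nn.Conv3d(256, num_anchors * 6, kernel_size=1)  # (dx, dy, dz, dw, dh, dd)

    def forward(self, volume):
        x = torch.relu(self.conv(volume))
        return self.cls(x), self.reg(x)
```

In the paper, the multi-scale 3D volumetric features pass through a full backbone before the RPN head and the regressed deltas are decoded against 3D anchors; the sketch collapses that into a single convolution for brevity. See the repository linked above for the real implementation.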
Related papers
- Explicit-NeRF-QA: A Quality Assessment Database for Explicit NeRF Model Compression [10.469092315640696]
We construct a new dataset, called Explicit-NeRF-QA, to address the challenges of studying NeRF compression.
We use 22 3D objects with diverse geometries, textures, and material complexities to train four typical explicit NeRF models.
A subjective experiment in a lab environment is conducted to collect subjective scores from 21 viewers.
arXiv Detail & Related papers (2024-07-11T04:02:05Z)
- DReg-NeRF: Deep Registration for Neural Radiance Fields [66.69049158826677]
We propose DReg-NeRF to solve the NeRF registration problem on object-centric annotated scenes without human intervention.
Our proposed method outperforms state-of-the-art point cloud registration methods by a large margin.
arXiv Detail & Related papers (2023-08-18T08:37:49Z)
- NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection [65.02633277884911]
We present NeRF-Det, a novel method for indoor 3D detection with posed RGB images as input.
Our method makes use of NeRF in an end-to-end manner to explicitly estimate 3D geometry, thereby improving 3D detection performance.
arXiv Detail & Related papers (2023-07-27T04:36:16Z)
- StegaNeRF: Embedding Invisible Information within Neural Radiance Fields [61.653702733061785]
We present StegaNeRF, a method for steganographic information embedding in NeRF renderings.
We design an optimization framework that allows accurate extraction of hidden information from images rendered by NeRF.
StegaNeRF is an initial exploration of the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings.
arXiv Detail & Related papers (2022-12-03T12:14:19Z)
- NeRF-Loc: Transformer-Based Object Localization Within Neural Radiance Fields [62.89785701659139]
We propose a transformer-based framework, NeRF-Loc, to extract 3D bounding boxes of objects in NeRF scenes.
NeRF-Loc takes a pre-trained NeRF model and camera view as input and produces labeled, oriented 3D bounding boxes of objects as output.
arXiv Detail & Related papers (2022-09-24T18:34:22Z)
- iNeRF: Inverting Neural Radiance Fields for Pose Estimation [68.91325516370013]
We present iNeRF, a framework that performs mesh-free pose estimation by "inverting" a Neural Radiance Field (NeRF).
NeRFs have been shown to be remarkably effective for the task of view synthesis.
arXiv Detail & Related papers (2020-12-10T18:36:40Z)
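As a rough illustration of the "inversion" idea in the iNeRF entry above: the NeRF is kept frozen, a camera pose is treated as the optimization variable, and gradient descent minimizes the photometric error between pixels rendered from that pose and the observed image. This is a sketch under assumptions, not the authors' implementation; `render_rays` and the 6-DoF pose parameterization are hypothetical stand-ins for a real differentiable NeRF renderer.

```python
# Rough sketch of NeRF "inversion" for pose estimation (assumed interfaces, not
# the iNeRF authors' code): the NeRF stays frozen while gradient descent updates
# a camera-pose parameter to minimize photometric error on rendered pixels.
import torch

def estimate_pose(render_rays, nerf, observed_rgb, init_pose6d, steps=200, lr=1e-2):
    # `render_rays(nerf, pose)` is a hypothetical differentiable renderer that
    # returns RGB values for a batch of pixels seen from `pose`.
    pose = init_pose6d.clone().requires_grad_(True)    # e.g. axis-angle + translation
    optimizer = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        rendered = render_rays(nerf, pose)
        loss = torch.nn.functional.mse_loss(rendered, observed_rgb)
        loss.backward()                                # gradients flow into the pose only
        optimizer.step()
    return pose.detach()
```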