High-fidelity 3D Reconstruction of Plants using Neural Radiance Field
- URL: http://arxiv.org/abs/2311.04154v1
- Date: Tue, 7 Nov 2023 17:31:27 GMT
- Title: High-fidelity 3D Reconstruction of Plants using Neural Radiance Field
- Authors: Kewei Hu, Ying Wei, Yaoqiang Pan, Hanwen Kang, Chao Chen
- Abstract summary: We present a novel plant dataset comprising real plant images from production environments.
This dataset is a first-of-its-kind initiative aimed at comprehensively exploring the advantages and limitations of NeRF in agricultural contexts.
- Score: 10.245620447865456
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate reconstruction of plant phenotypes plays a key role in optimising
sustainable farming practices in the field of Precision Agriculture (PA).
Currently, optical sensor-based approaches dominate the field, but achieving
high-fidelity 3D reconstruction of crops and plants in unstructured
agricultural environments remains challenging. Recently, a promising
development has emerged in the form of Neural Radiance Field (NeRF), a novel
method that utilises neural density fields. This technique has shown impressive
performance in various novel view synthesis tasks, but has remained
relatively unexplored in the agricultural context. In our study, we focus on
two fundamental tasks within plant phenotyping: (1) the synthesis of 2D
novel-view images and (2) the 3D reconstruction of crop and plant models. We
explore the world of neural radiance fields, in particular two SOTA methods:
Instant-NGP, which excels in generating high-quality images with impressive
training and inference speed, and Instant-NSR, which improves the reconstructed
geometry by incorporating the Signed Distance Function (SDF) during training.
In particular, we present a novel plant phenotype dataset comprising real plant
images from production environments. This dataset is a first-of-its-kind
initiative aimed at comprehensively exploring the advantages and limitations of
NeRF in agricultural contexts. Our experimental results show that NeRF
demonstrates commendable performance in the synthesis of novel-view images and
is able to achieve reconstruction results that are competitive with Reality
Capture, a leading commercial software for 3D Multi-View Stereo (MVS)-based
reconstruction. However, our study also highlights certain drawbacks of NeRF,
including relatively slow training speeds, performance limitations in cases of
insufficient sampling, and challenges in obtaining high-quality geometry in
complex setups.
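To make the two ingredients above concrete, the sketch below shows discrete NeRF-style volume rendering along a single ray from a density field, together with one common way of deriving that density from a Signed Distance Function (a logistic mapping in the spirit of NeuS-like SDF methods). This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the `inv_s` sharpness parameter, and the toy spherical SDF are assumptions, and Instant-NGP / Instant-NSR additionally rely on hash-grid encodings and their own SDF-to-density parameterisations.

```python
import numpy as np

def sdf_to_density(sdf, inv_s=64.0):
    # One common SDF-to-density mapping: a logistic bump centred on the
    # zero-level set of the SDF. Instant-NSR's exact parameterisation may differ.
    s = 1.0 / (1.0 + np.exp(-inv_s * sdf))
    return inv_s * s * (1.0 - s)

def render_ray(densities, colors, t):
    # Discrete NeRF volume rendering for one ray:
    # alpha_i = 1 - exp(-sigma_i * delta_i), T_i = prod_{j<i} (1 - alpha_j)
    deltas = np.diff(t, append=t[-1] + (t[-1] - t[-2]))
    alpha = 1.0 - np.exp(-densities * deltas)
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]
    weights = trans * alpha
    rgb = (weights[:, None] * colors).sum(axis=0)   # expected colour along the ray
    depth = (weights * t).sum()                     # expected termination depth
    return rgb, depth, weights

# Toy example: 64 samples along a ray passing through a sphere of radius 0.3.
t = np.linspace(0.5, 2.5, 64)
pts = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
sdf = np.linalg.norm(pts - np.array([1.5, 0.0, 0.0]), axis=1) - 0.3
rgb, depth, _ = render_ray(sdf_to_density(sdf), np.tile([0.1, 0.6, 0.2], (64, 1)), t)
print(rgb, depth)  # depth should land near the front surface of the sphere (around 1.2)
```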
Related papers
- Evaluating Modern Approaches in 3D Scene Reconstruction: NeRF vs Gaussian-Based Methods [4.6836510920448715]
This study explores the capabilities of Neural Radiance Fields (NeRF) and Gaussian-based methods in the context of 3D scene reconstruction.
We assess performance based on tracking accuracy, mapping fidelity, and view synthesis.
Findings reveal that NeRF excels in view synthesis, offering unique capabilities in generating new perspectives from existing data.
arXiv Detail & Related papers (2024-08-08T07:11:57Z)
- IOVS4NeRF: Incremental Optimal View Selection for Large-Scale NeRFs [3.9248546555042365]
This paper introduces an innovative incremental optimal view selection framework, IOVS4NeRF, designed to model a 3D scene within a restricted input budget.
By selecting views that offer the highest information gain, the quality of novel view synthesis can be enhanced with minimal additional resources (a greedy-selection sketch follows this list).
arXiv Detail & Related papers (2024-07-26T09:11:25Z)
- 3D Reconstruction and New View Synthesis of Indoor Environments based on a Dual Neural Radiance Field [17.709306549110153]
We develop a dual neural radiance field (Du-NeRF) to simultaneously achieve high-quality geometry reconstruction and view rendering.
One of the innovative features of Du-NeRF is that it decouples a view-independent component from the density field and uses it as a label to supervise the learning process of the SDF field.
arXiv Detail & Related papers (2024-01-26T09:21:46Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- ResFields: Residual Neural Fields for Spatiotemporal Signals [61.44420761752655]
ResFields is a novel class of networks specifically designed to effectively represent complex temporal signals.
We conduct comprehensive analysis of the properties of ResFields and propose a matrix factorization technique to reduce the number of trainable parameters.
We demonstrate the practical utility of ResFields by showcasing its effectiveness in capturing dynamic 3D scenes from sparse RGBD cameras.
arXiv Detail & Related papers (2023-09-06T16:59:36Z)
- Neural Radiance Fields (NeRFs): A Review and Some Recent Developments [0.0]
Neural Radiance Field (NeRF) is a framework that represents a 3D scene in the weights of a fully connected neural network.
NeRFs have become a popular field of research as recent developments have been made that expand the performance and capabilities of the base framework.
arXiv Detail & Related papers (2023-04-30T03:23:58Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- Factor Fields: A Unified Framework for Neural Fields and Beyond [50.29013417187368]
We present Factor Fields, a novel framework for modeling and representing signals.
Our framework accommodates several recent signal representations including NeRF, Plenoxels, EG3D, Instant-NGP, and TensoRF.
Our representation achieves better image approximation quality on 2D image regression tasks, higher geometric quality when reconstructing 3D signed distance fields, and higher compactness for radiance field reconstruction tasks.
arXiv Detail & Related papers (2023-02-02T17:06:50Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Neural 3D Reconstruction in the Wild [86.6264706256377]
We introduce a new method that enables efficient and accurate surface reconstruction from Internet photo collections.
We present a new benchmark and protocol for evaluating reconstruction performance on such in-the-wild scenes.
arXiv Detail & Related papers (2022-05-25T17:59:53Z)
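The view-selection idea summarised for IOVS4NeRF above (picking, at each step, the candidate view with the highest information gain under a fixed input budget) can be sketched as a simple greedy loop. The gain function below is a hypothetical stand-in that just sums a per-view uncertainty map; IOVS4NeRF's actual criterion, data structures, and retraining schedule are defined in that paper, not here.

```python
import numpy as np

def information_gain(view_id, uncertainty_maps):
    # Hypothetical scoring hook, standing in for IOVS4NeRF's gain criterion:
    # the "gain" of a candidate view is its total predicted rendering uncertainty.
    return float(uncertainty_maps[view_id].sum())

def select_views(candidates, uncertainty_maps, budget):
    # Greedy incremental selection under a fixed input budget: at each step,
    # add the candidate view with the highest information gain. In a full
    # pipeline the NeRF would be refined and the uncertainty re-estimated
    # after each pick before selecting the next view.
    selected, remaining = [], set(candidates)
    for _ in range(budget):
        if not remaining:
            break
        best = max(remaining, key=lambda v: information_gain(v, uncertainty_maps))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage: 10 candidate views with random per-pixel uncertainty maps.
rng = np.random.default_rng(0)
maps = {v: rng.random((4, 4)) for v in range(10)}
print(select_views(range(10), maps, budget=3))
```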