Evaluating NeRFs for 3D Plant Geometry Reconstruction in Field Conditions
- URL: http://arxiv.org/abs/2402.10344v1
- Date: Thu, 15 Feb 2024 22:17:17 GMT
- Title: Evaluating NeRFs for 3D Plant Geometry Reconstruction in Field Conditions
- Authors: Muhammad Arbab Arshad, Talukder Jubery, James Afful, Anushrut Jignasu,
Aditya Balu, Baskar Ganapathysubramanian, Soumik Sarkar, Adarsh Krishnamurthy
- Abstract summary: We evaluate different Neural Radiance Fields (NeRFs) techniques for reconstructing 3D plants in varied environments.
NeRFs achieve a 74.65% F1 score with 30 minutes of training on the GPU, highlighting the efficiency and accuracy of NeRFs in challenging environments.
- Score: 10.199205707001436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We evaluate different Neural Radiance Fields (NeRFs) techniques for
reconstructing 3D plants in varied environments, from indoor settings to
outdoor fields. Traditional techniques often struggle to capture the complex
details of plants, which is crucial for botanical and agricultural
understanding. We evaluate three scenarios with increasing complexity and
compare the results with the point cloud obtained using LiDAR as ground truth
data. In the most realistic field scenario, the NeRF models achieve a 74.65% F1
score with 30 minutes of training on the GPU, highlighting the efficiency and
accuracy of NeRFs in challenging environments. These findings not only
demonstrate the potential of NeRF in detailed and realistic 3D plant modeling
but also suggest practical approaches for enhancing the speed and efficiency of
the 3D reconstruction process.
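The F1 score reported above compares the NeRF-reconstructed point cloud against the LiDAR ground truth. The paper does not spell out its matching procedure here, but point-cloud F1 is conventionally the harmonic mean of precision (fraction of reconstructed points within a distance threshold of some ground-truth point) and recall (the reverse). A minimal sketch, with an illustrative threshold `tau` and brute-force matching as assumptions:

```python
# Hedged sketch of a point-cloud F1 metric against LiDAR ground truth.
# The threshold `tau` and the brute-force nearest-neighbor matching are
# illustrative assumptions, not the paper's exact evaluation protocol.
import numpy as np

def point_cloud_f1(pred, gt, tau=0.01):
    """F1 between two (N, 3) point clouds: harmonic mean of precision
    (pred points within tau of gt) and recall (gt points within tau of pred)."""
    # Pairwise Euclidean distances, shape (len(pred), len(gt)).
    # Fine for small clouds; a k-d tree would be used at real scale.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)
    precision = np.mean(d.min(axis=1) <= tau)  # reconstructed points matched to GT
    recall = np.mean(d.min(axis=0) <= tau)     # GT points covered by reconstruction
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy check: a cloud compared with itself scores a perfect 1.0.
cloud = np.random.rand(100, 3)
print(point_cloud_f1(cloud, cloud))  # 1.0
```

The choice of `tau` strongly affects the score, so reported F1 values are only comparable when the threshold (and any point-cloud normalization) matches.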
Related papers
- Mesh2NeRF: Direct Mesh Supervision for Neural Radiance Field Representation and Generation [51.346733271166926]
Mesh2NeRF is an approach to derive ground-truth radiance fields from textured meshes for 3D generation tasks.
We validate the effectiveness of Mesh2NeRF across various tasks.
arXiv Detail & Related papers (2024-03-28T11:22:53Z)
- Exploring Accurate 3D Phenotyping in Greenhouse through Neural Radiance Fields [6.283117378688054]
Traditional phenotyping in controlled laboratory environments, while valuable, falls short in understanding plant growth under real-world conditions.
This study investigates a learning-based phenotyping method using the Neural Radiance Field to achieve accurate in-situ phenotyping of pepper plants in greenhouse environments.
arXiv Detail & Related papers (2024-03-24T02:15:14Z)
- ReconFusion: 3D Reconstruction with Diffusion Priors [104.73604630145847]
We present ReconFusion to reconstruct real-world scenes using only a few photos.
Our approach leverages a diffusion prior for novel view synthesis, trained on synthetic and multiview datasets.
Our method synthesizes realistic geometry and texture in underconstrained regions while preserving the appearance of observed regions.
arXiv Detail & Related papers (2023-12-05T18:59:58Z)
- High-fidelity 3D Reconstruction of Plants using Neural Radiance Field [10.245620447865456]
We present a novel plant dataset comprising real plant images from production environments.
This dataset is a first-of-its-kind initiative aimed at comprehensively exploring the advantages and limitations of NeRF in agricultural contexts.
arXiv Detail & Related papers (2023-11-07T17:31:27Z)
- HarmonicNeRF: Geometry-Informed Synthetic View Augmentation for 3D Scene Reconstruction in Driving Scenarios [2.949710700293865]
HarmonicNeRF is a novel approach for outdoor self-supervised monocular scene reconstruction.
It capitalizes on the strengths of NeRF and enhances surface reconstruction accuracy by augmenting the input space with geometry-informed synthetic views.
Our approach establishes new benchmarks in synthesizing novel depth views and reconstructing scenes, significantly outperforming existing methods.
arXiv Detail & Related papers (2023-10-09T07:42:33Z)
- LiDAR-NeRF: Novel LiDAR View Synthesis via Neural Radiance Fields [112.62936571539232]
We introduce a new task, novel view synthesis for LiDAR sensors.
Traditional model-based LiDAR simulators with style-transfer neural networks can be applied to render novel views.
We use a neural radiance field (NeRF) to facilitate the joint learning of geometry and the attributes of 3D points.
arXiv Detail & Related papers (2023-04-20T15:44:37Z)
- Clean-NeRF: Reformulating NeRF to account for View-Dependent Observations [67.54358911994967]
This paper proposes Clean-NeRF for accurate 3D reconstruction and novel view rendering in complex scenes.
Clean-NeRF can be implemented as a plug-in that can immediately benefit existing NeRF-based methods without additional input.
arXiv Detail & Related papers (2023-03-26T12:24:31Z)
- DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z)
- AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no noticeable training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.