Neural radiance fields in the industrial and robotics domain:
applications, research opportunities and use cases
- URL: http://arxiv.org/abs/2308.07118v2
- Date: Wed, 16 Aug 2023 10:46:35 GMT
- Title: Neural radiance fields in the industrial and robotics domain:
applications, research opportunities and use cases
- Authors: Eugen Šlapak, Enric Pardo, Matúš Dopiriak, Taras Maksymyuk
and Juraj Gazda
- Abstract summary: Neural radiance fields (NeRFs) have emerged as a promising approach for learning 3D scene representations from provided 2D training images.
We present a series of proof-of-concept experiments that demonstrate the potential of NeRFs in the industrial domain.
- Score: 0.9642500063568189
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of technologies, such as extended reality (XR), has
increased the demand for high-quality three-dimensional (3D) graphical
representations. Industrial 3D applications encompass computer-aided design
(CAD), finite element analysis (FEA), scanning, and robotics. However, current
methods employed for industrial 3D representations suffer from high
implementation costs and reliance on manual human input for accurate 3D
modeling. To address these challenges, neural radiance fields (NeRFs) have
emerged as a promising approach for learning 3D scene representations from
provided 2D training images. Despite growing interest in NeRFs, their
potential applications in various industrial subdomains remain largely
unexplored.
In this paper, we deliver a comprehensive examination of NeRF industrial
applications while also providing direction for future research endeavors. We
also present a series of proof-of-concept experiments that demonstrate the
potential of NeRFs in the industrial domain. These experiments include
NeRF-based video compression techniques and using NeRFs for 3D motion
estimation in the context of collision avoidance. In the video compression
experiment, our results show compression savings of up to 48% and 74% for
resolutions of 1920x1080 and 300x168, respectively. The motion estimation
experiment used a 3D animation of a robotic arm to train Dynamic-NeRF (D-NeRF)
and achieved an average peak signal-to-noise ratio (PSNR) of 23 dB and a
structural similarity index measure (SSIM) of 0.97 on the resulting disparity
maps.
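
As an illustration of how such figures can be reproduced, the sketch below (not code from the paper) computes the relative compression saving from encoded sizes, plus PSNR and SSIM between a rendered and a reference disparity map. The SSIM here is a simplified single-window variant of the usual 11x11 windowed measure.

    import numpy as np

    def compression_saving(original_bytes: int, nerf_bytes: int) -> float:
        """Relative size reduction, e.g. 0.48 for a 48% saving."""
        return 1.0 - nerf_bytes / original_bytes

    def psnr(reference: np.ndarray, rendered: np.ndarray, peak: float = 1.0) -> float:
        """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
        mse = np.mean((reference - rendered) ** 2)
        return float(10.0 * np.log10(peak ** 2 / mse))

    def ssim_global(reference: np.ndarray, rendered: np.ndarray, peak: float = 1.0) -> float:
        """Simplified single-window SSIM (the standard form slides an 11x11 window)."""
        c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
        mu_x, mu_y = reference.mean(), rendered.mean()
        var_x, var_y = reference.var(), rendered.var()
        cov_xy = np.mean((reference - mu_x) * (rendered - mu_y))
        return float(((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) /
                     ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

    # Example: a 48% saving means the NeRF representation is about half the video size.
    print(compression_saving(original_bytes=100_000_000, nerf_bytes=52_000_000))  # 0.48
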
Related papers
- GHNeRF: Learning Generalizable Human Features with Efficient Neural Radiance Fields [12.958200963257381]
We introduce GHNeRF, designed to address the limitations of existing human NeRFs by learning the 2D/3D joint locations of human subjects within the NeRF representation.
GHNeRF uses a pre-trained 2D encoder streamlined to extract essential human features from 2D images, which are then incorporated into the NeRF framework.
Our results show that GHNeRF can achieve state-of-the-art results in near real-time.
arXiv Detail & Related papers (2024-04-09T12:11:25Z)
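
A minimal sketch of the feature-conditioning idea in the GHNeRF entry above: per-point features from a pre-trained 2D encoder are concatenated with the positional encoding before the NeRF MLP. The layer sizes, the 17-joint head, and all names are illustrative assumptions, not GHNeRF's actual architecture.

    import torch
    import torch.nn as nn

    class FeatureConditionedNeRF(nn.Module):
        """Toy NeRF MLP consuming per-point features from a 2D image encoder."""

        def __init__(self, pos_enc_dim: int = 63, feat_dim: int = 256, hidden: int = 256):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(pos_enc_dim + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
            )
            self.density_head = nn.Linear(hidden, 1)   # volume density
            self.rgb_head = nn.Linear(hidden, 3)       # radiance
            self.joint_head = nn.Linear(hidden, 17)    # per-joint logits (hypothetical)

        def forward(self, pos_enc: torch.Tensor, img_feat: torch.Tensor):
            # Condition the field on image features by simple concatenation.
            h = self.mlp(torch.cat([pos_enc, img_feat], dim=-1))
            return self.density_head(h), self.rgb_head(h), self.joint_head(h)
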
- NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields [57.617972778377215]
We show how to generate effective 3D representations from posed RGB images.
We pretrain this representation at scale on our proposed curated posed-RGB data, totaling over 1.8 million images.
Our novel self-supervised pretraining for NeRFs, NeRF-MAE, scales remarkably well and improves performance on various challenging 3D tasks.
arXiv Detail & Related papers (2024-04-01T17:59:55Z)
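
One way to read the masked-autoencoder pretraining in the NeRF-MAE entry above is sketched below: random cubic patches of a feature grid sampled from a NeRF are zeroed out, and a reconstruction loss would be applied on the masked patches. The grid layout, patch size, and mask ratio are assumptions, not the paper's settings.

    import torch

    def mask_voxel_patches(grid: torch.Tensor, patch: int = 4, mask_ratio: float = 0.75):
        """Zero out random cubic patches of a (C, D, H, W) feature grid; return the
        masked grid and a boolean patch mask (True = masked) for the loss."""
        c, d, h, w = grid.shape
        assert d % patch == 0 and h % patch == 0 and w % patch == 0
        nd, nh, nw = d // patch, h // patch, w // patch
        mask = torch.rand(nd, nh, nw) < mask_ratio
        # Expand the patch-level mask to voxel resolution.
        dense = (mask.repeat_interleave(patch, 0)
                     .repeat_interleave(patch, 1)
                     .repeat_interleave(patch, 2))
        masked = grid.clone()
        masked[:, dense] = 0.0
        return masked, mask
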
- Hyperspectral Neural Radiance Fields [11.485829401765521]
We propose a hyperspectral 3D reconstruction method using Neural Radiance Fields (NeRFs).
NeRFs have seen widespread success in creating high quality volumetric 3D representations of scenes captured by a variety of camera models.
We show that our hyperspectral NeRF approach enables creating fast, accurate volumetric 3D hyperspectral scenes.
arXiv Detail & Related papers (2024-03-21T21:18:08Z)
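
The natural extension suggested by the hyperspectral entry above is replacing the 3-channel RGB output with per-band radiance. A toy output head under that assumption (the band count and activations are illustrative, not the paper's design):

    import torch
    import torch.nn as nn

    class HyperspectralHead(nn.Module):
        """Emit radiance for `num_bands` spectral channels instead of 3 RGB values."""

        def __init__(self, hidden: int = 256, num_bands: int = 128):
            super().__init__()
            self.spectral = nn.Linear(hidden, num_bands)  # per-band radiance
            self.density = nn.Linear(hidden, 1)           # shared volume density

        def forward(self, h: torch.Tensor):
            # h is the NeRF MLP's hidden feature for a sample point.
            return torch.sigmoid(self.spectral(h)), torch.relu(self.density(h))
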
- NeRF-Det: Learning Geometry-Aware Volumetric Representation for Multi-View 3D Object Detection [65.02633277884911]
We present NeRF-Det, a novel method for indoor 3D detection with posed RGB images as input.
Our method makes use of NeRF in an end-to-end manner to explicitly estimate 3D geometry, thereby improving 3D detection performance.
arXiv Detail & Related papers (2023-07-27T04:36:16Z)
- BeyondPixels: A Comprehensive Review of the Evolution of Neural Radiance Fields [1.1531932979578041]
NeRF, short for Neural Radiance Fields, is a recent innovation that uses AI algorithms to create 3D objects from 2D images.
This survey reviews recent advances in NeRF and categorizes them according to their architectural designs.
arXiv Detail & Related papers (2023-06-05T16:10:21Z)
- Neural Radiance Fields: Past, Present, and Future [0.0]
An attempt made by Mildenhall et al. in their paper about NeRFs led to a boom in Computer Graphics, Robotics, and Computer Vision; high-resolution, low-storage Augmented Reality and Virtual Reality-based 3D models have gained traction among researchers, with more than 1000 NeRF-related preprints published.
This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world.
arXiv Detail & Related papers (2023-04-20T02:17:08Z)
- NeRF-GAN Distillation for Efficient 3D-Aware Generation with Convolutions [97.27105725738016]
The integration of Neural Radiance Fields (NeRFs) and generative models, such as Generative Adversarial Networks (GANs), has transformed 3D-aware generation from single-view images.
We propose a simple and effective method, based on re-using the well-disentangled latent space of a pre-trained NeRF-GAN in a pose-conditioned convolutional network to directly generate 3D-consistent images corresponding to the underlying 3D representations.
arXiv Detail & Related papers (2023-03-22T18:59:48Z)
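
A rough sketch of a pose-conditioned convolutional generator in the spirit of the NeRF-GAN distillation entry above, mapping a (frozen) NeRF-GAN latent code plus a camera pose to an image. The latent and pose dimensions, layer sizes, and output resolution are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class PoseConditionedGenerator(nn.Module):
        """Toy conv generator: (latent, pose) -> image, trained against NeRF-GAN renderings."""

        def __init__(self, latent_dim: int = 512, pose_dim: int = 6):
            super().__init__()
            self.fc = nn.Linear(latent_dim + pose_dim, 256 * 4 * 4)
            self.deconv = nn.Sequential(
                nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
            )

        def forward(self, z: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
            h = self.fc(torch.cat([z, pose], dim=-1)).view(-1, 256, 4, 4)
            return self.deconv(h)  # (N, 3, 32, 32) image
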
- NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z)
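
A skeleton of the test-time distillation loop described in the NerfDiff entry above, under the assumption that `nerf` is a differentiable renderer (an nn.Module taking a camera pose) and `cdm_refine` is a stand-in for the 3D-aware conditional diffusion model; neither matches the paper's actual interfaces.

    import torch

    def nerf_guided_distillation(nerf, cdm_refine, poses, steps: int = 1000, lr: float = 1e-3):
        """Render virtual views with the NeRF, refine them with the CDM,
        and finetune the NeRF on the refined views."""
        opt = torch.optim.Adam(nerf.parameters(), lr=lr)
        for _ in range(steps):
            pose = poses[torch.randint(len(poses), (1,)).item()]
            rendered = nerf(pose)                    # current NeRF rendering at a virtual view
            with torch.no_grad():
                target = cdm_refine(rendered, pose)  # CDM-improved version of that view
            loss = torch.nn.functional.mse_loss(rendered, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
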
- 3D Reconstruction of Non-cooperative Resident Space Objects using Instant NGP-accelerated NeRF and D-NeRF [0.0]
This work adapts Instant NeRF and D-NeRF, variations of the neural radiance field (NeRF) algorithm, to the problem of mapping RSOs in orbit.
The algorithms are evaluated for 3D reconstruction quality and hardware requirements using datasets of images of a spacecraft mock-up.
arXiv Detail & Related papers (2023-01-22T05:26:08Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Mega-NeRF: Scalable Construction of Large-Scale NeRFs for Virtual Fly-Throughs [54.41204057689033]
We explore how to leverage neural radiance fields (NeRFs) to build interactive 3D environments from large-scale visual captures spanning buildings or even multiple city blocks collected primarily from drone data.
In contrast to the single object scenes against which NeRFs have been traditionally evaluated, this setting poses multiple challenges.
We introduce a simple clustering algorithm that partitions training images (or rather pixels) into different NeRF submodules that can be trained in parallel.
arXiv Detail & Related papers (2021-12-20T17:40:48Z)
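
A crude stand-in for the pixel-partitioning idea in the Mega-NeRF entry above: each ray (pixel) is assigned to the submodule whose spatial centroid lies closest to a point sampled along the ray. The single fixed sample depth is an assumption; the paper's clustering considers relevance along the full ray.

    import numpy as np

    def assign_rays_to_submodules(ray_origins: np.ndarray, ray_dirs: np.ndarray,
                                  centroids: np.ndarray, t: float = 1.0) -> np.ndarray:
        """Return a submodule index per ray, given (N, 3) origins/directions
        and (K, 3) submodule centroids."""
        points = ray_origins + t * ray_dirs  # (N, 3) sample positions along each ray
        # Squared distance from every sample point to every centroid: (N, K).
        d2 = ((points[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        return d2.argmin(axis=1)  # nearest centroid = responsible NeRF submodule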