Multi-view reconstruction of bullet time effect based on improved NSFF model
- URL: http://arxiv.org/abs/2304.00330v1
- Date: Sat, 1 Apr 2023 14:58:00 GMT
- Title: Multi-view reconstruction of bullet time effect based on improved NSFF model
- Authors: Linquan Yu and Yan Gao and Yangtian Yan and Wentao Zeng
- Abstract summary: Bullet time is a type of visual effect commonly used in film, television and games.
This paper reconstructs common time-based special-effect scenes from film and television from novel viewpoints.
- Score: 2.5698815501864924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bullet time is a type of visual effect commonly used in film, television and games that makes time seem to slow down or stop while still preserving dynamic details in the scene. It usually requires multiple sets of cameras moving slowly with the subject and is synthesized with post-production techniques, which is costly and produces a one-off result. Dynamic-scene novel view reconstruction based on neural radiance fields can meet this need, but most current methods reconstruct poorly because the input images are blurred and the dynamic and static regions overfit each other. Building on the NSFF algorithm, this paper reconstructs common time-based special-effect scenes from film and television from novel viewpoints. To improve the accuracy of the reconstructed images, a blur kernel is added to the network to model and invert the blurring process, and the recovered sharp views are fed into NSFF. Optical-flow predictions are used to suppress the dynamic network in static regions, forcing the dynamic and static networks to improve their reconstructions independently and strengthening the model's ability to understand and reconstruct dynamic and static scenes. To address overfitting between dynamic and static scenes, a new dynamic-static cross-entropy loss is designed. Experiments show that, compared with the original NSFF and other novel view reconstruction algorithms for dynamic scenes, the improved NSFF-RFCT improves reconstruction accuracy and enhances the understanding of dynamic and static scenes.
Related papers
- Learn to Memorize and to Forget: A Continual Learning Perspective of Dynamic SLAM [17.661231232206028]
Simultaneous localization and mapping (SLAM) with implicit neural representations has received extensive attention.
We propose a novel SLAM framework for dynamic environments.
arXiv Detail & Related papers (2024-07-18T09:35:48Z) - D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes to the field by introducing a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
arXiv Detail & Related papers (2024-06-14T14:35:44Z) - Enhancing Dynamic CT Image Reconstruction with Neural Fields Through Explicit Motion Regularizers [0.0]
We show the benefits of introducing explicit PDE-based motion regularizers in 2D+time computed tomography for the optimization of neural fields.
We also compare neural fields against a grid-based solver and show that the former outperforms the latter.
arXiv Detail & Related papers (2024-06-03T13:07:29Z) - HFGS: 4D Gaussian Splatting with Emphasis on Spatial and Temporal High-Frequency Components for Endoscopic Scene Reconstruction [13.012536387221669]
Robot-assisted minimally invasive surgery benefits from enhanced dynamic scene reconstruction, which improves surgical outcomes.
NeRFs have been effective in scene reconstruction, but their slow inference speeds and lengthy training durations limit their applicability.
3D Gaussian Splatting (3D-GS) based methods have emerged as a recent trend, offering rapid inference capabilities and superior 3D quality.
In this paper, we propose HFGS, a novel approach for deformable endoscopic reconstruction that addresses these challenges from spatial and temporal frequency perspectives.
arXiv Detail & Related papers (2024-05-28T06:48:02Z) - DynaMoN: Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields [71.94156412354054]
We propose Dynamic Motion-Aware Fast and Robust Camera Localization for Dynamic Neural Radiance Fields (DynaMoN).
DynaMoN handles dynamic content during initial camera pose estimation and uses statics-focused ray sampling for fast and accurate novel view synthesis.
We extensively evaluate our approach on two real-world dynamic datasets, the TUM RGB-D dataset and the BONN RGB-D Dynamic dataset.
arXiv Detail & Related papers (2023-09-16T08:46:59Z) - SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z) - Alignment-free HDR Deghosting with Semantics Consistent Transformer [76.91669741684173]
High dynamic range imaging aims to retrieve information from multiple low-dynamic range inputs to generate realistic output.
Existing methods often focus on the spatial misalignment across input frames caused by the foreground and/or camera motion.
We propose a novel alignment-free network with a Semantics Consistent Transformer (SCTNet) with both spatial and channel attention modules.
arXiv Detail & Related papers (2023-05-29T15:03:23Z) - OD-NeRF: Efficient Training of On-the-Fly Dynamic Neural Radiance Fields [63.04781030984006]
Dynamic neural radiance fields (dynamic NeRFs) have demonstrated impressive results in novel view synthesis on 3D dynamic scenes.
We propose OD-NeRF, which efficiently trains and renders dynamic NeRFs on the fly and is capable of streaming the dynamic scene.
Our algorithm achieves an interactive speed of 6 FPS for on-the-fly training and rendering on synthetic dynamic scenes, and a significant speed-up over the state of the art on real-world dynamic scenes.
arXiv Detail & Related papers (2023-05-24T07:36:47Z) - Temporally Consistent Online Depth Estimation in Dynamic Scenes [17.186528244457055]
Temporally consistent depth estimation is crucial for real-time applications such as augmented reality.
We present a technique to produce temporally consistent depth estimates in dynamic scenes in an online setting.
Our network augments current per-frame stereo networks with novel motion and fusion networks.
arXiv Detail & Related papers (2021-11-17T19:00:51Z) - Limited-angle tomographic reconstruction of dense layered objects by dynamical machine learning [68.9515120904028]
Limited-angle tomography of strongly scattering quasi-transparent objects is a challenging, highly ill-posed problem.
Regularizing priors are necessary to reduce artifacts by improving the condition of such problems.
We devised a recurrent neural network (RNN) architecture with a novel split-convolutional gated recurrent unit (SC-GRU) as the building block.
arXiv Detail & Related papers (2020-07-21T11:48:22Z)