ROSEFusion: Random Optimization for Online Dense Reconstruction under
Fast Camera Motion
- URL: http://arxiv.org/abs/2105.05600v1
- Date: Wed, 12 May 2021 11:37:34 GMT
- Title: ROSEFusion: Random Optimization for Online Dense Reconstruction under
Fast Camera Motion
- Authors: Jiazhao Zhang, Chenyang Zhu, Lintao Zheng, Kai Xu
- Abstract summary: Online reconstruction based on RGB-D sequences has thus far been restrained to relatively slow camera motions (<1m/s)
Fast motion brings two challenges to depth fusion: 1) the high nonlinearity of camera pose optimization due to large inter-frame rotations and 2) the lack of reliably trackable features due to motion blur.
We propose to tackle the difficulties of fast-motion camera tracking in the absence of inertial measurements using random optimization.
Thanks to the efficient template-based particle set evolution and the effective fitness function, our method attains good-quality pose tracking under fast camera motion (up to 4m/s) at a real-time frame rate.
- Score: 15.873973449155313
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online reconstruction based on RGB-D sequences has thus far been restrained
to relatively slow camera motions (<1m/s). Under very fast camera motion (e.g.,
3m/s), the reconstruction can easily crumble even for the state-of-the-art
methods. Fast motion brings two challenges to depth fusion: 1) the high
nonlinearity of camera pose optimization due to large inter-frame rotations and
2) the lack of reliably trackable features due to motion blur. We propose to
tackle the difficulties of fast-motion camera tracking in the absence of
inertial measurements using random optimization, in particular, the Particle
Filter Optimization (PFO). To surmount the computation-intensive particle
sampling and update in standard PFO, we propose to accelerate the randomized
search via updating a particle swarm template (PST). PST is a set of particles
pre-sampled uniformly within the unit sphere in the 6D space of camera pose.
Through moving and rescaling the pre-sampled PST guided by swarm intelligence,
our method is able to drive tens of thousands of particles to locate and cover
a good local optimum extremely fast and robustly. The particles, representing
candidate poses, are evaluated with a fitness function defined based on
depth-model conformance. Therefore, our method, being depth-only and
correspondence-free, mitigates the motion blur impediment as ToF-based depths
are often resilient to motion blur. Thanks to the efficient template-based
particle set evolution and the effective fitness function, our method attains
good-quality pose tracking under fast camera motion (up to 4m/s) at a real-time
frame rate, without loop closure or global pose optimization. Through
extensive evaluations on public datasets of RGB-D sequences, especially on a
newly proposed benchmark of fast camera motion, we demonstrate the significant
advantage of our method over the state of the art.
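
For intuition, the core loop described above can be sketched compactly. The following Python sketch is illustrative only, not the authors' implementation: `tsdf_value` (a signed-distance query into the fused model), `transform`, the additive 6D pose perturbation, and the fixed recenter-and-shrink schedule are all simplifying assumptions made here; the paper's actual PST update moves and rescales the template anisotropically, guided by swarm intelligence.

```python
# Minimal sketch of PST-style random optimization for frame-to-model tracking.
# Hypothetical interfaces (not from the paper): poses are 6-vectors
# [tx, ty, tz, rx, ry, rz] (translation + axis-angle), and tsdf_value(points)
# returns signed distances queried from the fused volumetric model.
import numpy as np

def sample_unit_ball(n, dim=6, rng=None):
    """Pre-sample the particle swarm template: n points uniform in the 6D unit ball."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=(n, dim))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform directions on the sphere
    r = rng.uniform(size=(n, 1)) ** (1.0 / dim)    # radii that make the ball uniform
    return x * r

def transform(points, pose):
    """Apply a 6-DoF pose [t | axis-angle w] to an (N, 3) point array via Rodrigues."""
    t, w = pose[:3], pose[3:]
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return points + t
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return points @ R.T + t

def fitness(pose, depth_points, tsdf_value, trunc=0.02):
    """Depth-model conformance: fraction of back-projected depth points that
    land near the zero crossing of the fused TSDF under this candidate pose."""
    return np.mean(np.abs(tsdf_value(transform(depth_points, pose))) < trunc)

def track_frame(prev_pose, depth_points, tsdf_value, pst,
                sigma0=0.1, iters=20, shrink=0.5):
    """Move and rescale the pre-sampled template around the running best pose."""
    center, sigma = prev_pose.copy(), sigma0
    best_pose = center
    best_fit = fitness(center, depth_points, tsdf_value)
    for _ in range(iters):
        candidates = center + sigma * pst  # recentred, rescaled template (additive simplification)
        fits = np.array([fitness(p, depth_points, tsdf_value) for p in candidates])
        i = int(np.argmax(fits))
        if fits[i] > best_fit:
            best_pose, best_fit = candidates[i], fits[i]
        center = best_pose   # move toward the current optimum ...
        sigma *= shrink      # ... and contract the search region
    return best_pose

# Usage (with a hypothetical tsdf_value): the template is sampled once and
# reused for every frame, which is what keeps the per-frame search cheap.
# pst = sample_unit_ball(10240)
# pose = track_frame(prev_pose, depth_pts, tsdf_value, pst)
```

Note the design point the abstract emphasizes: because the fitness uses only depth-to-model conformance, no image features or correspondences are needed, so motion blur in the color stream does not degrade tracking.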
Related papers
- CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images [12.603775893040972]
We propose continuous rigid motion-aware gaussian splatting (CRiM-GS) to reconstruct an accurate 3D scene from blurry images at real-time rendering speed.
We leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object.
Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems.
arXiv Detail & Related papers (2024-07-04T13:37:04Z)
- Motion-adaptive Separable Collaborative Filters for Blind Motion Deblurring [71.60457491155451]
Eliminating image blur produced by various kinds of motion has been a challenging problem.
We propose a novel real-world deblurring filtering model called the Motion-adaptive Separable Collaborative Filter.
Our method provides an effective solution for real-world motion blur removal and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-04-19T19:44:24Z)
- Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods.
arXiv Detail & Related papers (2024-03-20T06:19:41Z)
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- Minimum Latency Deep Online Video Stabilization [77.68990069996939]
We present a novel camera path optimization framework for the task of online video stabilization.
In this work, we adopt recent off-the-shelf high-quality deep motion models for motion estimation to recover the camera trajectory.
Our approach significantly outperforms state-of-the-art online methods both qualitatively and quantitatively.
arXiv Detail & Related papers (2022-12-05T07:37:32Z)
- Parallel Inversion of Neural Radiance Fields for Robust Pose Estimation [26.987638406423123]
We present a parallelized optimization method based on fast Neural Radiance Fields (NeRF) for estimating 6-DoF target poses.
We can predict the translation and rotation of the camera by minimizing the residual between pixels rendered from a fast NeRF model and pixels in the observed image.
Experiments demonstrate that our method can achieve improved generalization and robustness on both synthetic and real-world benchmarks.
arXiv Detail & Related papers (2022-10-18T19:09:58Z)
- Globally-Optimal Event Camera Motion Estimation [30.79931004393174]
Event cameras are bio-inspired sensors that perform well in HDR conditions and have high temporal resolution.
Event cameras measure asynchronous pixel-level changes and return them in a highly discretised format.
arXiv Detail & Related papers (2022-03-08T08:24:22Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Spatiotemporal Bundle Adjustment for Dynamic 3D Human Reconstruction in the Wild [49.672487902268706]
We present a framework that jointly estimates camera temporal alignment and 3D point triangulation.
We reconstruct 3D motion trajectories of human bodies in events captured by multiple uncalibrated and unsynchronized video cameras.
arXiv Detail & Related papers (2020-07-24T23:50:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.