Binomial Self-compensation for Motion Error in Dynamic 3D Scanning
- URL: http://arxiv.org/abs/2404.06693v3
- Date: Sat, 26 Oct 2024 03:19:42 GMT
- Title: Binomial Self-compensation for Motion Error in Dynamic 3D Scanning
- Authors: Geyou Zhang, Ce Zhu, Kai Liu
- Abstract summary: PSP's fundamental assumption that the object remains static is violated in dynamic measurement.
We propose a pixel-wise and frame-wise loopable binomial self-compensation (BSC) algorithm to effectively and flexibly eliminate motion error in four-step PSP.
Our BSC outperforms existing methods in reducing motion error while achieving a depth map frame rate equal to the camera's acquisition rate (90 fps).
- Score: 29.19420538435485
- Abstract: Phase shifting profilometry (PSP) is favored in high-precision 3D scanning due to its high accuracy, robustness, and pixel-wise property. However, a fundamental assumption of PSP, that the object should remain static, is violated in dynamic measurement, making PSP susceptible to object motion and resulting in ripple-like errors in the point clouds. We propose a pixel-wise and frame-wise loopable binomial self-compensation (BSC) algorithm to effectively and flexibly eliminate motion error in four-step PSP. Our mathematical model demonstrates that by summing successive motion-affected phase frames weighted by binomial coefficients, motion error diminishes exponentially as the binomial order increases, accomplishing automatic error compensation through the motion-affected phase sequence alone, without the assistance of any intermediate variable. Extensive experiments show that our BSC outperforms existing methods in reducing motion error, while achieving a depth map frame rate equal to the camera's acquisition rate (90 fps), enabling high-accuracy 3D reconstruction at a quasi-single-shot frame rate.
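As an illustration of the mechanism described above, here is a minimal NumPy sketch, not the authors' implementation: `four_step_phase` assumes the standard pi/2-shifted fringe model, and `bsc_phase` assumes the successive phase frames carry no 2-pi wraps inside the averaging window (the published BSC handles wrapping and the frame-wise loopable windowing explicitly).

```python
import numpy as np
from math import comb

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four pi/2-shifted fringe images,
    I_k = A + B*cos(phi - k*pi/2)."""
    return np.arctan2(I1 - I3, I0 - I2)

def bsc_phase(phase_frames, K):
    """Binomial self-compensation sketch: sum K+1 successive
    motion-affected phase frames with weights C(K, k) / 2**K,
    a normalized binomial kernel. Assumes no 2*pi jumps within
    the window; the published BSC treats wrapping explicitly."""
    assert len(phase_frames) >= K + 1
    weights = np.array([comb(K, k) for k in range(K + 1)], float) / 2.0**K
    return sum(w * p for w, p in zip(weights, phase_frames[:K + 1]))
```

The claimed property is that the residual motion error shrinks exponentially in the binomial order K, so a small K already suppresses the ripple-like artifacts while every camera frame still yields a phase map.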
Related papers
- MotionGS: Exploring Explicit Motion Guidance for Deformable 3D Gaussian Splatting [56.785233997533794]
We propose a novel deformable 3D Gaussian splatting framework called MotionGS.
MotionGS explores explicit motion priors to guide the deformation of 3D Gaussians.
Experiments in the monocular dynamic scenes validate that MotionGS surpasses state-of-the-art methods.
arXiv Detail & Related papers (2024-10-10T08:19:47Z)
- CRiM-GS: Continuous Rigid Motion-Aware Gaussian Splatting from Motion Blur Images [12.603775893040972]
We propose continuous rigid motion-aware gaussian splatting (CRiM-GS) to reconstruct accurate 3D scene from blurry images with real-time rendering speed.
We leverage rigid body transformations to model the camera motion with proper regularization, preserving the shape and size of the object.
Furthermore, we introduce a continuous deformable 3D transformation in the SE(3) field to adapt the rigid body transformation to real-world problems.
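The "continuous deformable 3D transformation in the SE(3) field" builds on the standard SE(3) exponential map; below is a generic NumPy sketch of that map (Rodrigues' formula), not CRiM-GS's actual trajectory parameterization.

```python
import numpy as np

def se3_exp(xi):
    """Map a twist xi = (omega, v) in R^6 to a 4x4 rigid-body
    transform in SE(3) via the closed-form exponential."""
    omega, v = xi[:3], xi[3:]
    theta = np.linalg.norm(omega)
    W = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])
    if theta < 1e-8:                          # small-angle fallback
        R, V = np.eye(3) + W, np.eye(3)
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (1.0 - A) / theta**2
        R = np.eye(3) + A * W + B * (W @ W)   # Rodrigues' rotation
        V = np.eye(3) + B * W + C * (W @ W)   # left Jacobian
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T
```

A continuous exposure-time trajectory can then be modeled as T(t) = se3_exp(t * xi) for t in [0, 1], one common way such regularized rigid motions are parameterized.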
arXiv Detail & Related papers (2024-07-04T13:37:04Z)
- DeblurGS: Gaussian Splatting for Camera Motion Blur [45.13521168573883]
We propose DeblurGS, a method to optimize sharp 3D Gaussian Splatting from motion-blurred images.
We restore a fine-grained sharp scene by leveraging the remarkable reconstruction capability of 3D Gaussian Splatting.
Our approach estimates the 6-Degree-of-Freedom camera motion for each blurry observation and synthesizes corresponding blurry renderings.
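The blur synthesis step follows the classic image-formation model: a blurry observation is the average of sharp renderings along the camera trajectory during the exposure. A minimal sketch, where `render_fn` and `poses` are hypothetical stand-ins for the splatting renderer and the estimated 6-DoF sub-frame poses:

```python
import numpy as np

def synthesize_blur(render_fn, poses):
    """Blur formation sketch: average sharp renderings over poses
    sampled along the exposure. `render_fn(pose) -> HxWx3 array` and
    `poses` (4x4 camera matrices) are illustrative, not DeblurGS's API."""
    return np.mean([render_fn(p) for p in poses], axis=0)
```

Optimizing the scene and the per-observation trajectory so that this synthesized blur matches the captured image is what lets the sharp reconstruction emerge.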
arXiv Detail & Related papers (2024-04-17T13:14:52Z)
- Motion-induced error reduction for high-speed dynamic digital fringe projection system [1.506359725738692]
In phase-shifting profilometry, any motion during the acquisition of fringe patterns can introduce errors.
We propose a method to pixel-wise reduce the errors when the measurement system is in motion due to a motorized linear stage.
arXiv Detail & Related papers (2024-01-29T07:57:43Z)
- Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations [45.576866560987405]
We present a framework for certifying the robustness of 3D-2D projective transformations against camera motion perturbations.
Our approach leverages a smoothing distribution over the 2D pixel space instead of in the 3D physical space.
Our approach achieves approximately 80% certified accuracy while utilizing only 30% of the projected image frames.
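The certification machinery rests on randomized smoothing: aggregate predictions over many perturbed copies of the input. Here is a generic majority-vote sketch using plain Gaussian pixel noise; the paper's contribution is a smoothing distribution in the 2D pixel space tailored to camera-motion perturbations, which this sketch does not reproduce, and `classifier` is a hypothetical base model.

```python
import numpy as np

def smoothed_prediction(classifier, image, sigma=0.1, n=100, seed=0):
    """Randomized-smoothing sketch: classify n noisy copies of the
    image and return the majority-vote label. `classifier(img) -> int`
    is an assumed black-box base model."""
    rng = np.random.default_rng(seed)
    labels = [classifier(image + rng.normal(0.0, sigma, image.shape))
              for _ in range(n)]
    return int(np.bincount(labels).argmax())
```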
arXiv Detail & Related papers (2023-09-22T19:15:49Z)
- NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation [53.25973084799954]
We present NIKI (Neural Inverse Kinematics with Invertible Neural Network), which models bi-directional errors.
NIKI can learn from both the forward and inverse processes with invertible networks.
arXiv Detail & Related papers (2023-05-15T12:13:24Z)
- Towards Nonlinear-Motion-Aware and Occlusion-Robust Rolling Shutter Correction [54.00007868515432]
Existing methods face challenges in estimating the accurate correction field due to the uniform velocity assumption.
We propose a geometry-based Quadratic Rolling Shutter (QRS) motion solver, which precisely estimates the high-order correction field of individual pixels.
Our method surpasses the state-of-the-art by +4.98, +0.77, and +4.33 dB of PSNR on the Carla-RS, Fastec-RS, and BS-RSC datasets, respectively.
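The point of the quadratic model is easy to state: each scanline is exposed at a different time, and instead of the uniform-velocity assumption the per-pixel displacement is allowed a second-order (acceleration) term. A schematic sketch with illustrative per-pixel velocity and acceleration, not the paper's solver:

```python
def qrs_displacement(v, a, row, num_rows, readout=1.0):
    """Quadratic rolling-shutter motion sketch: a pixel on scanline
    `row` is exposed at t = (row / num_rows) * readout, and its
    displacement accrued since the virtual global-shutter instant
    t = 0 is d(t) = v*t + 0.5*a*t**2 (v, a are illustrative per-pixel
    velocity and acceleration; correction subtracts this displacement)."""
    t = (row / num_rows) * readout
    return v * t + 0.5 * a * t**2
```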
arXiv Detail & Related papers (2023-03-31T15:09:18Z)
- Motion-from-Blur: 3D Shape and Motion Estimation of Motion-blurred Objects in Videos [115.71874459429381]
We propose a method for jointly estimating the 3D motion, 3D shape, and appearance of highly motion-blurred objects from a video.
Experiments on benchmark datasets demonstrate that our method outperforms previous methods for fast moving object deblurring and 3D reconstruction.
arXiv Detail & Related papers (2021-11-29T11:25:14Z)
- Uncertainty-Aware Camera Pose Estimation from Points and Lines [101.03675842534415]
Perspective-n-Point-and-Line (PnPL) aims at fast, accurate, and robust camera localization with respect to a 3D model from 2D-3D feature coordinates.
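For context, the point-only PnP subproblem can be exercised with OpenCV's `solvePnP`; OpenCV offers no line-based solver, so the line correspondences and uncertainty handling are the paper's extension. A runnable toy example with synthetic correspondences:

```python
import numpy as np
import cv2

object_pts = np.random.rand(6, 3)                           # 3D model points
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])   # intrinsics
rvec_gt = np.array([0.1, -0.2, 0.05])                       # ground-truth pose
tvec_gt = np.array([0.0, 0.1, 2.0])

# Project the 3D points to get matching 2D observations, then solve back.
image_pts, _ = cv2.projectPoints(object_pts, rvec_gt, tvec_gt, K, None)
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
print(ok, rvec.ravel(), tvec.ravel())   # recovers rvec_gt, tvec_gt
```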
arXiv Detail & Related papers (2021-07-08T15:19:36Z)
- Visual Odometry with an Event Camera Using Continuous Ray Warping and Volumetric Contrast Maximization [31.627936023222052]
We present a new solution to tracking and mapping with an event camera.
The motion of the camera contains both rotation and translation, and the displacements happen in an arbitrarily structured environment.
We introduce a new solution to this problem by performing contrast maximization in 3D.
The practical validity of our approach is supported by an application to AGV motion estimation and 3D reconstruction with a single vehicle-mounted event camera.
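Contrast maximization scores a candidate motion by how sharply the warped events accumulate; the paper lifts this idea to 3D via continuous ray warping and a volumetric objective. A minimal 2D sketch of the underlying principle:

```python
import numpy as np

def contrast(events, flow, shape=(180, 240)):
    """Warp events (columns x, y, t) to time t=0 with a candidate 2D
    flow (px/s), accumulate an image of warped event counts, and score
    it by variance; the best-aligned motion maximizes this contrast."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw = np.clip(np.round(x - flow[0] * t), 0, shape[1] - 1).astype(int)
    yw = np.clip(np.round(y - flow[1] * t), 0, shape[0] - 1).astype(int)
    img = np.zeros(shape)
    np.add.at(img, (yw, xw), 1.0)       # event accumulation image
    return img.var()
```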
arXiv Detail & Related papers (2021-07-07T04:32:57Z)
- Spatiotemporal Bundle Adjustment for Dynamic 3D Human Reconstruction in the Wild [49.672487902268706]
We present a framework that jointly estimates camera temporal alignment and 3D point triangulation.
We reconstruct 3D motion trajectories of human bodies in events captured by multiple unsynchronized and uncalibrated video cameras.
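A core building block of such a framework is two-view triangulation; the spatiotemporal part then estimates per-camera time offsets jointly with points like these. A standard linear (DLT) sketch, not the paper's full objective:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear two-view triangulation: recover a 3D point from pixel
    observations x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.stack([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                 # dehomogenize
```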
arXiv Detail & Related papers (2020-07-24T23:50:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.