Self-supervised phase unwrapping in fringe projection profilometry
- URL: http://arxiv.org/abs/2302.06381v3
- Date: Tue, 30 May 2023 06:49:24 GMT
- Title: Self-supervised phase unwrapping in fringe projection profilometry
- Authors: Xiaomin Gao, Wanzhong Song, Chunqian Tan, Junzhe Lei
- Abstract summary: A novel self-supervised phase unwrapping method for single-camera fringe projection profilometry is proposed.
The trained network can retrieve the absolute fringe order from a single 64-period phase map and outperforms DF-TPU approaches in terms of depth accuracy.
Experimental results validate the proposed method on real scenes with motion blur, isolated objects, low reflectivity, and phase discontinuities.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fast-speed and high-accuracy three-dimensional (3D) shape measurement has
been the goal all along in fringe projection profilometry (FPP). The
dual-frequency temporal phase unwrapping method (DF-TPU) is one of the
prominent technologies to achieve this goal. However, the period number of the
high-frequency pattern of existing DF-TPU approaches is usually limited by the
inevitable phase errors, setting a limit to measurement accuracy.
Deep-learning-based phase unwrapping methods for single-camera FPP usually
require labeled data for training. In this letter, a novel self-supervised
phase unwrapping method for single-camera FPP systems is proposed. The trained
network can retrieve the absolute fringe order from a single 64-period phase map
and outperforms DF-TPU approaches in terms of depth accuracy. Experimental
results validate the proposed method on real scenes with motion blur, isolated
objects, low reflectivity, and phase discontinuities.
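The dual-frequency temporal phase unwrapping (DF-TPU) scheme the abstract compares against can be sketched numerically: a single-period (unit-frequency) wrapped phase is already absolute across the field of view, so scaling it by the period count predicts the absolute high-frequency phase, and rounding the gap to the wrapped high-frequency phase recovers the integer fringe order. This is a minimal, noise-free illustration of the general DF-TPU idea, not the paper's implementation; the function name and synthetic data are assumptions.

```python
import numpy as np

def dftpu_unwrap(phi_high, phi_low, n_periods):
    """Recover the absolute high-frequency phase from two wrapped phases.

    phi_high : wrapped phase of the n_periods-period pattern, in [0, 2*pi)
    phi_low  : wrapped phase of the single-period pattern, in [0, 2*pi)
               (one period across the field of view => already absolute)
    """
    # Predict the absolute high-frequency phase from the low-frequency phase.
    predicted = n_periods * phi_low
    # Fringe order: number of 2*pi wraps between prediction and phi_high.
    k = np.round((predicted - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k

# Synthetic check: a smooth absolute phase spanning 64 periods.
x = np.linspace(0, 1, 1000, endpoint=False)
phase_abs = 64 * 2 * np.pi * x                # ground-truth absolute phase
phi_h = np.mod(phase_abs, 2 * np.pi)          # wrapped 64-period phase
phi_l = np.mod(2 * np.pi * x, 2 * np.pi)      # wrapped single-period phase
recovered = dftpu_unwrap(phi_h, phi_l, 64)    # matches phase_abs when noise-free
```

In practice, phase noise in `phi_low` is amplified by the factor `n_periods` before rounding, which is why the period number of existing DF-TPU approaches is limited, as the abstract notes.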
Related papers
- PCF-Lift: Panoptic Lifting by Probabilistic Contrastive Fusion [80.79938369319152]
We design a new pipeline, coined PCF-Lift, based on our Probabilistic Contrastive Fusion (PCF).
PCF-Lift significantly outperforms state-of-the-art methods on widely used benchmarks, including the ScanNet dataset and the Messy Room dataset (a 4.4% improvement in scene-level PQ).
arXiv Detail & Related papers (2024-10-14T16:06:59Z) - Enhanced fringe-to-phase framework using deep learning [2.243491254050456]
We introduce SFNet, a symmetric fusion network that transforms two fringe images into an absolute phase.
To enhance output reliability, our framework predicts refined phases by incorporating information from fringe images of a frequency different from those used as input.
arXiv Detail & Related papers (2024-02-01T19:47:34Z) - Post-Processing Temporal Action Detection [134.26292288193298]
Temporal Action Detection (TAD) methods typically take a pre-processing step in converting an input varying-length video into a fixed-length snippet representation sequence.
This pre-processing step would temporally downsample the video, reducing the inference resolution and hampering the detection performance in the original temporal resolution.
We introduce a novel model-agnostic post-processing method without model redesign and retraining.
arXiv Detail & Related papers (2022-11-27T19:50:37Z) - Weakly-Supervised Optical Flow Estimation for Time-of-Flight [11.496094830445054]
We propose a training algorithm that allows supervising Optical Flow networks directly on the reconstructed depth.
We demonstrate that this approach enables the training of OF networks to align raw iToF measurements and compensate motion artifacts in the iToF depth images.
arXiv Detail & Related papers (2022-10-11T09:47:23Z) - Deep Learning-enabled Spatial Phase Unwrapping for 3D Measurement [7.104399331837426]
A single-camera system projecting single-frequency patterns is the ideal option among all proposed Fringe Projection Profilometry (FPP) systems.
This paper proposes a hybrid method combining deep learning and traditional path-following for robust SPU in FPP.
arXiv Detail & Related papers (2022-08-06T14:19:03Z) - FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction [73.85709132666626]
Existing representations, such as parametric models, voxel grids, meshes and implicit neural representations, have difficulties achieving high-quality results and real-time speed at the same time.
We propose Fourier Occupancy Field (FOF), a novel powerful, efficient and flexible 3D representation, for monocular real-time and accurate human reconstruction.
A FOF can be stored as a multi-channel image, which is compatible with 2D convolutional neural networks and can bridge the gap between 3D and 2D images.
arXiv Detail & Related papers (2022-06-05T14:45:02Z) - Self-Supervised Multi-Frame Monocular Scene Flow [61.588808225321735]
We introduce a multi-frame monocular scene flow network based on self-supervised learning.
We observe state-of-the-art accuracy among monocular scene flow methods based on self-supervised learning.
arXiv Detail & Related papers (2021-05-05T17:49:55Z) - Fully Convolutional Line Parsing [25.80938920093857]
We present a one-stage Fully Convolutional Line Parsing network (F-Clip) that detects line segments from images.
F-Clip detects line segments in an end-to-end fashion by predicting them with each line's center position, length, and angle.
We conduct extensive experiments and show that our method achieves a significantly better trade-off between efficiency and accuracy.
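The center/length/angle parameterization F-Clip predicts can be decoded to the usual two-endpoint form with basic trigonometry; a minimal sketch (the function name and layout are illustrative, not the paper's actual API):

```python
import math

def to_endpoints(cx, cy, length, angle):
    """Convert a (center, length, angle) line parameterization to endpoints.

    angle is measured in radians from the positive x-axis; the two endpoints
    lie half a length away from the center in opposite directions.
    """
    dx = 0.5 * length * math.cos(angle)
    dy = 0.5 * length * math.sin(angle)
    return (cx - dx, cy - dy), (cx + dx, cy + dy)

# A horizontal segment of length 10 centered at (5, 5).
p0, p1 = to_endpoints(5.0, 5.0, 10.0, 0.0)
print(p0, p1)  # (0.0, 5.0) (10.0, 5.0)
```

This single-shot parameterization is what lets F-Clip avoid the separate junction-proposal and matching stages used by two-stage wireframe parsers.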
arXiv Detail & Related papers (2021-04-22T17:41:12Z) - Holistically-Attracted Wireframe Parsing [123.58263152571952]
This paper presents a fast and parsimonious parsing method to detect a vectorized wireframe in an input image with a single forward pass.
The proposed method is end-to-end trainable, consisting of three components: (i) line segment and junction proposal generation, (ii) line segment and junction matching, and (iii) line segment and junction verification.
arXiv Detail & Related papers (2020-03-03T17:43:57Z) - 3DSSD: Point-based 3D Single Stage Object Detector [61.67928229961813]
We present a point-based 3D single stage object detector, named 3DSSD, achieving a good balance between accuracy and efficiency.
Our method outperforms all state-of-the-art voxel-based single stage methods by a large margin, and has comparable performance to two stage point-based methods as well.
arXiv Detail & Related papers (2020-02-24T12:01:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.