UHRNet: A Deep Learning-Based Method for Accurate 3D Reconstruction from
a Single Fringe-Pattern
- URL: http://arxiv.org/abs/2304.14503v1
- Date: Sun, 23 Apr 2023 08:39:05 GMT
- Title: UHRNet: A Deep Learning-Based Method for Accurate 3D Reconstruction from
a Single Fringe-Pattern
- Authors: Yixiao Wang, Canlin Zhou, Xingyang Qi, Hui Li
- Abstract summary: We propose using a U-shaped High-resolution Network (UHRNet) to improve the method's accuracy.
The network uses a UNet encoder-decoder structure as its backbone, with a Multi-Level Convolution Block and a High-Resolution Fusion Block applied.
Our experimental results show that our proposed method can increase the accuracy of 3D reconstruction from a single fringe pattern.
- Score: 3.5401671460123576
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The quick and accurate retrieval of an object's height from a single
fringe pattern in Fringe Projection Profilometry has been a topic of ongoing
research. While a single-shot fringe-to-depth CNN-based method can restore the
height map directly from a single pattern, its accuracy is currently inferior
to the traditional phase-shifting technique. To improve this method's accuracy,
we propose using a U-shaped High-resolution Network (UHRNet). The network uses
a UNet encoder-decoder structure as its backbone, with a Multi-Level
Convolution Block and a High-Resolution Fusion Block applied to extract local
and global features. We also designed a compound loss function, combining a
Structural Similarity Index Measure loss (SSIMLoss) with a chunked L2 loss, to
improve 3D reconstruction details. We conducted several experiments to
demonstrate the validity and robustness of the proposed method. The average
RMSE of 3D reconstruction by our method is only 0.443 mm, which is 41.13% of
that of the UNet method and 33.31% of that of Wang et al.'s hNet method. Our
experimental results show that our proposed method can increase the accuracy
of 3D reconstruction from a single fringe pattern.
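The compound loss above can be sketched roughly as follows. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the single-window (global) SSIM, the 8x8 chunk size, and the equal weighting `alpha` are all assumptions, since the abstract does not give these details.

```python
import numpy as np

def ssim(pred, gt, c1=0.01**2, c2=0.03**2):
    # Simplified global (single-window) SSIM between two height maps.
    # The paper likely uses a windowed SSIM; this is an assumption.
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    return ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / \
           ((mu_p**2 + mu_g**2 + c1) * (var_p + var_g + c2))

def chunked_l2(pred, gt, chunk=8):
    # Split the maps into chunk x chunk tiles and average the per-tile
    # RMS errors, so every region contributes regardless of magnitude.
    # The chunk size is a hypothetical choice.
    h, w = pred.shape
    losses = []
    for i in range(0, h, chunk):
        for j in range(0, w, chunk):
            diff = pred[i:i + chunk, j:j + chunk] - gt[i:i + chunk, j:j + chunk]
            losses.append(np.sqrt((diff**2).mean()))
    return float(np.mean(losses))

def compound_loss(pred, gt, alpha=0.5):
    # alpha balances the structural (SSIM) and per-region (chunked L2)
    # terms; the abstract does not state the paper's weighting.
    return alpha * (1.0 - ssim(pred, gt)) + (1.0 - alpha) * chunked_l2(pred, gt)
```

The SSIM term rewards globally consistent structure while the chunked L2 term penalizes localized height errors that a plain image-wide L2 would average away.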
Related papers
- Neural Poisson Surface Reconstruction: Resolution-Agnostic Shape
Reconstruction from Point Clouds [53.02191521770926]
We introduce Neural Poisson Surface Reconstruction (nPSR), an architecture for shape reconstruction that addresses the challenge of recovering 3D shapes from points.
nPSR exhibits two main advantages: First, it enables efficient training on low-resolution data while achieving comparable performance at high-resolution evaluation.
Overall, the neural Poisson surface reconstruction not only improves upon the limitations of classical deep neural networks in shape reconstruction but also achieves superior results in terms of reconstruction quality, running time, and resolution agnosticism.
arXiv Detail & Related papers (2023-08-03T13:56:07Z) - Cut-and-Approximate: 3D Shape Reconstruction from Planar Cross-sections
with Deep Reinforcement Learning [0.0]
To the best of our knowledge, we present the first 3D shape reconstruction network to solve this task.
Our method is based on applying a Reinforcement Learning algorithm to learn how to effectively parse the shape.
arXiv Detail & Related papers (2022-10-22T17:48:12Z) - Improving Point Cloud Based Place Recognition with Ranking-based Loss
and Large Batch Training [1.116812194101501]
The paper presents a simple and effective learning-based method for computing a discriminative 3D point cloud descriptor.
We employ recent advances in image retrieval and propose a modified version of a loss function based on a differentiable average precision approximation.
arXiv Detail & Related papers (2022-03-02T09:29:28Z) - EGFN: Efficient Geometry Feature Network for Fast Stereo 3D Object
Detection [51.52496693690059]
Fast stereo-based 3D object detectors lag far behind high-precision-oriented methods in accuracy.
We argue that the main reason is the missing or poor 3D geometry feature representation in fast stereo-based methods.
The proposed EGFN outperforms YOLOStereo3D, the advanced fast method, by 5.16% on mAP$_{3d}$ at the cost of merely 12 ms of additional latency.
arXiv Detail & Related papers (2021-11-28T05:25:36Z) - Learnable Triangulation for Deep Learning-based 3D Reconstruction of
Objects of Arbitrary Topology from Single RGB Images [12.693545159861857]
We propose a novel deep reinforcement learning-based approach for 3D object reconstruction from monocular images.
The proposed method outperforms the state-of-the-art in terms of visual quality, reconstruction accuracy, and computational time.
arXiv Detail & Related papers (2021-09-24T09:44:22Z) - Geometry Uncertainty Projection Network for Monocular 3D Object
Detection [138.24798140338095]
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
arXiv Detail & Related papers (2021-07-29T06:59:07Z) - 3D Human Pose and Shape Regression with Pyramidal Mesh Alignment
Feedback Loop [128.07841893637337]
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Minor deviations in parameters may lead to noticeable misalignment between the estimated meshes and image evidence.
We propose a Pyramidal Mesh Alignment Feedback (PyMAF) loop to leverage a feature pyramid and rectify the predicted parameters.
arXiv Detail & Related papers (2021-03-30T17:07:49Z) - Reinforced Axial Refinement Network for Monocular 3D Object Detection [160.34246529816085]
Monocular 3D object detection aims to extract the 3D position and properties of objects from a 2D input image.
Conventional approaches sample 3D bounding boxes from the space and infer the relationship between the target object and each of them; however, the probability of effective samples is relatively small in the 3D space.
We propose to start with an initial prediction and refine it gradually towards the ground truth, with only one 3D parameter changed in each step.
This requires designing a policy which gets a reward after several steps, and thus we adopt reinforcement learning to optimize it.
arXiv Detail & Related papers (2020-08-31T17:10:48Z) - Progressively Guided Alternate Refinement Network for RGB-D Salient
Object Detection [63.18846475183332]
We aim to develop an efficient and compact deep network for RGB-D salient object detection.
We propose a progressively guided alternate refinement network to refine the initial coarse prediction.
Our model outperforms existing state-of-the-art approaches by a large margin.
arXiv Detail & Related papers (2020-08-17T02:55:06Z) - Ladybird: Quasi-Monte Carlo Sampling for Deep Implicit Field Based 3D
Reconstruction with Symmetry [12.511526058118143]
We propose a sampling scheme that theoretically encourages generalization and results in fast convergence for SGD-based optimization algorithms.
Based on the reflective symmetry of an object, we propose a feature fusion method that alleviates issues due to self-occlusions.
Our proposed system Ladybird is able to create high quality 3D object reconstructions from a single input image.
arXiv Detail & Related papers (2020-07-27T09:17:00Z) - Unstructured Road Vanishing Point Detection Using the Convolutional
Neural Network and Heatmap Regression [3.8170259685864165]
We propose a novel solution combining the convolutional neural network (CNN) and heatmap regression to detect unstructured road VP.
The proposed algorithm first adopts a lightweight backbone, i.e., a depthwise-convolution-modified HRNet, to extract hierarchical features of the unstructured road image.
Three advanced strategies, i.e., multi-scale supervised learning, heatmap super-resolution, and coordinate regression techniques, are utilized to achieve fast and high-precision unstructured road VP detection.
arXiv Detail & Related papers (2020-06-08T15:44:37Z)
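The coordinate-regression step in the heatmap-based VP detector above is commonly implemented as a soft-argmax: a softmax over the heatmap followed by taking the expected pixel position. This is a minimal sketch of that general technique, not the paper's specific pipeline; the temperature `beta` is a hypothetical parameter.

```python
import numpy as np

def soft_argmax(heatmap, beta=30.0):
    # Differentiable coordinate regression: normalize the heatmap with a
    # softmax (temperature beta), then take the expected (x, y) position
    # under the resulting distribution. Subtracting the max keeps the
    # exponentials numerically stable.
    h, w = heatmap.shape
    probs = np.exp(beta * (heatmap - heatmap.max()))
    probs /= probs.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    return float((probs * xs).sum()), float((probs * ys).sum())
```

Unlike a hard argmax, this keeps the coordinate extraction differentiable, so a localization loss on the (x, y) output can be backpropagated through the heatmap.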
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.