SST: Real-time End-to-end Monocular 3D Reconstruction via Sparse
Spatial-Temporal Guidance
- URL: http://arxiv.org/abs/2212.06524v2
- Date: Tue, 25 Jul 2023 02:22:16 GMT
- Title: SST: Real-time End-to-end Monocular 3D Reconstruction via Sparse
Spatial-Temporal Guidance
- Authors: Chenyangguang Zhang, Zhiqiang Lou, Yan Di, Federico Tombari and
Xiangyang Ji
- Abstract summary: Real-time monocular 3D reconstruction is a challenging problem that remains unsolved.
We propose an end-to-end 3D reconstruction network SST, which utilizes Sparse estimated points from visual SLAM system.
SST outperforms all state-of-the-art competitors, whilst keeping a high inference speed at 59 FPS.
- Score: 71.3027345302485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time monocular 3D reconstruction is a challenging problem that remains
unsolved. Although recent end-to-end methods have demonstrated promising
results, tiny structures and geometric boundaries are hardly captured due to
their insufficient supervision neglecting spatial details and oversimplified
feature fusion ignoring temporal cues. To address the problems, we propose an
end-to-end 3D reconstruction network SST, which utilizes Sparse estimated
points from visual SLAM system as additional Spatial guidance and fuses
Temporal features via a novel cross-modal attention mechanism, achieving more
detailed reconstruction results. We propose a Local Spatial-Temporal Fusion
module to exploit more informative spatial-temporal cues from multi-view color
information and sparse priors, as well as a Global Spatial-Temporal Fusion module
to refine the local TSDF volumes with the world-frame model from coarse to
fine. Extensive experiments on ScanNet and 7-Scenes demonstrate that SST
outperforms all state-of-the-art competitors, whilst keeping a high inference
speed at 59 FPS, enabling real-world applications with real-time requirements.
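The fusion of sparse SLAM points with dense image features via cross-modal attention, as described in the abstract, can be illustrated with a minimal numpy sketch. The paper's exact architecture is not given here, so the projection matrices, dimensions, and the single-head formulation below are all illustrative assumptions, not SST's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_feats, point_feats, d_k=16):
    """Dense per-pixel features (queries) attend to features of sparse
    SLAM-estimated points (keys/values), injecting geometric priors.

    img_feats:   (N_pix, d) dense image features
    point_feats: (N_pts, d) sparse point features
    """
    rng = np.random.default_rng(0)
    d = img_feats.shape[1]
    # Hypothetical learned projections, drawn randomly for the sketch.
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q, K, V = img_feats @ Wq, point_feats @ Wk, point_feats @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d_k))  # (N_pix, N_pts) weights
    return attn @ V                         # sparse geometry fused per pixel

fused = cross_modal_attention(np.ones((8, 32)), np.ones((5, 32)))
print(fused.shape)  # (8, 16)
```

In a real network the projections would be learned end-to-end and the fused features would feed the TSDF prediction head; the sketch only shows the attention-based cross-modal mixing itself.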
Related papers
- Rethinking Spatio-Temporal Transformer for Traffic Prediction: Multi-level Multi-view Augmented Learning Framework [4.773547922851949]
Traffic prediction is a challenging spatio-temporal forecasting problem that involves highly complex semantic correlations.
This paper proposes a Multi-level Multi-view Augmented Spatio-Temporal Transformer (LVST) for traffic prediction.
arXiv Detail & Related papers (2024-06-17T07:36:57Z)
- Enhanced Spatio-Temporal Context for Temporally Consistent Robust 3D Human Motion Recovery from Monocular Videos [5.258814754543826]
We propose a novel method for temporally consistent motion estimation from a monocular video.
Instead of using generic ResNet-like features, our method uses a body-aware feature representation and an independent per-frame pose.
Our method attains significantly lower acceleration error and outperforms the existing state-of-the-art methods.
arXiv Detail & Related papers (2023-11-20T10:53:59Z)
- Nothing Stands Still: A Spatiotemporal Benchmark on 3D Point Cloud Registration Under Large Geometric and Temporal Change [86.44429778015657]
Building 3D geometric maps of man-made spaces is a fundamental task in computer vision and robotics.
The Nothing Stands Still (NSS) benchmark focuses on the spatiotemporal registration of 3D scenes undergoing large spatial and temporal change.
As part of NSS, we introduce a dataset of 3D point clouds recurrently captured in large-scale building indoor environments that are under construction or renovation.
arXiv Detail & Related papers (2023-11-15T20:09:29Z)
- GO-SLAM: Global Optimization for Consistent 3D Instant Reconstruction [45.49960166785063]
GO-SLAM is a deep-learning-based dense visual SLAM framework globally optimizing poses and 3D reconstruction in real-time.
Results on various synthetic and real-world datasets demonstrate that GO-SLAM outperforms state-of-the-art approaches at tracking robustness and reconstruction accuracy.
arXiv Detail & Related papers (2023-09-05T17:59:58Z)
- Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use the signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
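The "globally sparse, locally dense" data structure described above can be sketched as a hash map from block coordinates to dense per-block voxel arrays, so that storage is only allocated near surfaces while queries inside a block stay cache-friendly. The block size, voxel size, and class interface below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

BLOCK = 8     # assumed 8^3 voxels per block
VOXEL = 0.05  # assumed voxel size in metres

class SparseSDFGrid:
    """Globally sparse, locally dense SDF grid: a dict maps integer
    block coordinates to dense (BLOCK, BLOCK, BLOCK) SDF arrays."""

    def __init__(self):
        self.blocks = {}

    def _split(self, p):
        # World point -> (block coordinate, local voxel index).
        v = np.floor(np.asarray(p) / VOXEL).astype(int)
        return tuple(v // BLOCK), tuple(v % BLOCK)

    def set_sdf(self, p, sdf):
        b, local = self._split(p)
        if b not in self.blocks:  # allocate blocks only near surfaces
            self.blocks[b] = np.full((BLOCK,) * 3, np.nan)
        self.blocks[b][local] = sdf

    def query(self, p):
        b, local = self._split(p)
        blk = self.blocks.get(b)
        return None if blk is None else blk[local]

g = SparseSDFGrid()
g.set_sdf((0.12, 0.03, 0.40), -0.01)
print(g.query((0.12, 0.03, 0.40)))  # -0.01
print(g.query((5.0, 5.0, 5.0)))     # None (empty space never allocated)
```

Exploiting spatial sparsity this way is what allows the reported speedups: empty space costs nothing, while surface neighbourhoods are stored densely for fast lookup.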
arXiv Detail & Related papers (2023-05-22T16:50:19Z)
- Local-Global Temporal Difference Learning for Satellite Video Super-Resolution [55.69322525367221]
We propose to exploit the well-defined temporal difference for efficient and effective temporal compensation.
To fully utilize the local and global temporal information within frames, we systematically modeled the short-term and long-term temporal discrepancies.
Rigorous objective and subjective evaluations conducted across five mainstream video satellites demonstrate that our method performs favorably against state-of-the-art approaches.
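The short-term and long-term temporal discrepancies mentioned above can be illustrated with plain frame differencing over a clip: adjacent-frame differences capture fast local motion, while strided differences expose slower global trends. The stride, clip shape, and function name are illustrative, not the paper's formulation.

```python
import numpy as np

def temporal_differences(frames, long_stride=4):
    """Compute short-term (adjacent-frame) and long-term (strided)
    differences over a clip of frames with shape (T, H, W)."""
    frames = np.asarray(frames, dtype=float)
    short = frames[1:] - frames[:-1]                      # fast local motion
    long_ = frames[long_stride:] - frames[:-long_stride]  # slow global trends
    return short, long_

# Toy clip whose brightness ramps linearly from 0 to 5 over 6 frames.
clip = np.stack([np.full((4, 4), t) for t in range(6)])
s, l = temporal_differences(clip)
print(s.shape, l.shape)        # (5, 4, 4) (2, 4, 4)
print(s[0, 0, 0], l[0, 0, 0])  # 1.0 4.0
```

In the actual method these difference maps would drive learned compensation modules rather than being used raw; the sketch only shows the two temporal scales being modeled.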
arXiv Detail & Related papers (2023-04-10T07:04:40Z)
- Spatio-temporal Tendency Reasoning for Human Body Pose and Shape Estimation from Videos [10.50306784245168]
We present a spatio-temporal tendency reasoning (STR) network for recovering human body pose and shape from videos.
Our STR aims to learn accurate spatio-temporal motion sequences in an unconstrained environment.
Our STR remains competitive with the state-of-the-art on three datasets.
arXiv Detail & Related papers (2022-10-07T16:09:07Z)
- Decoupling and Recoupling Spatiotemporal Representation for RGB-D-based Motion Recognition [62.46544616232238]
Previous motion recognition methods have achieved promising performance through tightly coupled spatiotemporal representations.
We propose to decouple and recouple the spatiotemporal representation for RGB-D-based motion recognition.
arXiv Detail & Related papers (2021-12-16T18:59:47Z)
- SCFusion: Real-time Incremental Scene Reconstruction with Semantic Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.