SST: Real-time End-to-end Monocular 3D Reconstruction via Sparse
Spatial-Temporal Guidance
- URL: http://arxiv.org/abs/2212.06524v2
- Date: Tue, 25 Jul 2023 02:22:16 GMT
- Title: SST: Real-time End-to-end Monocular 3D Reconstruction via Sparse
Spatial-Temporal Guidance
- Authors: Chenyangguang Zhang, Zhiqiang Lou, Yan Di, Federico Tombari and
Xiangyang Ji
- Abstract summary: Real-time monocular 3D reconstruction is a challenging problem that remains unsolved.
We propose an end-to-end 3D reconstruction network SST, which utilizes sparse estimated points from a visual SLAM system.
SST outperforms all state-of-the-art competitors, whilst keeping a high inference speed at 59 FPS.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-time monocular 3D reconstruction is a challenging problem that remains
unsolved. Although recent end-to-end methods have demonstrated promising
results, tiny structures and geometric boundaries are hardly captured due to
their insufficient supervision neglecting spatial details and oversimplified
feature fusion ignoring temporal cues. To address the problems, we propose an
end-to-end 3D reconstruction network SST, which utilizes Sparse estimated
points from a visual SLAM system as additional Spatial guidance and fuses
Temporal features via a novel cross-modal attention mechanism, achieving more
detailed reconstruction results. We propose a Local Spatial-Temporal Fusion
module to exploit more informative spatial-temporal cues from multi-view color
information and sparse priors, as well as a Global Spatial-Temporal Fusion module
to refine the local TSDF volumes with the world-frame model from coarse to
fine. Extensive experiments on ScanNet and 7-Scenes demonstrate that SST
outperforms all state-of-the-art competitors, whilst keeping a high inference
speed at 59 FPS, enabling real-world applications with real-time requirements.
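The cross-modal attention fusion described above can be illustrated with a minimal sketch: dense image-feature tokens act as queries and attend over features of the sparse SLAM points (keys/values), injecting the sparse spatial prior back into the dense features via a residual connection. This is an illustrative approximation, not the paper's actual architecture; all shapes, projection matrices, and the `cross_modal_attention` function are hypothetical, and the projections are random stand-ins for learned weights.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(img_feats, sparse_feats, d_k=16, seed=0):
    """Fuse dense image features (queries) with sparse SLAM-point
    features (keys/values) via scaled dot-product attention.

    Hypothetical sketch: the projection matrices below would be learned
    in a real network; here they are random for illustration only.
    """
    rng = np.random.default_rng(seed)
    d_img = img_feats.shape[-1]
    d_sp = sparse_feats.shape[-1]
    Wq = rng.standard_normal((d_img, d_k)) / np.sqrt(d_img)
    Wk = rng.standard_normal((d_sp, d_k)) / np.sqrt(d_sp)
    Wv = rng.standard_normal((d_sp, d_img)) / np.sqrt(d_sp)
    Q = img_feats @ Wq              # (N_img, d_k)
    K = sparse_feats @ Wk           # (N_sp, d_k)
    V = sparse_feats @ Wv           # (N_sp, d_img)
    attn = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)  # (N_img, N_sp)
    return img_feats + attn @ V     # residual fusion, same shape as input

img = np.zeros((64, 32))   # 64 dense image-feature tokens, dim 32
pts = np.ones((10, 8))     # 10 sparse SLAM-point tokens, dim 8
fused = cross_modal_attention(img, pts)
print(fused.shape)         # (64, 32)
```

The residual form means the module degrades gracefully when few SLAM points are available: with an empty or uninformative sparse set, the dense features pass through largely unchanged.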
Related papers
- Fast-SAM3D: 3Dfy Anything in Images but Faster [65.17322167628367]
SAM3D enables scalable, open-world 3D reconstruction from complex scenes, yet its deployment is hindered by prohibitive inference latency.
We present Fast-SAM3D, a training-free framework that aligns computation with instantaneous generation complexity.
arXiv Detail & Related papers (2026-02-05T04:27:59Z) - Online Segment Any 3D Thing as Instance Tracking [60.20416622842975]
We reconceptualize online 3D segmentation as an instance tracking problem (AutoSeg3D).
We introduce spatial consistency learning to mitigate the fragmentation problem inherent in Vision Foundation Models.
Our method establishes a new state-of-the-art, surpassing ESAM by 2.8 AP on ScanNet200.
arXiv Detail & Related papers (2025-12-08T14:48:51Z) - ST-GS: Vision-Based 3D Semantic Occupancy Prediction with Spatial-Temporal Gaussian Splatting [21.87807066521776]
3D occupancy prediction is critical for comprehensive scene understanding in vision-centric autonomous driving.
Recent advances have explored utilizing 3D semantic Gaussians to model occupancy while reducing computational overhead.
We propose a novel Spatial-Temporal Gaussian Splatting (ST-GS) framework to enhance both spatial and temporal modeling.
arXiv Detail & Related papers (2025-09-20T06:36:30Z) - DVLO4D: Deep Visual-Lidar Odometry with Sparse Spatial-temporal Fusion [28.146811420532455]
We introduce DVLO4D, a novel visual-LiDAR odometry framework that leverages sparse spatial-temporal fusion to enhance accuracy and robustness.
Our method has high efficiency, with an inference time of 82 ms, possessing the potential for real-time deployment.
arXiv Detail & Related papers (2025-09-07T11:43:11Z) - UST-SSM: Unified Spatio-Temporal State Space Models for Point Cloud Video Modeling [53.199942923818206]
Point cloud videos capture 3D motion while reducing the effects of lighting and viewpoint variations, making them highly effective for recognizing subtle and continuous human actions.
Selective State Space Models (SSMs) have shown good performance in sequence modeling with linear complexity.
We propose the Unified Spatio-Temporal State Space Model (UST-SSM), which extends the latest advancements in SSMs to point cloud videos.
arXiv Detail & Related papers (2025-08-20T10:46:01Z) - STDR: Spatio-Temporal Decoupling for Real-Time Dynamic Scene Rendering [15.873329633980015]
Existing 3DGS-based methods for dynamic reconstruction often suffer from spatio-temporal coupling.
We propose STDR (Spatio-Temporal Decoupling for Real-time rendering), a plug-and-play module that learns spatio-temporal probability distributions for each scene.
arXiv Detail & Related papers (2025-05-28T14:26:41Z) - Breaking Down Monocular Ambiguity: Exploiting Temporal Evolution for 3D Lane Detection [79.98605061363999]
Monocular 3D lane detection aims to estimate the 3D position of lanes from frontal-view (FV) images.
Existing methods are constrained by the inherent ambiguity of single-frame input.
We propose to unlock the rich information embedded in the temporal evolution of the scene as the vehicle moves.
arXiv Detail & Related papers (2025-04-29T08:10:17Z) - Rethinking Temporal Fusion with a Unified Gradient Descent View for 3D Semantic Occupancy Prediction [62.69089767730514]
We present GDFusion, a temporal fusion method for vision-based 3D semantic occupancy prediction (VisionOcc)
It opens up the underexplored aspects of temporal fusion within the VisionOcc framework, focusing on both temporal cues and fusion strategies.
arXiv Detail & Related papers (2025-04-17T14:05:33Z) - Semantic-Supervised Spatial-Temporal Fusion for LiDAR-based 3D Object Detection [22.890432295751086]
LiDAR-based 3D object detection presents significant challenges due to the inherent sparsity of LiDAR points.
We propose a novel fusion module to relieve the spatial misalignment caused by object motion over time.
We also propose a Semantic Injection method to enrich the sparse LiDAR data by injecting point-wise semantic labels.
arXiv Detail & Related papers (2025-03-13T17:30:20Z) - A Staged Deep Learning Approach to Spatial Refinement in 3D Temporal Atmospheric Transport [0.0]
We introduce the Dual-Stage Temporal Three-dimensional Super-resolution (DST3D-UNet-SR) model for plume dispersion prediction.
It is composed of two sequential modules: the temporal module (TM), which predicts the transient evolution of a plume in complex terrain from low-resolution temporal data, and the spatial refinement module (SRM), which subsequently enhances the spatial resolution of the predictions.
arXiv Detail & Related papers (2024-12-14T19:43:48Z) - MambaDETR: Query-based Temporal Modeling using State Space Model for Multi-View 3D Object Detection [18.13821223763173]
We propose a novel method called MambaDETR, whose main idea is to implement temporal fusion in the efficient state space.
On the standard nuScenes benchmark, our proposed MambaDETR achieves remarkable result in the 3D object detection task.
arXiv Detail & Related papers (2024-11-20T14:47:18Z) - Enhanced Spatio-Temporal Context for Temporally Consistent Robust 3D
Human Motion Recovery from Monocular Videos [5.258814754543826]
We propose a novel method for temporally consistent motion estimation from a monocular video.
Instead of using generic ResNet-like features, our method uses a body-aware feature representation and an independent per-frame pose.
Our method attains significantly lower acceleration error and outperforms the existing state-of-the-art methods.
arXiv Detail & Related papers (2023-11-20T10:53:59Z) - Nothing Stands Still: A Spatiotemporal Benchmark on 3D Point Cloud
Registration Under Large Geometric and Temporal Change [86.44429778015657]
Building 3D geometric maps of man-made spaces is a fundamental task in computer vision and robotics.
The Nothing Stands Still (NSS) benchmark focuses on the spatiotemporal registration of 3D scenes undergoing large spatial and temporal change.
As part of NSS, we introduce a dataset of 3D point clouds recurrently captured in large-scale building indoor environments that are under construction or renovation.
arXiv Detail & Related papers (2023-11-15T20:09:29Z) - GO-SLAM: Global Optimization for Consistent 3D Instant Reconstruction [45.49960166785063]
GO-SLAM is a deep-learning-based dense visual SLAM framework globally optimizing poses and 3D reconstruction in real-time.
Results on various synthetic and real-world datasets demonstrate that GO-SLAM outperforms state-of-the-art approaches at tracking robustness and reconstruction accuracy.
arXiv Detail & Related papers (2023-09-05T17:59:58Z) - Fast Monocular Scene Reconstruction with Global-Sparse Local-Dense Grids [84.90863397388776]
We propose to directly use signed distance function (SDF) in sparse voxel block grids for fast and accurate scene reconstruction without MLPs.
Our globally sparse and locally dense data structure exploits surfaces' spatial sparsity, enables cache-friendly queries, and allows direct extensions to multi-modal data.
Experiments show that our approach is 10x faster in training and 100x faster in rendering while achieving comparable accuracy to state-of-the-art neural implicit methods.
arXiv Detail & Related papers (2023-05-22T16:50:19Z) - Local-Global Temporal Difference Learning for Satellite Video
Super-Resolution [55.69322525367221]
We propose to exploit the well-defined temporal difference for efficient and effective temporal compensation.
To fully utilize the local and global temporal information within frames, we systematically modeled the short-term and long-term temporal discrepancies.
Rigorous objective and subjective evaluations conducted across five mainstream video satellites demonstrate that our method performs favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2023-04-10T07:04:40Z) - Spatio-temporal Tendency Reasoning for Human Body Pose and Shape
Estimation from Videos [10.50306784245168]
We present a spatio-temporal tendency reasoning (STR) network for recovering human body pose and shape from videos.
Our STR aims to learn accurate spatial motion sequences in an unconstrained environment.
Our STR remains competitive with the state-of-the-art on three datasets.
arXiv Detail & Related papers (2022-10-07T16:09:07Z) - Decoupling and Recoupling Spatiotemporal Representation for RGB-D-based
Motion Recognition [62.46544616232238]
Previous motion recognition methods have achieved promising performance through the tightly coupled multimodal spatiotemporal representation.
We propose to decouple and recouple spatiotemporal representation for RGB-D-based motion recognition.
arXiv Detail & Related papers (2021-12-16T18:59:47Z) - SCFusion: Real-time Incremental Scene Reconstruction with Semantic
Completion [86.77318031029404]
We propose a framework that performs scene reconstruction and semantic scene completion jointly in an incremental and real-time manner.
Our framework relies on a novel neural architecture designed to process occupancy maps and leverages voxel states to accurately and efficiently fuse semantic completion with the 3D global model.
arXiv Detail & Related papers (2020-10-26T15:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.