4D-LRM: Large Space-Time Reconstruction Model From and To Any View at Any Time
- URL: http://arxiv.org/abs/2506.18890v1
- Date: Mon, 23 Jun 2025 17:57:47 GMT
- Title: 4D-LRM: Large Space-Time Reconstruction Model From and To Any View at Any Time
- Authors: Ziqiao Ma, Xuweiyi Chen, Shoubin Yu, Sai Bi, Kai Zhang, Chen Ziwen, Sihan Xu, Jianing Yang, Zexiang Xu, Kalyan Sunkavalli, Mohit Bansal, Joyce Chai, Hao Tan
- Abstract summary: 4D-LRM is the first large-scale 4D reconstruction model that takes input from unconstrained views and timestamps and renders arbitrary view-time combinations. It learns a unified space-time representation and directly predicts per-pixel 4D Gaussian primitives from posed image tokens across time. It reconstructs 24-frame sequences in one forward pass in less than 1.5 seconds on a single A100 GPU.
- Score: 74.07107064085409
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Can we scale 4D pretraining to learn general space-time representations that reconstruct an object from a few views at some times to any view at any time? We provide an affirmative answer with 4D-LRM, the first large-scale 4D reconstruction model that takes input from unconstrained views and timestamps and renders arbitrary novel view-time combinations. Unlike prior 4D approaches, e.g., optimization-based, geometry-based, or generative, that struggle with efficiency, generalization, or faithfulness, 4D-LRM learns a unified space-time representation and directly predicts per-pixel 4D Gaussian primitives from posed image tokens across time, enabling fast, high-quality rendering at, in principle, infinite frame rate. Our results demonstrate that scaling spatiotemporal pretraining enables accurate and efficient 4D reconstruction. We show that 4D-LRM generalizes to novel objects, interpolates across time, and handles diverse camera setups. It reconstructs 24-frame sequences in one forward pass in less than 1.5 seconds on a single A100 GPU.
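To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of the feed-forward pattern the abstract describes: a transformer backbone consumes posed image tokens pooled over all input views and timestamps, and a lightweight head emits per-token 4D Gaussian parameters. The class names, the 16-channel parameter layout, and all dimensions are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch (not the authors' code): a feed-forward model that maps
# posed image tokens across views and times to per-token 4D Gaussian parameters.
import torch
import torch.nn as nn

class Gaussian4DHead(nn.Module):
    """Predicts one 4D Gaussian per input token.

    Assumed parameter layout (16 channels): 3 (xyz center) + 1 (time center)
    + 3 (spatial scale) + 1 (temporal scale) + 4 (rotation quaternion)
    + 1 (opacity) + 3 (RGB).
    """
    def __init__(self, d_model: int = 512, n_params: int = 16):
        super().__init__()
        self.proj = nn.Linear(d_model, n_params)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.proj(tokens)  # (B, N_tokens, 16)

class Toy4DLRM(nn.Module):
    def __init__(self, d_model: int = 512, depth: int = 6, n_heads: int = 8):
        super().__init__()
        # Posed image tokens = patch features fused with camera-ray and timestamp
        # embeddings, concatenated over all input views and times (assumed here).
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        self.head = Gaussian4DHead(d_model)

    def forward(self, posed_tokens: torch.Tensor) -> torch.Tensor:
        # posed_tokens: (B, V*T*P, d_model) for V views, T timestamps, P patches.
        return self.head(self.backbone(posed_tokens))

# Toy usage: 2 views, 4 timestamps, 64 patches each (small numbers for a demo).
model = Toy4DLRM()
gaussians = model(torch.randn(1, 2 * 4 * 64, 512))
print(gaussians.shape)  # torch.Size([1, 512, 16])
```

In the actual pipeline, the predicted Gaussians would then be splatted to render any queried view at any queried time.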
Related papers
- Streaming 4D Visual Geometry Transformer [63.99937807085461]
We propose a streaming 4D visual geometry transformer to process the input sequence in an online manner. We use temporal causal attention and cache the historical keys and values as implicit memory to enable efficient streaming long-term 4D reconstruction. Experiments on various 4D geometry perception benchmarks demonstrate that our model increases the inference speed in online scenarios.
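For intuition, here is a toy sketch (assumed PyTorch, not the paper's implementation) of temporal causal attention with cached keys and values, the streaming mechanism described above:

```python
# Hypothetical sketch: causal temporal attention with a key/value cache, so each
# new frame attends to all previously seen frames without recomputing them.
import torch
import torch.nn.functional as F

class StreamingCausalAttention:
    def __init__(self, d_model: int = 256):
        self.wq = torch.nn.Linear(d_model, d_model)
        self.wk = torch.nn.Linear(d_model, d_model)
        self.wv = torch.nn.Linear(d_model, d_model)
        self.k_cache = []  # implicit memory of past frames
        self.v_cache = []

    def step(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (n_tokens, d_model) for the newest frame only.
        q = self.wq(frame_tokens)
        self.k_cache.append(self.wk(frame_tokens))
        self.v_cache.append(self.wv(frame_tokens))
        k = torch.cat(self.k_cache, dim=0)  # keys/values for all frames so far
        v = torch.cat(self.v_cache, dim=0)
        attn = F.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v  # updated tokens for the current frame

# Frames arrive one at a time; the cache grows with the sequence length.
layer = StreamingCausalAttention()
for _ in range(3):
    out = layer.step(torch.randn(196, 256))
print(out.shape)  # torch.Size([196, 256])
```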
arXiv Detail & Related papers (2025-07-15T17:59:57Z)
- 4Real-Video-V2: Fused View-Time Attention and Feedforward Reconstruction for 4D Scene Generation [66.20991603309054]
We propose the first framework capable of computing a 4D spatio-temporal grid of video frames and 3D Gaussian particles for each time step using a feed-forward architecture. In the first part, we analyze current 4D video diffusion architectures that perform spatial and temporal attention either sequentially or in parallel within a two-stream design. In the second part, we extend existing 3D reconstruction algorithms by introducing a Gaussian head, a camera token replacement algorithm, and additional dynamic layers and training.
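As a toy illustration of the two attention arrangements mentioned above (not code from the paper), the sketch below contrasts applying spatial and temporal self-attention sequentially versus in parallel over a grid of frame tokens; module sizes are arbitrary assumptions.

```python
# Hypothetical illustration: sequential vs. parallel spatial/temporal attention
# over a (frames, tokens, channels) grid of video tokens.
import torch
import torch.nn as nn

d = 128
spatial_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
temporal_attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

def sequential_block(x):
    # x: (frames, tokens, d). Spatial attention within each frame,
    # then temporal attention across frames for each token position.
    x, _ = spatial_attn(x, x, x)
    xt = x.transpose(0, 1)          # (tokens, frames, d)
    xt, _ = temporal_attn(xt, xt, xt)
    return xt.transpose(0, 1)

def parallel_block(x):
    # Both attentions see the same input; their outputs are summed.
    s, _ = spatial_attn(x, x, x)
    xt = x.transpose(0, 1)
    t, _ = temporal_attn(xt, xt, xt)
    return s + t.transpose(0, 1)

x = torch.randn(8, 196, d)          # 8 frames, 196 tokens each
print(sequential_block(x).shape, parallel_block(x).shape)
```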
arXiv Detail & Related papers (2025-06-18T23:44:59Z)
- Free4D: Tuning-free 4D Scene Generation with Spatial-Temporal Consistency [49.875459658889355]
Free4D is a tuning-free framework for 4D scene generation from a single image. Our key insight is to distill pre-trained foundation models for consistent 4D scene representation. The resulting 4D representation enables real-time, controllable rendering.
arXiv Detail & Related papers (2025-03-26T17:59:44Z)
- WideRange4D: Enabling High-Quality 4D Reconstruction with Wide-Range Movements and Scenes [65.76371201992654]
We propose a novel 4D reconstruction benchmark, WideRange4D. The benchmark includes rich 4D scene data with large spatial variations, allowing for a more comprehensive evaluation of 4D generation methods. We also introduce a new 4D reconstruction method, Progress4D, which generates stable and high-quality 4D results across various complex 4D scene reconstruction tasks.
arXiv Detail & Related papers (2025-03-17T17:58:18Z)
- 4D Gaussian Splatting: Modeling Dynamic Scenes with Native 4D Primitives [116.2042238179433]
In this paper, we frame dynamic scenes as unconstrained 4D volume learning problems. We represent a target dynamic scene using a collection of 4D Gaussian primitives with explicit geometry and appearance features. This approach can capture relevant information in space and time by fitting the underlying photorealistic spatio-temporal volume. Notably, our 4DGS model is the first solution that supports real-time rendering of high-resolution novel views for complex dynamic scenes.
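Below is an illustrative (assumed) layout of such a native 4D primitive: a Gaussian with a joint space-time mean and covariance plus appearance attributes, sliced into a 3D Gaussian at a query time by standard Gaussian conditioning. The exact parameterization in the paper may differ.

```python
# Hypothetical layout of a native 4D Gaussian primitive: a center and covariance
# defined jointly over space and time, plus appearance attributes.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian4D:
    mean_xyzt: np.ndarray   # (4,) center in space-time
    cov_4d: np.ndarray      # (4, 4) joint spatial-temporal covariance
    opacity: float
    sh_coeffs: np.ndarray   # appearance (e.g. spherical-harmonic colors)

    def condition_on_time(self, t: float):
        """Slice the 4D Gaussian at time t, giving a 3D Gaussian to splat.

        Standard conditioning of a multivariate Gaussian on its last coordinate.
        """
        mu_x, mu_t = self.mean_xyzt[:3], self.mean_xyzt[3]
        S_xx = self.cov_4d[:3, :3]
        S_xt = self.cov_4d[:3, 3:]
        S_tt = self.cov_4d[3, 3]
        mean_3d = mu_x + (S_xt[:, 0] / S_tt) * (t - mu_t)
        cov_3d = S_xx - S_xt @ S_xt.T / S_tt
        # Effective opacity decays away from the Gaussian's temporal center,
        # following the (unnormalized) temporal marginal density.
        alpha_t = self.opacity * np.exp(-0.5 * (t - mu_t) ** 2 / S_tt)
        return mean_3d, cov_3d, alpha_t

g = Gaussian4D(np.zeros(4), np.eye(4), 0.9, np.random.rand(16, 3))
print(g.condition_on_time(0.5))
```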
arXiv Detail & Related papers (2024-12-30T05:30:26Z)
- Controlling Space and Time with Diffusion Models [34.7002868116714]
We present 4DiM, a cascaded diffusion model for 4D novel view synthesis (NVS). We enable training on a mixture of 3D (with camera pose), 4D (pose+time), and video (time but no pose) data. 4DiM is the first-ever NVS method with intuitive metric-scale camera pose control.
arXiv Detail & Related papers (2024-07-10T17:23:33Z)
- Efficient4D: Fast Dynamic 3D Object Generation from a Single-view Video [42.10482273572879]
We propose an efficient video-to-4D object generation framework called Efficient4D. It generates high-quality spacetime-consistent images under different camera views, and then uses them as labeled data. Experiments on both synthetic and real videos show that Efficient4D offers a remarkable 10-fold increase in speed.
arXiv Detail & Related papers (2024-01-16T18:58:36Z)
- Tensor4D: Efficient Neural 4D Decomposition for High-fidelity Dynamic Reconstruction and Rendering [31.928844354349117]
We propose an efficient 4D tensor decomposition method for dynamic scenes.
We show that our method is able to achieve high-quality dynamic reconstruction and rendering from sparse-view cameras or even a monocular camera.
The code and dataset will be released at liuyebin.com/tensor4d/tensor4d.html.
arXiv Detail & Related papers (2022-11-21T16:04:45Z)
- NeRFPlayer: A Streamable Dynamic Scene Representation with Decomposed Neural Radiance Fields [99.57774680640581]
We present an efficient framework capable of fast reconstruction, compact modeling, and streamable rendering.
We propose to decompose the 4D space according to temporal characteristics. Points in the 4D space are associated with probabilities of belonging to three categories: static, deforming, and new areas.
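A minimal sketch of the decomposition idea (assumed PyTorch; the branch fields here are placeholders, not NeRFPlayer's architecture): a small network assigns each 4D point soft probabilities of being static, deforming, or newly appearing, and the output is a probability-weighted blend of three branch fields.

```python
# Hypothetical sketch: per-point probabilities over {static, deforming, new}
# gate three separate fields, and the blended output is rendered.
import torch
import torch.nn as nn

class DecomposedField(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.classifier = nn.Sequential(        # (x, y, z, t) -> 3 logits
            nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        # Placeholder branches; each real branch would be its own representation.
        self.static_field = nn.Linear(3, 4)     # uses (x, y, z) only
        self.deform_field = nn.Linear(4, 4)     # uses (x, y, z, t)
        self.new_field = nn.Linear(4, 4)

    def forward(self, xyzt: torch.Tensor) -> torch.Tensor:
        p = torch.softmax(self.classifier(xyzt), dim=-1)   # (N, 3)
        outs = torch.stack([
            self.static_field(xyzt[:, :3]),
            self.deform_field(xyzt),
            self.new_field(xyzt),
        ], dim=1)                                           # (N, 3, rgb+sigma)
        return (p.unsqueeze(-1) * outs).sum(dim=1)          # blended (N, 4)

field = DecomposedField()
print(field(torch.rand(1024, 4)).shape)  # torch.Size([1024, 4])
```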
arXiv Detail & Related papers (2022-10-28T07:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above (including all of its contents) and is not responsible for any consequences of its use.