CasualHDRSplat: Robust High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos
- URL: http://arxiv.org/abs/2504.17728v1
- Date: Thu, 24 Apr 2025 16:42:37 GMT
- Title: CasualHDRSplat: Robust High Dynamic Range 3D Gaussian Splatting from Casually Captured Videos
- Authors: Shucheng Gong, Lingzhe Zhao, Wenpu Li, Hong Xie, Yin Zhang, Shiyu Zhao, Peidong Liu
- Abstract summary: Photo-realistic novel view rendering from multi-view images, such as neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS), has garnered widespread attention due to its superior performance. CasualHDRSplat contains a unified differentiable physical imaging model which applies a continuous-time trajectory constraint to the imaging process. Experiments demonstrate that the approach outperforms existing methods in terms of robustness and quality.
- Score: 15.52886867095313
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, photo-realistic novel view synthesis from multi-view images, such as neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS), has garnered widespread attention due to its superior performance. However, most works rely on low dynamic range (LDR) images, which limits the capture of richer scene details. Some prior works have focused on high dynamic range (HDR) scene reconstruction, but they typically require capturing multi-view sharp images with different exposure times at fixed camera positions, which is time-consuming and challenging in practice. For more flexible data acquisition, we propose a one-stage method, \textbf{CasualHDRSplat}, to easily and robustly reconstruct a 3D HDR scene from casually captured videos with auto-exposure enabled, even in the presence of severe motion blur and varying, unknown exposure times. \textbf{CasualHDRSplat} contains a unified differentiable physical imaging model which first applies a continuous-time trajectory constraint to the imaging process, so that we can jointly optimize exposure time, the camera response function (CRF), camera poses, and the sharp 3D HDR scene. Extensive experiments demonstrate that our approach outperforms existing methods in terms of robustness and rendering quality. Our source code will be available at https://github.com/WU-CVGL/CasualHDRSplat
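The abstract describes a differentiable physical imaging model that ties together an unknown exposure window, a continuous-time camera trajectory, and the CRF so that all of them can be optimized jointly against blurry LDR frames. Below is a minimal PyTorch sketch of that kind of forward model, not the authors' implementation: names such as `render_hdr`, `interpolate_pose`, the toy one-parameter CRF, and the midpoint sampling of the exposure window are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a blurry LDR frame is modeled as the CRF
# applied to sharp HDR radiance averaged along the intra-exposure camera trajectory.
import torch

def interpolate_pose(pose_start, pose_end, t):
    """Toy linear pose interpolation in a 6D pose parameterization; t in [0, 1]."""
    return (1.0 - t) * pose_start + t * pose_end

def apply_crf(hdr, log_exposure, gamma):
    """Toy differentiable CRF: scale by the (learnable) exposure, then gamma-compress."""
    scaled = hdr * torch.exp(log_exposure)
    return torch.clamp(scaled, 1e-6, 1.0) ** gamma

def synthesize_blurry_ldr(render_hdr, pose_start, pose_end, log_exposure, gamma, n_samples=8):
    """Average sharp HDR renders over the exposure window, then apply the CRF."""
    hdr_accum = 0.0
    for i in range(n_samples):
        t = (i + 0.5) / n_samples                   # midpoint samples of the exposure window
        pose_t = interpolate_pose(pose_start, pose_end, t)
        hdr_accum = hdr_accum + render_hdr(pose_t)  # differentiable HDR render (stubbed below)
    return apply_crf(hdr_accum / n_samples, log_exposure, gamma)

# Dummy stand-in for a differentiable 3DGS HDR renderer, only to make the sketch runnable.
def render_hdr(pose):
    return torch.sigmoid(pose.sum()) * torch.ones(3, 64, 64)

pose_start = torch.zeros(6, requires_grad=True)
pose_end = torch.full((6,), 0.05, requires_grad=True)    # small intra-exposure motion
log_exposure = torch.tensor(0.0, requires_grad=True)     # unknown exposure time (log-space)
gamma = torch.tensor(0.45, requires_grad=True)           # toy single-parameter CRF

blurry_pred = synthesize_blurry_ldr(render_hdr, pose_start, pose_end, log_exposure, gamma)
loss = torch.nn.functional.mse_loss(blurry_pred, torch.rand(3, 64, 64))
loss.backward()  # gradients reach poses, exposure, and CRF parameters jointly
```

Because the whole forward model is differentiable, a single photometric loss against the captured frames can drive the joint optimization the abstract describes; the actual method additionally constrains the poses with a continuous-time trajectory rather than the simple two-endpoint interpolation used here.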
Related papers
- Deblur4DGS: 4D Gaussian Splatting from Blurry Monocular Video [64.38566659338751]
We propose the first 4D Gaussian Splatting framework to reconstruct a high-quality 4D model from blurry monocular video, named Deblur4DGS. We introduce exposure regularization to avoid trivial solutions, as well as multi-frame and multi-resolution consistency regularizations to alleviate artifacts. Beyond novel-view synthesis, Deblur4DGS can be applied to improve blurry video from multiple perspectives, including deblurring, frame synthesis, and video stabilization.
arXiv Detail & Related papers (2024-12-09T12:02:11Z) - EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [72.60992807941885]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - HDRSplat: Gaussian Splatting for High Dynamic Range 3D Scene Reconstruction from Raw Images [14.332077246864628]
3D Gaussian Splatting (3DGS) has revolutionized the 3D scene reconstruction space, enabling high-fidelity novel view synthesis in real time.
However, prior 3DGS and NeRF-based methods rely on 8-bit tone-mapped Low Dynamic Range images for scene reconstruction.
Our proposed method, HDRSplat, tailors 3DGS to train directly on 14-bit linear raw images captured in near darkness, which preserves the scene's full dynamic range and content.
arXiv Detail & Related papers (2024-07-23T14:21:00Z) - Cinematic Gaussians: Real-Time HDR Radiance Fields with Depth of Field [23.92087253022495]
Radiance field methods represent the state of the art in reconstructing complex scenes from multi-view photos.
Their reliance on a pinhole camera model, assuming all scene elements are in focus in the input images, presents practical challenges and complicates refocusing during novel-view synthesis.
We present a lightweight analytical approach based on 3D Gaussian Splatting that utilizes multi-view LDR images with varying exposure times, apertures, and focus distances as input to reconstruct a high-dynamic-range scene.
arXiv Detail & Related papers (2024-06-11T15:00:24Z) - EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images [36.91327728871551]
3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in 3D scene reconstruction and novel view synthesis. We introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that integrates event streams captured by an event camera to assist in reconstructing high-quality 3D-GS from blurry images.
arXiv Detail & Related papers (2024-05-29T04:59:27Z) - HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting [76.5908492298286]
Existing HDR NVS methods are mainly based on NeRF.
They suffer from long training time and slow inference speed.
We propose a new framework, High Dynamic Range Gaussian Splatting (HDR-GS).
arXiv Detail & Related papers (2024-05-24T00:46:58Z) - Fast High Dynamic Range Radiance Fields for Dynamic Scenes [39.3304365600248]
We propose a dynamic HDR NeRF framework, named HDR-HexPlane, which can learn 3D scenes from dynamic 2D images captured with various exposures.
With the proposed model, high-quality novel-view images at any time point can be rendered with any desired exposure.
arXiv Detail & Related papers (2024-01-11T17:15:16Z) - Self-Supervised High Dynamic Range Imaging with Multi-Exposure Images in Dynamic Scenes [58.66427721308464]
The proposed method is a self-supervised reconstruction approach that only requires dynamic multi-exposure images during training.
It achieves superior results against state-of-the-art self-supervised methods, and comparable performance to supervised ones.
arXiv Detail & Related papers (2023-10-03T07:10:49Z) - Online Overexposed Pixels Hallucination in Videos with Adaptive Reference Frame Selection [90.35085487641773]
Low dynamic range (LDR) cameras cannot deal with wide dynamic range inputs, frequently leading to local overexposure issues.
We present a learning-based system to reduce these artifacts without resorting to complex processing mechanisms.
arXiv Detail & Related papers (2023-08-29T17:40:57Z) - Shakes on a Plane: Unsupervised Depth Estimation from Unstabilized Photography [54.36608424943729]
We show that in a "long-burst", forty-two 12-megapixel RAW frames captured in a two-second sequence, there is enough parallax information from natural hand tremor alone to recover high-quality scene depth.
We devise a test-time optimization approach that fits a neural RGB-D representation to long-burst data and simultaneously estimates scene depth and camera motion.
arXiv Detail & Related papers (2022-12-22T18:54:34Z) - Self-supervised HDR Imaging from Motion and Exposure Cues [14.57046548797279]
We propose a novel self-supervised approach for learnable HDR estimation that alleviates the need for HDR ground-truth labels.
Experimental results show that the HDR models trained using our proposed self-supervision approach achieve performance competitive with those trained under full supervision.
arXiv Detail & Related papers (2022-03-23T10:22:03Z)