TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update
- URL: http://arxiv.org/abs/2507.11069v2
- Date: Wed, 16 Jul 2025 12:02:03 GMT
- Title: TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update
- Authors: Jeongyun Kim, Seunghoon Jeong, Giseop Kim, Myung-Hwan Jeon, Eunji Jun, Ayoung Kim
- Abstract summary: TRAN-D is a novel 2D Gaussian Splatting-based depth reconstruction method for transparent objects. We mitigate artifacts with an object-aware loss that places Gaussians in obscured regions. We incorporate a physics-based simulation that refines the reconstruction in just a few seconds.
- Score: 14.360210515795904
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Understanding the 3D geometry of transparent objects from RGB images is challenging due to their inherent physical properties, such as reflection and refraction. To address these difficulties, especially in scenarios with sparse views and dynamic environments, we introduce TRAN-D, a novel 2D Gaussian Splatting-based depth reconstruction method for transparent objects. Our key insight lies in separating transparent objects from the background, enabling focused optimization of the Gaussians corresponding to each object. We mitigate artifacts with an object-aware loss that places Gaussians in obscured regions, ensuring coverage of invisible surfaces while reducing overfitting. Furthermore, we incorporate a physics-based simulation that refines the reconstruction in just a few seconds, effectively handling object removal and chain-reaction movement of the remaining objects without the need for rescanning. TRAN-D is evaluated on both synthetic and real-world sequences, where it consistently demonstrates robust improvements over existing GS-based state-of-the-art methods. Compared with baselines, TRAN-D reduces the mean absolute error by over 39% on the synthetic TRansPose sequences. Furthermore, despite being updated using only one image, TRAN-D reaches a δ < 2.5 cm accuracy of 48.46%, over 1.5 times that of baselines that use six images. Code and more results are available at https://jeongyun0609.github.io/TRAN-D/.
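The abstract reports two depth-evaluation numbers: mean absolute error (MAE) and the fraction of pixels whose depth error is below a threshold (the "δ < 2.5 cm" accuracy). As a minimal sketch of how such metrics are typically computed — the exact masking and unit conventions used in the paper are assumptions here — the computation might look like:

```python
import numpy as np

def depth_metrics(pred, gt, delta_thresh=0.025, valid_mask=None):
    """Illustrative depth metrics (units assumed to be meters):
    - MAE: mean absolute error over valid pixels
    - delta accuracy: fraction of valid pixels with |pred - gt| < delta_thresh
    The valid-pixel convention (gt > 0) is a common assumption, not from the paper.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    if valid_mask is None:
        valid_mask = gt > 0  # zero ground-truth depth treated as invalid
    err = np.abs(pred[valid_mask] - gt[valid_mask])
    mae = err.mean()
    delta_acc = (err < delta_thresh).mean()
    return mae, delta_acc
```

A higher delta accuracy at a fixed threshold (here 2.5 cm) means more of the reconstructed surface falls within tolerance, which is why the paper reports it alongside MAE.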
Related papers
- Low-Frequency First: Eliminating Floating Artifacts in 3D Gaussian Splatting [22.626200397052862]
3D Gaussian Splatting (3DGS) is a powerful representation for 3D reconstruction. 3DGS often produces floating artifacts, which are erroneous structures detached from the actual geometry. We propose EFA-GS, which selectively expands under-optimized Gaussians to prioritize accurate low-frequency learning.
arXiv Detail & Related papers (2025-08-04T15:03:56Z) - GS-2DGS: Geometrically Supervised 2DGS for Reflective Object Reconstruction [51.99776072246151]
We propose a novel reconstruction method called GS-2DGS for reflective objects based on 2D Gaussian Splatting (2DGS). Experimental results on synthetic and real datasets demonstrate that our method significantly outperforms Gaussian-based techniques in terms of reconstruction and relighting.
arXiv Detail & Related papers (2025-06-16T05:40:16Z) - RobustSplat: Decoupling Densification and Dynamics for Transient-Free 3DGS [79.15416002879239]
3D Gaussian Splatting has gained significant attention for its real-time, photo-realistic rendering in novel-view synthesis and 3D modeling. Existing methods struggle with accurately modeling scenes affected by transient objects, leading to artifacts in the rendered images. We propose RobustSplat, a robust solution based on two critical designs.
arXiv Detail & Related papers (2025-06-03T11:13:48Z) - TransparentGS: Fast Inverse Rendering of Transparent Objects with Gaussians [35.444290579981455]
We propose TransparentGS, a fast inverse rendering pipeline for transparent objects based on 3D-GS. We leverage Gaussian light field probes (GaussProbe) to encode both ambient light and nearby contents in a unified framework. Experiments demonstrate the speed and accuracy of our approach in recovering transparent objects from complex environments.
arXiv Detail & Related papers (2025-04-26T02:15:03Z) - TSGS: Improving Gaussian Splatting for Transparent Surface Reconstruction via Normal and De-lighting Priors [39.60777069381983]
We introduce Transparent Surface Gaussian Splatting (TSGS), a new framework that separates geometry learning from appearance refinement. In the geometry learning stage, TSGS focuses on geometry by using specular-suppressed inputs to accurately represent surfaces. To enhance depth inference, TSGS employs a first-surface depth extraction method.
arXiv Detail & Related papers (2025-04-17T10:00:09Z) - TransDiff: Diffusion-Based Method for Manipulating Transparent Objects Using a Single RGB-D Image [9.242427101416226]
We propose a single-view RGB-D-based depth completion framework, TransDiff, to achieve material-agnostic object grasping in desktop scenarios. We leverage features extracted from RGB images, including semantic segmentation, edge maps, and normal maps, to condition the depth map generation process. Our method learns an iterative denoising process that transforms a random depth distribution into a depth map, guided by initially refined depth information.
arXiv Detail & Related papers (2025-03-17T03:29:37Z) - GSGTrack: Gaussian Splatting-Guided Object Pose Tracking from RGB Videos [18.90495041083675]
We introduce GSGTrack, a novel RGB-based pose tracking framework. We propose an object silhouette loss to address the issue of pixel-wise loss being overly sensitive to pose noise during tracking. Experiments on the OnePose and HO3D datasets demonstrate the effectiveness of GSGTrack in both 6DoF pose tracking and object reconstruction.
arXiv Detail & Related papers (2024-12-03T08:38:44Z) - T-3DGS: Removing Transient Objects for 3D Scene Reconstruction [83.05271859398779]
Transient objects in video sequences can significantly degrade the quality of 3D scene reconstructions. We propose T-3DGS, a novel framework that robustly filters out transient distractors during 3D reconstruction using Gaussian Splatting.
arXiv Detail & Related papers (2024-11-29T07:45:24Z) - DeSiRe-GS: 4D Street Gaussians for Static-Dynamic Decomposition and Surface Reconstruction for Urban Driving Scenes [71.61083731844282]
We present DeSiRe-GS, a self-supervised Gaussian splatting representation. It enables effective static-dynamic decomposition and high-fidelity surface reconstruction in complex driving scenarios.
arXiv Detail & Related papers (2024-11-18T05:49:16Z) - CityGaussianV2: Efficient and Geometrically Accurate Reconstruction for Large-Scale Scenes [53.107474952492396]
CityGaussianV2 is a novel approach for large-scale scene reconstruction. We implement a decomposed-gradient-based densification and depth regression technique to eliminate blurry artifacts and accelerate convergence. Our method strikes a promising balance between visual quality, geometric accuracy, and storage and training costs.
arXiv Detail & Related papers (2024-11-01T17:59:31Z) - SMORE: Simultaneous Map and Object REconstruction [66.66729715211642]
We present a method for dynamic surface reconstruction of large-scale urban scenes from LiDAR. We take a holistic perspective and optimize a compositional model of a dynamic scene that decomposes the world into rigidly-moving objects and the background.
arXiv Detail & Related papers (2024-06-19T23:53:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.