Animated 3DGS Avatars in Diverse Scenes with Consistent Lighting and Shadows
- URL: http://arxiv.org/abs/2601.01660v1
- Date: Sun, 04 Jan 2026 20:42:06 GMT
- Title: Animated 3DGS Avatars in Diverse Scenes with Consistent Lighting and Shadows
- Authors: Aymen Mir, Riza Alp Guler, Jian Wang, Gerard Pons-Moll, Bing Zhou
- Abstract summary: We present a method for consistent lighting and shadows when animated 3D Gaussian Splatting (3DGS) avatars interact with 3DGS scenes or with dynamic objects inserted into otherwise static scenes. Our key contribution is Deep Gaussian Shadow Maps (DGSM), a modern analogue of the classical shadow mapping algorithm tailored to the volumetric 3DGS representation. We demonstrate environment-consistent lighting for avatars from AvatarX and ActorsHQ, composited into ScanNet++, DL3DV, and SuperSplat scenes, and show interactions with inserted objects.
- Score: 23.490603624391095
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a method for consistent lighting and shadows when animated 3D Gaussian Splatting (3DGS) avatars interact with 3DGS scenes or with dynamic objects inserted into otherwise static scenes. Our key contribution is Deep Gaussian Shadow Maps (DGSM), a modern analogue of the classical shadow mapping algorithm tailored to the volumetric 3DGS representation. Building on the classic deep shadow mapping idea, we show that 3DGS admits closed-form light accumulation along light rays, enabling volumetric shadow computation without meshing. For each estimated light, we tabulate transmittance over concentric radial shells and store them in octahedral atlases, which modern GPUs can sample in real time per query to attenuate affected scene Gaussians and thus cast and receive shadows consistently. To relight moving avatars, we approximate the local environment illumination with HDRI probes represented in a spherical harmonic (SH) basis and apply a fast per-Gaussian radiance transfer, avoiding explicit BRDF estimation or offline optimization. We demonstrate environment-consistent lighting for avatars from AvatarX and ActorsHQ, composited into ScanNet++, DL3DV, and SuperSplat scenes, and show interactions with inserted objects. Across single- and multi-avatar settings, DGSM and SH relighting operate fully in the volumetric 3DGS representation, yielding coherent shadows and relighting while avoiding meshing.
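The closed-form light accumulation claimed in the abstract follows from completing the square in the Gaussian exponent: the line integral of an anisotropic 3D Gaussian density along a ray reduces to a scaled difference of error functions. Below is a minimal sketch of that identity, assuming a Beer-Lambert volumetric model with a per-Gaussian density scale `sigma`; the function names and the `gaussians` layout are illustrative, not the paper's API.

```python
import numpy as np
from math import erf, exp, pi, sqrt

def gaussian_optical_depth(o, d, mu, cov_inv, sigma, t0=0.0, t1=np.inf):
    """Closed-form optical depth of one anisotropic 3D Gaussian along a ray.

    Integrates sigma * exp(-0.5 (o + t*d - mu)^T cov_inv (o + t*d - mu))
    over t in [t0, t1] by completing the square, which yields a scaled
    difference of error functions (no sampling, no meshing).
    """
    delta = o - mu
    a = d @ cov_inv @ d          # quadratic coefficient, > 0 for d != 0
    b = d @ cov_inv @ delta      # linear coefficient
    c = delta @ cov_inv @ delta  # constant term
    peak = exp(-0.5 * (c - b * b / a))  # density at the ray's closest approach
    s = sqrt(a / 2.0)
    e1 = 1.0 if np.isinf(t1) else erf(s * (t1 + b / a))
    e0 = erf(s * (t0 + b / a))
    return sigma * peak * sqrt(pi / (2.0 * a)) * (e1 - e0)

def transmittance_along_ray(o, d, gaussians, t0=0.0, t1=np.inf):
    """Beer-Lambert transmittance exp(-tau) over a list of (mu, cov_inv, sigma)."""
    tau = sum(gaussian_optical_depth(o, d, mu, cov_inv, sigma, t0, t1)
              for mu, cov_inv, sigma in gaussians)
    return exp(-tau)
```

Because the integral is exact per Gaussian, transmittance toward a light can be tabulated once per shell rather than recomputed by stochastic ray marching.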
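One plausible reading of the shell-atlas lookup: each of the S concentric shells around a light stores transmittance (as computed above) in an octahedral parameterization of direction. The sketch below uses a nearest-texel fetch where a GPU would use hardware filtering; `octa_uv` and `sample_shadow_atlas` are hypothetical names.

```python
import numpy as np

def _sign(x):
    # sign that maps 0 to +1, as the octahedral fold requires
    return np.where(x >= 0.0, 1.0, -1.0)

def octa_uv(v):
    """Octahedral map: unit direction -> uv in [0, 1]^2."""
    v = v / np.abs(v).sum()
    uv = v[:2]
    if v[2] < 0.0:  # fold the lower hemisphere into the square's corners
        uv = (1.0 - np.abs(uv[::-1])) * _sign(uv)
    return uv * 0.5 + 0.5

def sample_shadow_atlas(atlas, shell_radii, light_pos, x):
    """Transmittance toward `light_pos` at query point `x`.

    atlas:       (S, H, W) transmittance tabulated on S concentric shells
    shell_radii: (S,) increasing shell radii around the light
    Returns a value in [0, 1] used to attenuate the queried Gaussian.
    """
    offset = x - light_pos
    r = np.linalg.norm(offset)
    s = min(int(np.searchsorted(shell_radii, r)), atlas.shape[0] - 1)
    u, v = octa_uv(offset / r)
    H, W = atlas.shape[1:]
    return atlas[s, min(int(v * H), H - 1), min(int(u * W), W - 1)]
```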
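The SH relighting step can be illustrated with the standard degree-2 irradiance convolution of Ramamoorthi and Hanrahan. The paper avoids explicit BRDF estimation, so the Lambertian albedo below is a stand-in assumption to keep the sketch concrete; `relight_gaussian` is not the authors' function.

```python
import numpy as np

def sh_basis(n):
    """Real spherical harmonics up to degree 2 at unit direction n."""
    x, y, z = n
    return np.array([
        0.282095,
        0.488603 * y, 0.488603 * z, 0.488603 * x,
        1.092548 * x * y, 1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])

# Cosine-lobe convolution weights per SH band (irradiance environment maps).
A = np.array([np.pi] + [2.0 * np.pi / 3.0] * 3 + [np.pi / 4.0] * 5)

def relight_gaussian(albedo_rgb, normal, env_sh):
    """Diffuse radiance for one Gaussian under an SH environment probe.

    env_sh: (9, 3) degree-2 SH coefficients of the HDRI probe's radiance.
    """
    irradiance = sh_basis(normal) @ (A[:, None] * env_sh)    # (3,) RGB
    return albedo_rgb * np.maximum(irradiance, 0.0) / np.pi  # Lambertian BRDF
```

Multiplying this relit color by the shadow-atlas transmittance gives a Gaussian that both receives environment light and sits in consistent shadow.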
Related papers
- Joint Shadow Generation and Relighting via Light-Geometry Interaction Maps [51.82696819319878]
We propose Light-Geometry Interaction (LGI) maps, a novel representation that encodes light-aware occlusion from monocular depth. LGI maps capture essential light-shadow interactions reliably and accurately, computed from off-the-shelf 2.5D depth map predictions. By embedding LGI into a bridge-matching generative backbone, we reduce ambiguity and enforce physically consistent light-shadow reasoning.
arXiv Detail & Related papers (2026-02-25T11:47:26Z)
- ShadowGS: Shadow-Aware 3D Gaussian Splatting for Satellite Imagery [7.33738775121714]
We propose ShadowGS, a novel framework based on 3DGS. It precisely models geometrically consistent shadows while maintaining efficient rendering. It exhibits robust performance across various settings, including RGB, pansharpened, and sparse-view satellite inputs.
arXiv Detail & Related papers (2026-01-04T06:33:59Z)
- ComGS: Efficient 3D Object-Scene Composition via Surface Octahedral Probes [46.83857963152283]
Gaussian Splatting (GS) enables immersive rendering, but realistic 3D object-scene composition remains challenging. We propose ComGS, a novel 3D object-scene composition framework. Our method achieves high-quality, real-time rendering at around 28 FPS, produces visually harmonious results with vivid shadows, and requires only 36 seconds for editing.
arXiv Detail & Related papers (2025-10-09T03:10:41Z)
- REdiSplats: Ray Tracing for Editable Gaussian Splatting [0.0]
We introduce REdiSplats, which employs ray tracing and a mesh-based representation of flat 3D Gaussians. In practice, we model the scene using flat Gaussian distributions parameterized by the mesh. We can render our models using 3D tools such as Blender or Nvdiffrast, which opens the possibility of integrating them with all existing 3D graphics techniques.
arXiv Detail & Related papers (2025-03-15T22:42:21Z)
- 3D Convex Splatting: Radiance Field Rendering with 3D Smooth Convexes [87.01284850604495]
We introduce 3D Convex Splatting (3DCS), which leverages 3D smooth convexes as primitives for modeling geometrically-meaningful radiance fields from multiview images. 3DCS achieves superior performance over 3DGS on benchmarks such as MipNeRF360, Tanks and Temples, and Deep Blending. Our results highlight the potential of 3D Convex Splatting to become the new standard for high-quality scene reconstruction.
arXiv Detail & Related papers (2024-11-22T14:31:39Z)
- RNG: Relightable Neural Gaussians [19.197099019727826]
We propose a novel 3DGS-based framework that enables the relighting of objects with both hard surfaces and soft boundaries. We also introduce a shadow cue, as well as a depth refinement network, to improve shadow accuracy. Our method achieves significantly faster training (1.3 hours) and rendering (60 frames per second) compared to a prior method.
arXiv Detail & Related papers (2024-09-29T13:32:24Z)
- LumiGauss: Relightable Gaussian Splatting in the Wild [15.11759492990967]
We introduce LumiGauss - a technique that tackles 3D reconstruction of scenes and environmental lighting through 2D Gaussian Splatting. Our approach yields high-quality scene reconstructions and enables realistic lighting synthesis under novel environment maps. We validate our method on the NeRF-OSR dataset, demonstrating superior performance over baseline methods.
arXiv Detail & Related papers (2024-08-06T23:41:57Z)
- 2D Gaussian Splatting for Geometrically Accurate Radiance Fields [50.056790168812114]
3D Gaussian Splatting (3DGS) has recently revolutionized radiance field reconstruction, achieving high-quality novel view synthesis and fast rendering speed without baking. We present 2D Gaussian Splatting (2DGS), a novel approach to model and reconstruct geometrically accurate radiance fields from multi-view images. We demonstrate that our differentiable terms allow for noise-free and detailed geometry reconstruction while maintaining competitive appearance quality, fast training speed, and real-time rendering (a minimal ray-splat intersection sketch appears after this list).
arXiv Detail & Related papers (2024-03-26T17:21:24Z)
- SplaTAM: Splat, Track & Map 3D Gaussians for Dense RGB-D SLAM [48.190398577764284]
SplaTAM is an approach to enable high-fidelity reconstruction from a single unposed RGB-D camera.
It employs a simple online tracking and mapping system tailored to the underlying Gaussian representation.
Experiments show that SplaTAM achieves up to 2x superior performance in camera pose estimation, map construction, and novel-view synthesis over existing methods.
arXiv Detail & Related papers (2023-12-04T18:53:24Z)
- Towards Practical Capture of High-Fidelity Relightable Avatars [60.25823986199208]
TRAvatar is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions.
It can predict the appearance in real-time with a single forward pass, achieving high-quality relighting effects.
Our framework achieves superior performance for photorealistic avatar animation and relighting.
arXiv Detail & Related papers (2023-09-08T10:26:29Z)
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically-motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Learning Indoor Inverse Rendering with 3D Spatially-Varying Lighting [149.1673041605155]
We address the problem of jointly estimating albedo, normals, depth and 3D spatially-varying lighting from a single image.
Most existing methods formulate the task as image-to-image translation, ignoring the 3D properties of the scene.
We propose a unified, learning-based inverse rendering framework that estimates 3D spatially-varying lighting.
arXiv Detail & Related papers (2021-09-13T15:29:03Z)
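As flagged in the 2DGS entry above, the core of that method is an explicit ray-splat intersection: each splat is a planar Gaussian disk, so a ray-plane intersection followed by a change to splat-local coordinates gives a noise-free evaluation point. A minimal sketch follows (my own illustration; 2DGS itself solves the intersection in homogeneous plane coordinates).

```python
import numpy as np

def ray_splat_weight(o, d, center, t_u, t_v, s_u, s_v):
    """Gaussian weight where a ray (o + t*d) hits a planar 2D splat.

    The splat spans unit tangents t_u, t_v at `center`, with standard
    deviations s_u, s_v along those axes. Returns (t_hit, weight), or
    (None, 0.0) when the ray is parallel to the splat's plane.
    """
    normal = np.cross(t_u, t_v)
    denom = normal @ d
    if abs(denom) < 1e-8:
        return None, 0.0
    t_hit = (normal @ (center - o)) / denom  # exact ray-plane intersection
    local = o + t_hit * d - center
    u = (local @ t_u) / s_u                  # splat-local coordinates
    v = (local @ t_v) / s_v
    return t_hit, float(np.exp(-0.5 * (u * u + v * v)))
```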
This list is automatically generated from the titles and abstracts of the papers on this site.