Content-Aware Texturing for Gaussian Splatting
- URL: http://arxiv.org/abs/2512.02621v1
- Date: Tue, 02 Dec 2025 10:29:10 GMT
- Title: Content-Aware Texturing for Gaussian Splatting
- Authors: Panagiotis Papantonakis, Georgios Kopanas, Fredo Durand, George Drettakis
- Abstract summary: We propose to use texture to represent detailed appearance where possible. Our main focus is to incorporate per-primitive texture maps that adapt to the scene during Gaussian Splatting optimization. We show that our approach performs favorably in image quality and total number of parameters used compared to alternative solutions.
- Score: 4.861240703958262
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian Splatting has become the method of choice for 3D reconstruction and real-time rendering of captured real scenes. However, fine appearance details need to be represented as a large number of small Gaussian primitives, which can be wasteful when geometry and appearance exhibit different frequency characteristics. Inspired by the long tradition of texture mapping, we propose to use texture to represent detailed appearance where possible. Our main focus is to incorporate per-primitive texture maps that adapt to the scene in a principled manner during Gaussian Splatting optimization. We do this by proposing a new appearance representation for 2D Gaussian primitives with textures where the size of a texel is bounded by the image sampling frequency and adapted to the content of the input images. We achieve this by adaptively upscaling or downscaling the texture resolution during optimization. In addition, our approach enables control of the number of primitives during optimization based on texture resolution. We show that our approach performs favorably in image quality and total number of parameters used compared to alternative solutions for textured Gaussian primitives. Project page: https://repo-sam.inria.fr/nerphys/gs-texturing/
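The abstract's core mechanism is a texel budget tied to the image sampling frequency, plus adaptive rescaling of each primitive's texture during optimization. The sketch below is a minimal illustration of that idea only; the function names, thresholds, and the gradient-based trigger are assumptions, not the authors' implementation.
```python
import math

def texel_budget(footprint_px, texels_per_pixel=1.0):
    # Texel size is bounded by the image sampling frequency: storing more
    # texels than the pixels the primitive covers in its most detailed
    # training view cannot add visible detail.
    return max(1, math.ceil(footprint_px * texels_per_pixel))

def adapt_resolution(res, tex_grad_norm, footprint_px,
                     up_thresh=1e-3, down_thresh=1e-5):
    # Hypothetical content-aware rule: large texture gradients suggest
    # under-resolved content (upscale, capped by the sampling bound);
    # negligible gradients suggest wasted parameters (downscale).
    bound = texel_budget(footprint_px)
    if tex_grad_norm > up_thresh and res * 2 <= bound:
        return res * 2      # e.g. bilinearly upsample the texel grid
    if tex_grad_norm < down_thresh and res > 1:
        return res // 2     # e.g. average-pool the texel grid
    return res
```
The same resolution signal could also drive primitive count: a primitive whose texture wants to exceed its budget is a natural candidate for splitting, which is the kind of coupling between texture resolution and number of primitives the abstract describes.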
Related papers
- A$^2$TG: Adaptive Anisotropic Textured Gaussians for Efficient 3D Scene Representation [7.103085444694659]
Existing approaches allocate a fixed square texture per primitive, leading to inefficient memory usage and limited adaptability to scene variability. We introduce adaptive anisotropic textured Gaussians (A$^2$TG), a novel representation that generalizes textured Gaussians by equipping each primitive with an anisotropic texture. Our method employs a gradient-guided adaptive rule to jointly determine texture resolution and aspect ratio, enabling non-uniform, detail-aware allocation.
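One plausible reading of this gradient-guided rule is sketched below: a per-axis detail signal decides whether to add texels along height, width, or both, so resolution and aspect ratio are chosen jointly. The detail heuristic, threshold, and nearest-neighbour upsampling are assumptions, not the paper's method.
```python
import numpy as np

def adapt_anisotropic_texture(tex, grad, grow_thresh=1e-3):
    # tex: (H, W, 3) per-primitive texture; grad: its accumulated gradient.
    # Grow only the axis whose content appears under-resolved, changing
    # aspect ratio as well as resolution (non-uniform allocation).
    h, w, _ = tex.shape
    row_detail = np.abs(np.diff(grad, axis=0)).mean() if h > 1 else np.abs(grad).mean()
    col_detail = np.abs(np.diff(grad, axis=1)).mean() if w > 1 else np.abs(grad).mean()
    new_h = 2 * h if row_detail > grow_thresh else h
    new_w = 2 * w if col_detail > grow_thresh else w
    if (new_h, new_w) == (h, w):
        return tex
    # Nearest-neighbour upsampling preserves appearance while adding capacity.
    rows = np.repeat(np.arange(h), new_h // h)
    cols = np.repeat(np.arange(w), new_w // w)
    return tex[np.ix_(rows, cols)]
```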
arXiv Detail & Related papers (2026-01-14T07:26:55Z)
- Using Gaussian Splats to Create High-Fidelity Facial Geometry and Texture [2.7431069096660736]
We leverage increasingly popular three-dimensional neural representations to construct a unified explanation of a collection of uncalibrated images of the human face. We leverage segmentation to facilitate the reconstruction of a neutral pose from only 11 images. We show how accurate geometry enables the Gaussian Splats to be transformed into texture space, where they can be treated as a view-dependent neural texture.
arXiv Detail & Related papers (2025-12-18T10:53:51Z)
- Neural Shell Texture Splatting: More Details and Fewer Primitives [37.33701393691611]
We introduce a neural shell texture, a global representation that encodes texture information around the surface. Our evaluation demonstrates that this disentanglement enables high parameter efficiency, fine texture detail reconstruction, and easy textured mesh extraction.
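As a toy illustration of the idea of one global texture field queried near the surface (rather than per-primitive texels), the snippet below uses a tiny random-weight MLP as a stand-in. The weights are placeholders, not a trained model, and the real method's architecture is not specified here.
```python
import numpy as np

# One *global* MLP maps a 3D point near the surface to RGB, so texture
# detail lives in the field rather than in per-primitive parameters.
rng = np.random.default_rng(0)
W1, b1 = 0.5 * rng.standard_normal((64, 3)), np.zeros(64)
W2, b2 = 0.5 * rng.standard_normal((3, 64)), np.zeros(3)

def shell_texture(x):
    h = np.maximum(W1 @ x + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # sigmoid -> RGB in [0, 1]

print(shell_texture(np.array([0.1, -0.2, 0.05])))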
arXiv Detail & Related papers (2025-07-27T09:39:10Z)
- Textured Gaussians for Enhanced 3D Scene Appearance Modeling [58.134905268540436]
3D Gaussian Splatting (3DGS) has emerged as a state-of-the-art 3D reconstruction and rendering technique. We propose a new generalized Gaussian appearance representation that augments each Gaussian with alpha (A), RGB, or RGBA texture maps. We demonstrate image quality improvements over existing methods while using a similar or lower number of Gaussians.
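A minimal sketch of such a generalized representation, assuming a simple multiplicative modulation scheme and nearest-neighbour texel lookup; the field names and `shade` function are illustrative, not the paper's API.
```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TexturedGaussian:
    # Standard Gaussian geometry plus an optional texture whose channels
    # modulate alpha (A), color (RGB), or both (RGBA).
    mean: np.ndarray            # (3,) center
    cov: np.ndarray             # (3, 3) covariance
    base_rgb: np.ndarray        # (3,) per-primitive base color
    base_alpha: float
    texture: np.ndarray | None = None   # (H, W, C), C in {1, 3, 4}

def shade(g, u, v):
    # Assumed scheme: multiply base appearance by the texel at texture
    # coordinates (u, v) in [0, 1]^2.
    rgb, alpha = g.base_rgb, g.base_alpha
    if g.texture is not None:
        h, w, c = g.texture.shape
        texel = g.texture[min(int(v * h), h - 1), min(int(u * w), w - 1)]
        if c >= 3:
            rgb = rgb * texel[:3]       # RGB or RGBA channels
        if c in (1, 4):
            alpha = alpha * float(texel[-1])  # A channel
    return rgb, alpha
```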
arXiv Detail & Related papers (2024-11-27T18:59:59Z)
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving a faster rendering speed.
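The lifting step can be illustrated with a minimal unprojection sketch: every pixel of the regressed 2D parameter maps becomes one 3D Gaussian whose center is the pixel back-projected through the estimated depth. The function name, dict layout, and pinhole camera model are assumptions; the paper's regression and depth modules are learned networks not shown here.
```python
import numpy as np

def lift_parameter_maps(depth, K, param_maps):
    # depth: (H, W) estimated depth; K: (3, 3) intrinsics;
    # param_maps: dict of (H, W, ...) regressed Gaussian parameter maps.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))         # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], -1).astype(np.float64)
    rays = pix @ np.linalg.inv(K).T                        # K^-1 [u, v, 1]^T
    centers = rays * depth[..., None]                      # scale by depth
    lifted = {"means": centers.reshape(-1, 3)}
    for name, m in param_maps.items():
        lifted[name] = m.reshape(h * w, -1)                # one row per pixel
    return lifted
```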
arXiv Detail & Related papers (2024-11-18T08:18:44Z)
- SCube: Instant Large-Scale Scene Reconstruction using VoxSplats [55.383993296042526]
We present SCube, a novel method for reconstructing large-scale 3D scenes (geometry, appearance, and semantics) from a sparse set of posed images.
Our method encodes reconstructed scenes using VoxSplat, a novel representation consisting of a set of 3D Gaussians supported on a high-resolution sparse-voxel scaffold.
arXiv Detail & Related papers (2024-10-26T00:52:46Z)
- GStex: Per-Primitive Texturing of 2D Gaussian Splatting for Decoupled Appearance and Geometry Modeling [11.91812502521729]
Gaussian splatting has demonstrated excellent performance for view synthesis and scene reconstruction. Since each Gaussian primitive encodes both appearance and geometry, appearance modeling requires a large number of Gaussian primitives. We propose to employ a per-primitive texture representation so that even a single Gaussian can be used to capture appearance details.
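One way to see how a single textured 2D Gaussian can carry detail is the texture lookup: a surface point is expressed in the primitive's tangent frame and mapped to per-primitive UV coordinates. The 3-sigma extent and this parameterization are illustrative assumptions, not GStex's exact mapping.
```python
import numpy as np

def uv_on_primitive(point, center, tangent_u, tangent_v, scale_u, scale_v):
    # Express a surface point in the flat primitive's tangent frame and map
    # it to per-primitive UV coordinates, so one large Gaussian can carry
    # fine texel detail across its whole extent.
    d = point - center
    u = np.dot(d, tangent_u) / (3.0 * scale_u)   # ~[-1, 1] across the splat
    v = np.dot(d, tangent_v) / (3.0 * scale_v)
    to01 = lambda t: float(np.clip(0.5 * (t + 1.0), 0.0, 1.0))
    return to01(u), to01(v)
```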
arXiv Detail & Related papers (2024-09-19T17:58:44Z)
- Reference-based Controllable Scene Stylization with Gaussian Splatting [30.321151430263946]
Reference-based scene stylization that edits the appearance based on a content-aligned reference image is an emerging research area.
We propose ReGS, which adapts 3D Gaussian Splatting (3DGS) for reference-based stylization to enable real-time stylized view synthesis.
arXiv Detail & Related papers (2024-07-09T20:30:29Z)
- Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real-time. A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures. Experiments demonstrate that our modeled results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z)
- Paint-it: Text-to-Texture Synthesis via Deep Convolutional Texture Map Optimization and Physically-Based Rendering [47.78392889256976]
Paint-it is a text-driven high-fidelity texture map synthesis method for 3D rendering.
Paint-it synthesizes texture maps from a text description by synthesis-through-optimization, exploiting Score-Distillation Sampling (SDS).
We show that DC-PBR inherently schedules the optimization curriculum according to texture frequency and naturally filters out the noisy signals from SDS.
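For context, Score-Distillation Sampling (introduced by DreamFusion) optimizes the parameters $\theta$ of a differentiable image generator $x = g(\theta)$, here the DC-PBR texture maps, by following the score of a pretrained denoiser $\hat{\epsilon}_\phi$; its standard gradient is:
$$\nabla_\theta \mathcal{L}_{\text{SDS}}(\theta) = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y,\, t) - \epsilon\big)\,\frac{\partial x}{\partial \theta} \,\right]$$
where $x_t$ is the noised rendering, $y$ the text prompt, $\epsilon$ the injected noise, and $w(t)$ a timestep weighting.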
arXiv Detail & Related papers (2023-12-18T17:17:08Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- Controllable Person Image Synthesis with Spatially-Adaptive Warped Normalization [72.65828901909708]
Controllable person image generation aims to produce realistic human images with desirable attributes.
We introduce a novel Spatially-Adaptive Warped Normalization (SAWN), which integrates a learned flow-field to warp modulation parameters.
We propose a novel self-training part replacement strategy to refine the pretrained model for the texture-transfer task.
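A rough sketch of the warping idea: the per-pixel modulation parameters are resampled through a learned flow field before the normalization is applied. Nearest-neighbour sampling and the function names are simplifying assumptions; the paper's warp is learned and differentiable.
```python
import numpy as np

def warp_params(gamma, beta, flow):
    # gamma, beta: (H, W, C) modulation maps; flow: (H, W, 2) displacement
    # field. Resample the modulation parameters along the flow so they
    # align with the target pose before being applied.
    h, w, _ = gamma.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    yy = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    xx = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    return gamma[yy, xx], beta[yy, xx]

def modulated_norm(x, gamma_w, beta_w, eps=1e-5):
    # Normalize features channel-wise, then apply the warped modulation.
    mu, sigma = x.mean(axis=(0, 1)), x.std(axis=(0, 1))
    return gamma_w * (x - mu) / (sigma + eps) + beta_w
```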
arXiv Detail & Related papers (2021-05-31T07:07:44Z)