Mesh-Learner: Texturing Mesh with Spherical Harmonics
- URL: http://arxiv.org/abs/2504.19938v3
- Date: Mon, 25 Aug 2025 03:15:30 GMT
- Title: Mesh-Learner: Texturing Mesh with Spherical Harmonics
- Authors: Yunfei Wan, Jianheng Liu, Chunran Zheng, Jiarong Lin, Fu Zhang
- Abstract summary: Mesh-Learner is a 3D reconstruction and rendering framework that is compatible with traditional rasterization pipelines. It integrates mesh and spherical harmonic (SH) textures into the learning process to learn each mesh's view-dependent radiance end-to-end.
- Score: 12.211254660151637
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a 3D reconstruction and rendering framework termed Mesh-Learner that is natively compatible with traditional rasterization pipelines. It integrates mesh and spherical harmonic (SH) texture (i.e., texture filled with SH coefficients) into the learning process to learn each mesh's view-dependent radiance end-to-end. Images are rendered by interpolating surrounding SH Texels at each pixel's sampling point using a novel interpolation method. Conversely, gradients from each pixel are back-propagated to the related SH Texels in SH textures. Mesh-Learner exploits graphics features of the rasterization pipeline (texture sampling, deferred rendering) for rendering, which makes it naturally compatible with tools (e.g., Blender) and tasks (e.g., 3D reconstruction, scene rendering, reinforcement learning for robotics) that are based on rasterization pipelines. Our system can train vast, unlimited scenes because we transfer only the SH textures within the frustum to the GPU for training; at other times, the SH textures are stored in CPU RAM, resulting in moderate GPU memory usage. The rendering results on interpolation and extrapolation sequences in the Replica and FAST-LIVO2 datasets achieve state-of-the-art performance compared to existing methods (e.g., 3D Gaussian Splatting and M2-Mapping). To benefit the community, the code will be available at https://github.com/hku-mars/Mesh-Learner.
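The core rendering step described in the abstract, interpolating SH texels at a pixel's sampling point and evaluating the SH basis in the view direction, can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the SH order, the (H, W, 4, 3) texture layout, and all function names are hypothetical, and the paper's actual interpolation method is novel and runs inside the rasterization pipeline rather than in NumPy.

```python
# Hypothetical sketch (not the paper's code): bilinearly interpolate
# per-texel spherical-harmonic (SH) coefficients, then evaluate the
# SH basis in the view direction to obtain a view-dependent RGB color.
import numpy as np

def sh_basis_l1(d):
    """Real SH basis up to degree 1 (4 terms) for a unit direction d."""
    x, y, z = d
    return np.array([
        0.28209479177387814,          # l=0 (constant/DC term)
        -0.4886025119029199 * y,      # l=1, m=-1
        0.4886025119029199 * z,       # l=1, m=0
        -0.4886025119029199 * x,      # l=1, m=1
    ])

def sample_sh_texture(tex, u, v):
    """Bilinearly interpolate a (H, W, 4, 3) SH texture at uv in [0, 1]."""
    h, w = tex.shape[:2]
    fx, fy = u * (w - 1), v * (h - 1)
    x0, y0 = int(fx), int(fy)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    tx, ty = fx - x0, fy - y0
    top = (1 - tx) * tex[y0, x0] + tx * tex[y0, x1]
    bot = (1 - tx) * tex[y1, x0] + tx * tex[y1, x1]
    return (1 - ty) * top + ty * bot          # (4, 3) SH coefficients

def shade(tex, u, v, view_dir):
    """View-dependent color at (u, v) seen from unit view_dir, in [0, 1]."""
    coeffs = sample_sh_texture(tex, u, v)     # (4, 3)
    basis = sh_basis_l1(view_dir)             # (4,)
    return np.clip(basis @ coeffs, 0.0, 1.0)  # (3,) RGB
```

Because every step is a weighted sum of SH coefficients, the same weights give the gradient path back to the four surrounding texels, which is the "conversely" direction the abstract describes.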
Related papers
- MeshSplatting: Differentiable Rendering with Opaque Meshes [59.240722437975755]
We present MeshSplatting, a mesh-based reconstruction approach that jointly optimizes geometry and appearance through differentiable rendering. On Mip-NeRF360, it boosts PSNR by +0.69 dB over the current state-of-the-art MiLo for mesh-based novel view synthesis.
arXiv Detail & Related papers (2025-12-07T12:31:04Z) - TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features [78.13246375582906]
We present a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface target colors. Our approach achieves superior texture quality across 3D models in applications like game development.
arXiv Detail & Related papers (2025-03-20T18:35:03Z) - TexGaussian: Generating High-quality PBR Material via Octree-based 3D Gaussian Splatting [48.97819552366636]
This paper presents TexGaussian, a novel method that uses octant-aligned 3D Gaussian Splatting for rapid PBR material generation.
Our method synthesizes more visually pleasing PBR materials and runs faster than previous methods in both unconditional and text-conditional scenarios.
arXiv Detail & Related papers (2024-11-29T12:19:39Z) - Hybrid Explicit Representation for Ultra-Realistic Head Avatars [55.829497543262214]
We introduce a novel approach to creating ultra-realistic head avatars and rendering them in real time. A UV-mapped 3D mesh is utilized to capture sharp and rich textures on smooth surfaces, while 3D Gaussian Splatting is employed to represent complex geometric structures. Experiments show that our results exceed those of state-of-the-art approaches.
arXiv Detail & Related papers (2024-03-18T04:01:26Z) - Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, emerging as a groundbreaking approach, has drawn increasing attention for its capabilities of high-fidelity reconstruction and real-time rendering.
We propose a novel approach, namely Texture-GS, to disentangle the appearance from the geometry by representing the appearance as a 2D texture mapped onto the 3D surface.
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z) - GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures [0.06990493129893112]
High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes.
GeoScaler is a method of downsampling texture maps of 3D meshes while incorporating geometric cues.
We show that the textures generated by GeoScaler deliver significantly better quality rendered images compared to those generated by traditional downsampling methods.
arXiv Detail & Related papers (2023-11-28T07:55:25Z) - Mesh Neural Cellular Automata [62.101063045659906]
We propose Mesh Neural Cellular Automata (MeshNCA), a method that directly synthesizes dynamic textures on 3D meshes without requiring any UV maps.
Only trained on an Icosphere mesh, MeshNCA shows remarkable test-time generalization and can synthesize textures on unseen meshes in real time.
arXiv Detail & Related papers (2023-11-06T01:54:37Z) - TexFusion: Synthesizing 3D Textures with Text-Guided Image Diffusion Models [77.85129451435704]
We present a new method to synthesize textures for 3D assets using large-scale text-guided image diffusion models. Specifically, we leverage latent diffusion models, applying denoising over a set of views and aggregating the denoised outputs into a texture map.
arXiv Detail & Related papers (2023-10-20T19:15:29Z) - AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z) - On Demand Solid Texture Synthesis Using Deep 3D Networks [3.1542695050861544]
This paper describes a novel approach for on demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes achieve visual quality at least on par with state-of-the-art patch-based approaches.
arXiv Detail & Related papers (2020-01-13T20:59:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.