TeSO: Representing and Compressing 3D Point Cloud Scenes with Textured Surfel Octree
- URL: http://arxiv.org/abs/2508.07083v1
- Date: Sat, 09 Aug 2025 19:37:43 GMT
- Title: TeSO: Representing and Compressing 3D Point Cloud Scenes with Textured Surfel Octree
- Authors: Yueyu Hu, Ran Gong, Tingyu Fan, Yao Wang
- Abstract summary: 3D visual content streaming is a key technology for emerging 3D telepresence and AR/VR applications. Existing 3D representations like point clouds, meshes and 3D Gaussians each have limitations in terms of rendering quality, surface definition, and compressibility. We present the Textured Surfel Octree (TeSO), a novel 3D representation that is built from point clouds but addresses the aforementioned limitations.
- Score: 10.435212618849544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D visual content streaming is a key technology for emerging 3D telepresence and AR/VR applications. One fundamental element underlying the technology is a versatile 3D representation that is capable of producing high-quality renders and can be efficiently compressed at the same time. Existing 3D representations like point clouds, meshes and 3D Gaussians each have limitations in terms of rendering quality, surface definition, and compressibility. In this paper, we present the Textured Surfel Octree (TeSO), a novel 3D representation that is built from point clouds but addresses the aforementioned limitations. It represents a 3D scene as cube-bounded surfels organized on an octree, where each surfel is further associated with a texture patch. By approximating a smooth surface with a large surfel at a coarser level of the octree, it reduces the number of primitives required to represent the 3D scene, and yet retains the high-frequency texture details through the texture map attached to each surfel. We further propose a compression scheme to encode the geometry and texture efficiently, leveraging the octree structure. The proposed textured surfel octree combined with the compression scheme achieves higher rendering quality at lower bit-rates compared to multiple point cloud and 3D Gaussian-based baselines.
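The abstract describes two technical components: an octree of adaptively sized surfels, and a compression scheme that leverages the octree structure. No pseudocode appears in this summary, so the sketch below is purely illustrative rather than the authors' implementation: the PCA plane fit, the relative planarity tolerance `rel_tol`, and all names (`SurfelNode`, `fit_plane`, `build`, `occupancy_stream`) are assumptions of mine. It shows how one might keep a single large surfel wherever a plane fits the local points well, subdivide elsewhere, and serialize the resulting tree as per-node child-occupancy bytes, the standard starting point for octree geometry coding.

```python
from collections import deque
import numpy as np

class SurfelNode:
    """One octree cell: an internal node with 8 child slots, or a leaf
    holding a plane-fit surfel (a texture patch would hang off each leaf)."""
    def __init__(self, origin, size):
        self.origin = np.asarray(origin, dtype=float)  # min corner of the cube
        self.size = float(size)                        # cube edge length
        self.children = None   # list of 8 entries (node or None); None for a leaf
        self.center = None     # surfel center (point centroid)
        self.normal = None     # surfel normal from the plane fit

def fit_plane(points):
    """Least-squares plane via PCA: centroid, normal, RMS orthogonal distance."""
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, vt[-1], np.sqrt(s[-1] ** 2 / len(points))

def build(points, origin, size, rel_tol=0.01, min_points=8, max_depth=10, depth=0):
    """Keep one large surfel where the local surface is nearly planar;
    subdivide the cube into 8 children where the plane fit is poor."""
    node = SurfelNode(origin, size)
    centroid, normal, rms = fit_plane(points)
    if rms <= rel_tol * size or len(points) <= min_points or depth == max_depth:
        node.center, node.normal = centroid, normal   # coarse surfel suffices
        return node
    node.children, half = [], size / 2.0
    for i in range(8):
        corner = node.origin + half * np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1])
        # Half-open cells: points on the max face of the root cube are dropped
        # in this simplified partition.
        mask = np.all((points >= corner) & (points < corner + half), axis=1)
        node.children.append(
            build(points[mask], corner, half, rel_tol, min_points, max_depth, depth + 1)
            if mask.any() else None)
    return node

def occupancy_stream(root):
    """Breadth-first serialization: one byte per internal node marks which of
    its 8 children exist -- the classic layout fed to an entropy coder."""
    out, queue = bytearray(), deque([root])
    while queue:
        node = queue.popleft()
        if node.children is None:
            continue
        byte = 0
        for i, child in enumerate(node.children):
            if child is not None:
                byte |= 1 << i
                queue.append(child)
        out.append(byte)
    return bytes(out)

if __name__ == "__main__":
    # Toy demo: points on a flat patch collapse to a single coarse surfel.
    rng = np.random.default_rng(0)
    pts = np.column_stack([rng.random((5000, 2)), np.full(5000, 0.5)])
    tree = build(pts, origin=(0.0, 0.0, 0.0), size=1.0)
    print(len(occupancy_stream(tree)), "occupancy bytes")  # 0: the root is a leaf
```

In the paper's actual scheme each surfel additionally carries a texture patch, and both the octree geometry and the per-surfel textures are compressed; this sketch covers only the geometry side and omits rendering entirely.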
Related papers
- TexSpot: 3D Texture Enhancement with Spatially-uniform Point Latent Representation [47.87566902467006]
We introduce TexSpot, a diffusion-based texture enhancement framework.
At its core is Texlet, a novel 3D texture representation.
A cascaded 3D-to-2D decoder reconstructs high-quality texture patches.
arXiv Detail & Related papers (2026-02-12T16:37:31Z)
- SuperCarver: Texture-Consistent 3D Geometry Super-Resolution for High-Fidelity Surface Detail Generation [70.76810765911499]
We introduce SuperCarver, a 3D geometry super-resolution pipeline for supplementing texture-consistent surface details onto a given coarse mesh.
Experiments demonstrate that SuperCarver is capable of generating realistic and expressive surface details implied by the actual texture appearance.
arXiv Detail & Related papers (2025-03-12T14:38:45Z)
- GaussianAnything: Interactive Point Cloud Flow Matching For 3D Object Generation [75.39457097832113]
This paper introduces a novel 3D generation framework, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space.
Our framework employs a Variational Autoencoder with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information.
The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single image inputs.
arXiv Detail & Related papers (2024-11-12T18:59:32Z)
- DreamMesh: Jointly Manipulating and Texturing Triangle Meshes for Text-to-3D Generation [149.77077125310805]
We present DreamMesh, a novel text-to-3D architecture that pivots on well-defined surfaces (triangle meshes) to generate high-fidelity explicit 3D models.
In the coarse stage, the mesh is first deformed by text-guided Jacobians and then DreamMesh textures the mesh with an interlaced use of 2D diffusion models.
In the fine stage, DreamMesh jointly manipulates the mesh and refines the texture map, leading to high-quality triangle meshes with high-fidelity textured materials.
arXiv Detail & Related papers (2024-09-11T17:59:02Z)
- Meta 3D Gen [57.313835190702484]
3DGen offers 3D asset creation with high prompt fidelity and high-quality 3D shapes and textures in under a minute.
3DGen supports physically-based rendering (PBR), necessary for 3D asset relighting in real-world applications.
arXiv Detail & Related papers (2024-07-02T18:37:52Z)
- GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures [0.06990493129893112]
High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes.
GeoScaler is a method of downsampling texture maps of 3D meshes while incorporating geometric cues.
We show that the textures generated by GeoScaler yield rendered images of significantly higher quality than those generated by traditional downsampling methods.
arXiv Detail & Related papers (2023-11-28T07:55:25Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- Implicit Feature Networks for Texture Completion from Partial 3D Data [56.93289686162015]
We generalize IF-Nets to texture completion from partial textured scans of humans and arbitrary objects.
Our model successfully in-paints the missing texture parts, consistent with the completed geometry.
arXiv Detail & Related papers (2020-09-20T15:48:17Z)
- On Demand Solid Texture Synthesis Using Deep 3D Networks [3.1542695050861544]
This paper describes a novel approach for on-demand texture synthesis based on a deep learning framework.
A generative network is trained to synthesize coherent portions of solid textures of arbitrary sizes.
The synthesized volumes show visual quality at least on par with state-of-the-art patch-based approaches.
arXiv Detail & Related papers (2020-01-13T20:59:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.