GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures
- URL: http://arxiv.org/abs/2311.16581v2
- Date: Wed, 20 Mar 2024 00:59:44 GMT
- Title: GeoScaler: Geometry and Rendering-Aware Downsampling of 3D Mesh Textures
- Authors: Sai Karthikey Pentapati, Anshul Rai, Arkady Ten, Chaitanya Atluru, Alan Bovik
- Abstract summary: High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes.
GeoScaler is a method of downsampling texture maps of 3D meshes while incorporating geometric cues.
We show that the textures generated by GeoScaler deliver significantly better-quality rendered images than those generated by traditional downsampling methods.
- Score: 0.06990493129893112
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: High-resolution texture maps are necessary for representing real-world objects accurately with 3D meshes. The large sizes of textures can bottleneck the real-time rendering of high-quality virtual 3D scenes on devices having low computational budgets and limited memory. Downsampling the texture maps directly addresses the issue, albeit at the cost of visual fidelity. Traditionally, downsampling of texture maps is performed using methods like bicubic interpolation and the Lanczos algorithm. These methods ignore the geometric layout of the mesh and its UV parametrization, and they also do not account for the rendering process used to obtain the final visualization that the users will experience. Towards filling these gaps, we introduce GeoScaler, a method for downsampling texture maps of 3D meshes that incorporates geometric cues and maximizes the visual fidelity of the rendered views of the textured meshes. We show that the textures generated by GeoScaler deliver significantly better-quality rendered images than those generated by traditional downsampling methods.
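The abstract's core idea lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: it assumes a hypothetical differentiable renderer `render(mesh, texture, camera)` (a role that libraries such as nvdiffrast or PyTorch3D could fill) and refines a bicubic-initialized low-resolution texture so that its rendered views match renders made with the original high-resolution texture. Because the loss is measured on rendered images, the mesh geometry, its UV parametrization, and the rendering process all shape the result, which is what plain bicubic or Lanczos filtering ignores.

```python
# Hedged sketch of rendering-aware texture downsampling (not GeoScaler itself).
# Assumptions: hires_tex is a (1, 3, H, W) tensor in [0, 1]; `render` is a
# hypothetical differentiable renderer; `cameras` is a list of viewpoints.
import torch
import torch.nn.functional as F

def rendering_aware_downsample(mesh, hires_tex, cameras, render,
                               scale=4, steps=500, lr=1e-2):
    # Traditional baseline: plain bicubic resize, ignoring geometry/rendering.
    h, w = hires_tex.shape[-2:]
    lowres = F.interpolate(hires_tex, size=(h // scale, w // scale),
                           mode="bicubic", align_corners=False)
    lowres = torch.nn.Parameter(lowres.clamp(0, 1))
    opt = torch.optim.Adam([lowres], lr=lr)

    # Reference renders made with the original high-resolution texture.
    with torch.no_grad():
        refs = [render(mesh, hires_tex, cam) for cam in cameras]

    for _ in range(steps):
        opt.zero_grad()
        # Image-space loss over rendered views, so gradients flow through
        # the renderer and the mesh's UV mapping into the low-res texels.
        loss = sum(F.mse_loss(render(mesh, lowres, cam), ref)
                   for cam, ref in zip(cameras, refs))
        loss.backward()
        opt.step()
    return lowres.detach().clamp(0, 1)
```

The bicubic result doubles as both the initialization and the baseline for comparison; the optimization only reallocates texel detail toward regions that actually matter in the rendered views.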
Related papers
- Bridging 3D Gaussian and Mesh for Freeview Video Rendering [57.21847030980905]
GauMesh bridges 3D Gaussians and meshes for modeling and rendering dynamic scenes.
We show that our approach adapts the appropriate type of primitives to represent the different parts of the dynamic scene.
arXiv Detail & Related papers (2024-03-18T04:01:26Z)
- Texture-GS: Disentangling the Geometry and Texture for 3D Gaussian Splatting Editing [79.10630153776759]
3D Gaussian splatting, emerging as a groundbreaking approach, has drawn increasing attention for its capabilities of high-fidelity reconstruction and real-time rendering.
We propose a novel approach, namely Texture-GS, to disentangle the appearance from the geometry by representing the appearance as a 2D texture mapped onto the 3D surface.
Our method not only facilitates high-fidelity appearance editing but also achieves real-time rendering on consumer-level devices.
arXiv Detail & Related papers (2024-03-15T06:42:55Z)
- 3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary Mesh Topology [1.4349415652822481]
Learning to generate textures for a novel 3D mesh given a collection of 3D meshes and real-world 2D images is an important problem with applications in various domains such as 3D simulation, augmented and virtual reality, gaming, architecture, and design.
Existing solutions either do not produce high-quality textures or deform the original high-resolution input mesh topology into a regular grid to make this generation easier but also lose the original mesh topology.
We present a novel framework called the 3DTextureTransformer that enables us to generate high-quality textures without deforming the original, high-resolution input mesh.
arXiv Detail & Related papers (2024-03-07T05:01:07Z)
- DragTex: Generative Point-Based Texture Editing on 3D Mesh [11.163205302136625]
We propose a generative point-based 3D mesh texture editing method called DragTex.
This method utilizes a diffusion model to blend locally inconsistent textures in the region near the deformed silhouette between different views.
We train LoRA using multi-view images instead of training each view individually, which significantly shortens the training time.
arXiv Detail & Related papers (2024-03-04T17:05:01Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z)
- AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis [78.17671694498185]
We propose AUV-Net which learns to embed 3D surfaces into a 2D aligned UV space.
As a result, textures are aligned across objects, and can thus be easily synthesized by generative models of images.
The learned UV mapping and aligned texture representations enable a variety of applications including texture transfer, texture synthesis, and textured single view 3D reconstruction.
arXiv Detail & Related papers (2022-04-06T21:39:24Z)
- Projective Urban Texturing [8.349665441428925]
We propose a method for automatic generation of textures for 3D city meshes in immersive urban environments.
Projective Urban Texturing (PUT) re-targets textural style from real-world panoramic images to unseen urban meshes.
PUT relies on contrastive and adversarial training of a neural architecture designed for unpaired image-to-texture translation.
arXiv Detail & Related papers (2022-01-25T14:56:52Z)
- NeuTex: Neural Texture Mapping for Volumetric Neural Rendering [48.83181790635772]
We present an approach that explicitly disentangles geometry, represented as a continuous 3D volume, from appearance, represented as a continuous 2D texture map.
We demonstrate that this representation can be reconstructed using only multi-view image supervision and generates high-quality rendering results.
arXiv Detail & Related papers (2021-03-01T05:34:51Z)
- Real Time Incremental Foveal Texture Mapping for Autonomous Vehicles [11.702817783491616]
The generated detailed map serves both as a virtual test bed and as a background map for various vision and planning algorithms.
arXiv Detail & Related papers (2021-01-16T07:41:24Z)