3D-TexSeg: Unsupervised Segmentation of 3D Texture using Mutual
Transformer Learning
- URL: http://arxiv.org/abs/2311.10651v1
- Date: Fri, 17 Nov 2023 17:13:14 GMT
- Title: 3D-TexSeg: Unsupervised Segmentation of 3D Texture using Mutual
Transformer Learning
- Authors: Iyyakutti Iyappan Ganapathi, Fayaz Ali, Sajid Javed, Syed Sadaf Ali,
Naoufel Werghi
- Abstract summary: This paper presents an original framework for the unsupervised segmentation of the 3D texture on the mesh manifold.
We devise a mutual transformer-based system comprising a label generator and a cleaner.
Experiments on three publicly available datasets with diverse texture patterns demonstrate that the proposed framework outperforms standard and SOTA unsupervised techniques.
- Score: 11.510823733292519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analysis of 3D texture is indispensable for various tasks, such as
retrieval, segmentation, classification, and inspection of sculptures, knitted
fabrics, and biological tissues. A 3D texture is a locally repeated surface
variation independent of the surface's overall shape and can be determined
using the local neighborhood and its characteristics. Existing techniques
typically employ computer vision techniques that analyze a 3D mesh globally,
derive features, and then utilize the obtained features for retrieval or
classification. Several traditional and learning-based methods exist in the
literature; however, only a few address 3D texture, and, to the best of our
knowledge, none address unsupervised schemes. This paper presents an original
framework for the unsupervised segmentation of the 3D texture on the mesh
manifold. We approach this problem as binary surface segmentation, partitioning
the mesh surface into textured and non-textured regions without prior
annotation. We devise a mutual transformer-based system comprising a label
generator and a cleaner. The two models take geometric image representations of
the surface mesh facets and label them as texture or non-texture across an
iterative mutual learning scheme. Extensive experiments on three publicly
available datasets with diverse texture patterns demonstrate that the proposed
framework outperforms standard and SOTA unsupervised techniques and competes
reasonably with supervised methods.
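
The abstract describes the core mechanism as two networks that exchange pseudo-labels over geometric-image representations of mesh facets. Below is a minimal PyTorch sketch of such an iterative mutual-learning loop between a label generator and a cleaner; the network sizes, the patch encoding, and the confidence threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of mutual learning between a label "generator" and a "cleaner"
# over per-facet geometric-image patches (illustrative only; not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FacetClassifier(nn.Module):
    """Toy transformer that labels facet patches as texture (1) or non-texture (0)."""
    def __init__(self, patch_dim=64, d_model=128, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(patch_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 2)

    def forward(self, x):                                    # x: (batch, n_facets, patch_dim)
        return self.head(self.encoder(self.embed(x)))        # per-facet logits

generator, cleaner = FacetClassifier(), FacetClassifier()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(cleaner.parameters(), lr=1e-4)

def mutual_step(patches, threshold=0.9):
    """One mutual-learning iteration: each model is trained on the other's
    confident pseudo-labels (the threshold is an assumed hyper-parameter)."""
    with torch.no_grad():
        conf_g, lab_g = F.softmax(generator(patches), dim=-1).max(dim=-1)
        conf_c, lab_c = F.softmax(cleaner(patches), dim=-1).max(dim=-1)

    mask_g = conf_g > threshold                # facets the generator is sure about...
    if mask_g.any():                           # ...supervise the cleaner on them
        loss_c = F.cross_entropy(cleaner(patches)[mask_g], lab_g[mask_g])
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()

    mask_c = conf_c > threshold                # facets the cleaner is sure about...
    if mask_c.any():                           # ...supervise the generator on them
        loss_g = F.cross_entropy(generator(patches)[mask_c], lab_c[mask_c])
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Example: one mesh with 256 facets, each encoded as a 64-d geometric-image patch.
mutual_step(torch.randn(1, 256, 64))
```

A real pipeline would also need some initial labelling signal to bootstrap the loop; the sketch only shows the exchange step itself.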
Related papers
- 3DTextureTransformer: Geometry Aware Texture Generation for Arbitrary
Mesh Topology [1.4349415652822481]
Learning to generate textures for a novel 3D mesh given a collection of 3D meshes and real-world 2D images is an important problem with applications in various domains such as 3D simulation, augmented and virtual reality, gaming, architecture, and design.
Existing solutions either do not produce high-quality textures or deform the original high-resolution input mesh topology into a regular grid to make this generation easier but also lose the original mesh topology.
We present a novel framework called the 3DTextureTransformer that enables us to generate high-quality textures without deforming the original, high-resolution input mesh.
arXiv Detail & Related papers (2024-03-07T05:01:07Z)
- TEXTure: Text-Guided Texturing of 3D Shapes [71.13116133846084]
We present TEXTure, a novel method for text-guided generation, editing, and transfer of textures for 3D shapes.
We define a trimap partitioning process that generates seamless 3D textures without requiring explicit surface textures.
arXiv Detail & Related papers (2023-02-03T13:18:45Z)
- MvDeCor: Multi-view Dense Correspondence Learning for Fine-grained 3D Segmentation [91.6658845016214]
We propose to utilize self-supervised techniques in the 2D domain for fine-grained 3D shape segmentation tasks.
We render a 3D shape from multiple views, and set up a dense correspondence learning task within the contrastive learning framework.
As a result, the learned 2D representations are view-invariant and geometrically consistent.
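
A dense correspondence objective of the kind MvDeCor describes can be sketched as an InfoNCE-style loss over per-pixel features from two renderings of the same shape; the feature dimension, the temperature, and the assumption that correspondences arrive row-aligned are illustrative choices, not the paper's settings.

```python
# Toy contrastive loss over corresponding pixel features from two rendered views
# (a sketch of multi-view dense correspondence learning, not MvDeCor's code).
import torch
import torch.nn.functional as F

def dense_correspondence_loss(feat_a, feat_b, temperature=0.07):
    # feat_a, feat_b: (n_pixels, dim); row i of both views shows the same surface point.
    feat_a = F.normalize(feat_a, dim=-1)
    feat_b = F.normalize(feat_b, dim=-1)
    logits = feat_a @ feat_b.t() / temperature   # similarity of every pixel pair
    targets = torch.arange(feat_a.size(0))       # the true correspondence is the positive
    return F.cross_entropy(logits, targets)

# Example with 512 corresponding pixels and 64-d per-pixel embeddings.
loss = dense_correspondence_loss(torch.randn(512, 64), torch.randn(512, 64))
```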
arXiv Detail & Related papers (2022-08-18T00:48:15Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction, to study the model generalization on unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Monocular 3D Object Reconstruction with GAN Inversion [122.96094885939146]
MeshInversion is a novel framework to improve the reconstruction of textured 3D meshes.
It exploits the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis.
Our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts.
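
The generative-prior idea can be sketched as standard GAN inversion: freeze a pretrained generator and optimize its latent code until the rendered output matches the observed photo. The `generator_3d` and `render` callables below are hypothetical stand-ins for a pretrained textured-mesh GAN and a differentiable renderer, and the loss and optimizer settings are assumptions.

```python
# Sketch of GAN inversion for reconstruction (stand-in components, not MeshInversion's code).
import torch
import torch.nn.functional as F

def invert(generator_3d, render, observed_image, latent_dim=256, steps=200, lr=0.05):
    z = torch.zeros(1, latent_dim, requires_grad=True)      # latent code to recover
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        mesh, texture = generator_3d(z)                      # frozen generative prior
        rendered = render(mesh, texture)                     # same viewpoint as the photo
        loss = F.l1_loss(rendered, observed_image)           # photometric reconstruction loss
        opt.zero_grad(); loss.backward(); opt.step()
    return z.detach()

# Toy usage with dummy stand-ins for the pretrained GAN and the renderer.
toy_gen = lambda z: (z, z)                                   # pretend (mesh, texture)
toy_render = lambda mesh, texture: mesh + texture
z_hat = invert(toy_gen, toy_render, observed_image=torch.randn(1, 256))
```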
arXiv Detail & Related papers (2022-07-20T17:47:22Z)
- Texturify: Generating Textures on 3D Shape Surfaces [34.726179801982646]
We propose Texturify, which learns to generate textures directly on the surface of an input 3D shape.
Our method does not require any 3D color supervision to learn to texture 3D objects.
arXiv Detail & Related papers (2022-04-05T18:00:04Z)
- Fine Detailed Texture Learning for 3D Meshes with Generative Models [33.42114674602613]
This paper presents a method to reconstruct high-quality textured 3D models from both multi-view and single-view images.
In the first stage, we focus on learning accurate geometry, whereas in the second stage, we focus on learning the texture with a generative adversarial network.
We demonstrate that our method achieves superior 3D textured models compared to the previous works.
arXiv Detail & Related papers (2022-03-17T14:50:52Z)
- 3DBooSTeR: 3D Body Shape and Texture Recovery [76.91542440942189]
3DBooSTeR is a novel method to recover a textured 3D body mesh from a partial 3D scan.
The proposed approach decouples the shape and texture completion into two sequential tasks.
arXiv Detail & Related papers (2020-10-23T21:07:59Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
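
The vertex-displacement idea in Deep Geometric Texture Synthesis (above) can be illustrated with a small module that predicts an unconstrained 3D offset for every vertex from features of its local neighborhood; the feature encoding and layer sizes here are placeholders, not the paper's architecture.

```python
# Toy per-vertex displacement network (a sketch of the idea, not the paper's model).
import torch
import torch.nn as nn

class VertexDisplacer(nn.Module):
    def __init__(self, feat_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # free 3D displacement, not just along the normal
        )

    def forward(self, vertices, local_features):
        # vertices: (n, 3); local_features: (n, feat_dim) describing each vertex's neighborhood
        return vertices + self.mlp(local_features)

# Example: displace 1000 vertices using random placeholder neighborhood features.
displaced = VertexDisplacer()(torch.randn(1000, 3), torch.randn(1000, 16))
```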
This list is automatically generated from the titles and abstracts of the papers on this site.