TextureSAM: Towards a Texture Aware Foundation Model for Segmentation
- URL: http://arxiv.org/abs/2505.16540v1
- Date: Thu, 22 May 2025 11:31:56 GMT
- Title: TextureSAM: Towards a Texture Aware Foundation Model for Segmentation
- Authors: Inbal Cohen, Boaz Meivar, Peihan Tu, Shai Avidan, Gal Oren
- Abstract summary: Segment Anything Models (SAM) have achieved remarkable success in object segmentation tasks across diverse datasets. However, SAM is predominantly trained on large-scale semantic segmentation datasets, which bias it toward object shape rather than texture. This limitation is critical in domains such as medical imaging, material classification, and remote sensing. We introduce a new texture-aware foundation model, TextureSAM, which achieves superior segmentation in texture-dominant scenarios.
- Score: 10.97856946049713
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Segment Anything Models (SAM) have achieved remarkable success in object segmentation tasks across diverse datasets. However, these models are predominantly trained on large-scale semantic segmentation datasets, which introduce a bias toward object shape rather than texture cues in the image. This limitation is critical in domains such as medical imaging, material classification, and remote sensing, where texture changes define object boundaries. In this study, we investigate SAM's bias toward semantics over textures and introduce a new texture-aware foundation model, TextureSAM, which performs superior segmentation in texture-dominant scenarios. To achieve this, we employ a novel fine-tuning approach that incorporates texture augmentation techniques, incrementally modifying training images to emphasize texture features. By leveraging a texture-altered version of the ADE20K dataset, we guide TextureSAM to prioritize texture-defined regions, thereby mitigating the inherent shape bias present in the original SAM model. Our extensive experiments demonstrate that TextureSAM significantly outperforms SAM-2 on both natural (+0.2 mIoU) and synthetic (+0.18 mIoU) texture-based segmentation datasets. The code and texture-augmented dataset will be publicly available.
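The fine-tuning recipe described in the abstract can be illustrated with a short sketch: each training image is blended with a per-region random texture, and the blend strength is ramped up over the course of fine-tuning. Everything below is a stand-in (toy model, random textures, alpha schedule); the actual TextureSAM pipeline fine-tunes SAM-2 on a texture-altered ADE20K.

```python
# Hedged sketch of incremental texture-augmented fine-tuning.
# All names here are hypothetical; this is not the TextureSAM code.
import torch
import torch.nn.functional as F

def random_texture_per_region(image, mask, num_classes):
    """Replace each labeled region with a random noise texture
    (a crude stand-in for the paper's texture alteration of ADE20K)."""
    textured = image.clone()
    for c in range(num_classes):
        region = (mask == c)
        if region.any():
            base = torch.rand(3, 1, 1) * 0.5 + 0.25          # per-region base color
            tex = base + 0.1 * torch.randn(3, *mask.shape)    # fine-grained noise on top
            textured[:, region] = tex[:, region]
    return textured

num_classes = 4
model = torch.nn.Conv2d(3, num_classes, kernel_size=3, padding=1)  # toy segmenter
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(100):
    image = torch.rand(3, 64, 64)                        # fake training image
    mask = torch.randint(0, num_classes, (64, 64))       # fake region labels
    alpha = min(1.0, step / 50)                          # incrementally stronger alteration
    textured = random_texture_per_region(image, mask, num_classes)
    blended = (1 - alpha) * image + alpha * textured
    logits = model(blended.unsqueeze(0))
    loss = F.cross_entropy(logits, mask.unsqueeze(0))
    opt.zero_grad(); loss.backward(); opt.step()
```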
Related papers
- TriTex: Learning Texture from a Single Mesh via Triplane Semantic Features [78.13246375582906]
We present a novel approach that learns a volumetric texture field from a single textured mesh by mapping semantic features to surface target colors. Our approach achieves superior texture quality across 3D models in applications like game development.
arXiv Detail & Related papers (2025-03-20T18:35:03Z) - Err on the Side of Texture: Texture Bias on Real Data [3.5990273573803058]
We introduce the Texture Association Value (TAV), a novel metric that quantifies how strongly models rely on the presence of specific textures when classifying objects. Our results show that texture bias explains the existence of natural adversarial examples, where over 90% of these samples contain textures that are misaligned with the learned texture of their true label.
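As a rough illustration of what "relying on texture" can mean in practice, the sketch below probes a classifier by patch-shuffling images (destroying shape while keeping local texture) and checking whether predictions survive. This is a generic texture-reliance probe, not the paper's TAV formula, and the classifier interface is assumed.

```python
# Generic texture-reliance probe (not the TAV definition from the paper).
import torch

def patch_shuffle(image, patch=8):
    """Randomly permute non-overlapping patches of a (C, H, W) image,
    keeping local texture statistics; assumes H and W are multiples of `patch`."""
    c, h, w = image.shape
    patches = image.unfold(1, patch, patch).unfold(2, patch, patch)   # (C, gh, gw, p, p)
    gh, gw = patches.shape[1], patches.shape[2]
    flat = patches.reshape(c, gh * gw, patch, patch)
    flat = flat[:, torch.randperm(gh * gw)]                           # shuffle patch order
    return flat.reshape(c, gh, gw, patch, patch).permute(0, 1, 3, 2, 4).reshape(c, h, w)

def texture_reliance(model, images):
    """Fraction of images whose predicted class is unchanged after shuffling."""
    with torch.no_grad():
        orig = model(images).argmax(dim=1)
        shuffled = torch.stack([patch_shuffle(x) for x in images])
        after = model(shuffled).argmax(dim=1)
    return (orig == after).float().mean().item()
```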
arXiv Detail & Related papers (2024-12-13T22:53:16Z) - NeRF-Texture: Synthesizing Neural Radiance Field Textures [77.24205024987414]
We propose a novel texture synthesis method with Neural Radiance Fields (NeRF) to capture and synthesize textures from given multi-view images. In the proposed NeRF texture representation, a scene with fine geometric details is disentangled into the meso-structure textures and the underlying base shape. We can synthesize NeRF-based textures through patch matching of latent features.
arXiv Detail & Related papers (2024-12-13T09:41:48Z) - Textured Mesh Saliency: Bridging Geometry and Texture for Human Perception in 3D Graphics [50.23625950905638]
We present a new dataset for textured mesh saliency, created through an innovative eye-tracking experiment in a six degrees of freedom (6-DOF) VR environment. Our proposed model predicts saliency maps for textured mesh surfaces by treating each triangular face as an individual unit and assigning a saliency density value to reflect the importance of each local surface region.
arXiv Detail & Related papers (2024-12-11T08:27:33Z) - Infinite Texture: Text-guided High Resolution Diffusion Texture Synthesis [61.189479577198846]
We present Infinite Texture, a method for generating arbitrarily large texture images from a text prompt.
Our approach fine-tunes a diffusion model on a single texture, and learns to embed that statistical distribution in the output domain of the model.
At generation time, our fine-tuned diffusion model is used through a score aggregation strategy to generate output texture images of arbitrary resolution on a single GPU.
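One plausible reading of the score-aggregation step is tile-wise denoising with averaging in overlapping regions, roughly as sketched below. The `denoiser` here is a placeholder, and the update rule is illustrative; it is not the paper's fine-tuned diffusion model or exact sampling procedure.

```python
# Sketch of tile-based score aggregation for arbitrary-resolution synthesis.
import numpy as np

def denoiser(tile, t):
    """Placeholder per-tile score/noise prediction from a diffusion model."""
    return np.tanh(tile) * (1.0 - t)

def aggregate_step(canvas, t, tile=64, stride=48):
    """Run the denoiser on overlapping tiles and average the scores in overlaps."""
    h, w = canvas.shape[:2]
    score = np.zeros_like(canvas)
    count = np.zeros_like(canvas)
    for y in range(0, h - tile + 1, stride):
        for x in range(0, w - tile + 1, stride):
            patch = canvas[y:y + tile, x:x + tile]
            score[y:y + tile, x:x + tile] += denoiser(patch, t)
            count[y:y + tile, x:x + tile] += 1.0
    return canvas - 0.1 * score / np.maximum(count, 1.0)   # one illustrative update step

canvas = np.random.randn(256, 512, 3)                       # arbitrary output resolution
for t in np.linspace(1.0, 0.0, 20):
    canvas = aggregate_step(canvas, t)
```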
arXiv Detail & Related papers (2024-05-13T21:53:09Z) - TextureDreamer: Image-guided Texture Synthesis through Geometry-aware
Diffusion [64.49276500129092]
TextureDreamer is an image-guided texture synthesis method.
It can transfer relightable textures from a small number of input images to target 3D shapes across arbitrary categories.
arXiv Detail & Related papers (2024-01-17T18:55:49Z) - Learning Statistical Texture for Semantic Segmentation [53.7443670431132]
We propose a novel Statistical Texture Learning Network (STLNet) for semantic segmentation.
For the first time, STLNet analyzes the distribution of low-level information and efficiently utilizes it for the task.
Based on the Quantization and Counting Operator (QCO), two modules are introduced: (1) the Texture Enhance Module (TEM), to capture texture-related information and enhance texture details; (2) the Pyramid Texture Feature Extraction Module (PTFEM), to effectively extract statistical texture features at multiple scales.
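The quantization-and-counting idea behind QCO can be sketched as a soft histogram over intensity levels; the snippet below shows only that core step under simplifying assumptions, with the full TEM and PTFEM modules omitted.

```python
# Minimal sketch of quantization and counting: soft-quantize intensities into
# L levels and count occurrences to get a statistical texture descriptor.
import torch

def qco_histogram(feat, levels=8):
    """feat: (H, W) tensor with values roughly in [0, 1].
    Returns a (levels,) descriptor that sums to ~1."""
    centers = torch.linspace(0.0, 1.0, levels)                  # quantization levels
    dist = (feat.reshape(-1, 1) - centers.reshape(1, -1)).abs() # pixel-to-level distances
    width = 1.0 / (levels - 1)
    weights = torch.clamp(1.0 - dist / width, min=0.0)          # soft assignment
    counts = weights.sum(dim=0)                                 # counting step
    return counts / counts.sum()

gray = torch.rand(64, 64)            # stand-in low-level feature map
descriptor = qco_histogram(gray)     # would feed into later texture modules
```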
arXiv Detail & Related papers (2021-03-06T15:05:35Z) - Learning Texture Invariant Representation for Domain Adaptation of
Semantic Segmentation [19.617821473205694]
It is challenging for a model trained with synthetic data to generalize to real data.
We diversify the texture of synthetic images using a style transfer algorithm.
We fine-tune the model with self-training to get direct supervision of the target texture.
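The self-training step reads roughly as follows in a hedged sketch: pseudo-label target-domain images with the current model, keep only the most confident pixels, and fine-tune on them. The model, data, and confidence rule below are placeholders, not the paper's setup.

```python
# Hedged sketch of confidence-filtered self-training for segmentation.
import torch
import torch.nn.functional as F

num_classes = 19
model = torch.nn.Conv2d(3, num_classes, kernel_size=3, padding=1)  # toy segmenter
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
IGNORE = -100                                      # label index ignored by cross_entropy

for step in range(50):
    target_image = torch.rand(1, 3, 64, 64)        # stand-in for a real-domain image
    with torch.no_grad():
        prob = model(target_image).softmax(dim=1)
        conf, pseudo = prob.max(dim=1)             # per-pixel confidence and pseudo-label
        thr = conf.quantile(0.5)                   # keep the most confident half (illustrative rule)
        pseudo[conf < thr] = IGNORE
    logits = model(target_image)
    loss = F.cross_entropy(logits, pseudo, ignore_index=IGNORE)
    opt.zero_grad(); loss.backward(); opt.step()
```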
arXiv Detail & Related papers (2020-03-02T13:11:54Z)