Structural and Statistical Texture Knowledge Distillation and Learning for Segmentation
- URL: http://arxiv.org/abs/2503.08043v2
- Date: Sun, 16 Mar 2025 11:26:26 GMT
- Title: Structural and Statistical Texture Knowledge Distillation and Learning for Segmentation
- Authors: Deyi Ji, Feng Zhao, Hongtao Lu, Feng Wu, Jieping Ye
- Abstract summary: We re-emphasize the low-level texture information in deep networks for semantic segmentation and related knowledge distillation tasks. We propose a novel Structural and Statistical Texture Knowledge Distillation (SSTKD) framework for semantic segmentation. Specifically, a Contourlet Decomposition Module (CDM) is introduced to decompose the low-level features, and a Texture Intensity Equalization Module (TIEM) is designed to extract and enhance the statistical texture knowledge.
- Score: 70.15341084443236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-level texture features/knowledge are also of vital importance for characterizing local structural patterns and global statistical properties, such as boundary, smoothness, regularity, and color contrast, which may not be well addressed by high-level deep features. In this paper, we aim to re-emphasize low-level texture information in deep networks for semantic segmentation and related knowledge distillation tasks. To this end, we take full advantage of both structural and statistical texture knowledge and propose a novel Structural and Statistical Texture Knowledge Distillation (SSTKD) framework for semantic segmentation. Specifically, a Contourlet Decomposition Module (CDM) is introduced to decompose the low-level features with an iterative Laplacian pyramid and a directional filter bank to mine structural texture knowledge, and a Texture Intensity Equalization Module (TIEM) is designed to extract and enhance statistical texture knowledge with the corresponding Quantization Congruence Loss (QDL). Moreover, we propose the Co-occurrence TIEM (C-TIEM) and two generic segmentation frameworks, namely STLNet++ and U-SSNet, to enable existing segmentation networks to harvest structural and statistical texture information more effectively. Extensive experimental results on three segmentation tasks demonstrate the effectiveness of the proposed methods and their state-of-the-art performance on seven popular benchmark datasets.
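As a concrete reading of the structural-texture side of the abstract, below is a minimal PyTorch sketch of a contourlet-style decomposition: an iterative Laplacian pyramid produces band-pass levels, and a small directional filter bank extracts orientation-wise responses from each level. This is an illustration under stated assumptions, not the authors' CDM: the class names, the number of levels and directions, and the use of plain grouped convolutions in place of true orientation-selective contourlet filters are all invented here.

```python
# Illustrative sketch only (PyTorch): a contourlet-like decomposition built from an
# iterative Laplacian pyramid followed by a small directional filter bank.
# Names and hyperparameters (num_levels, num_directions) are assumptions, not SSTKD code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LaplacianPyramid(nn.Module):
    """Iteratively splits a feature map into band-pass (detail) levels plus a coarse residual."""

    def __init__(self, num_levels: int = 3):
        super().__init__()
        self.num_levels = num_levels

    def forward(self, x):
        bands = []
        current = x
        for _ in range(self.num_levels):
            down = F.avg_pool2d(current, kernel_size=2)            # low-pass + decimate
            up = F.interpolate(down, size=current.shape[-2:],
                               mode="bilinear", align_corners=False)
            bands.append(current - up)                             # band-pass residual
            current = down
        bands.append(current)                                      # coarsest low-pass level
        return bands


class DirectionalFilterBank(nn.Module):
    """Grouped convolutions standing in for orientation-selective directional filters."""

    def __init__(self, channels: int, num_directions: int = 4):
        super().__init__()
        self.filters = nn.ModuleList(
            [nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels)
             for _ in range(num_directions)]
        )

    def forward(self, band):
        # Stack per-direction responses: B x D x C x H x W
        return torch.stack([f(band) for f in self.filters], dim=1)


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)                  # low-level backbone features
    pyramid = LaplacianPyramid(num_levels=3)
    dfb = DirectionalFilterBank(channels=64, num_directions=4)
    bands = pyramid(feats)
    structural_texture = [dfb(b) for b in bands[:-1]]   # directional sub-bands per level
    print([t.shape for t in structural_texture])
```

A companion sketch of the statistical side (soft quantization and counting of feature intensities, in the spirit of TIEM and the QCO-based STLNet work) is given after the related papers list below.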
Related papers
- SAFT: Structure-aware Transformers for Textual Interaction Classification [15.022958096869734]
Textual interaction networks (TINs) are an omnipresent data structure used to model the interplay between users and items on e-commerce websites, social networks, etc.
We propose SAFT, a new architecture that integrates language- and graph-based modules for the effective fusion of textual and structural semantics in the representation learning of interactions.
arXiv Detail & Related papers (2025-04-07T09:19:12Z) - HELPNet: Hierarchical Perturbations Consistency and Entropy-guided Ensemble for Scribble Supervised Medical Image Segmentation [4.034121387622003]
We propose HELPNet, a novel scribble-based weakly supervised segmentation framework.
HELPNet integrates three modules to bridge the gap between annotation efficiency and segmentation performance.
HELPNet significantly outperforms state-of-the-art methods for scribble-based weakly supervised segmentation.
arXiv Detail & Related papers (2024-12-25T01:52:01Z) - Structure-aware Domain Knowledge Injection for Large Language Models [38.08691252042949]
StructTuning is a methodology to transform Large Language Models (LLMs) into domain specialists.
It reduces the required training corpus to a mere 5% while achieving 100% of the performance of traditional knowledge injection.
arXiv Detail & Related papers (2024-07-23T12:38:48Z) - Structural and Statistical Texture Knowledge Distillation for Semantic
Segmentation [72.67912031720358]
We propose a novel Structural and Statistical Texture Knowledge Distillation (SSTKD) framework for semantic segmentation.
For structural texture knowledge, we introduce a Contourlet Decomposition Module (CDM) that decomposes low-level features.
For statistical texture knowledge, we propose a Denoised Texture Intensity Equalization Module (DTIEM) to adaptively extract and enhance statistical texture knowledge.
arXiv Detail & Related papers (2023-05-06T06:01:11Z) - Semantic-aware Texture-Structure Feature Collaboration for Underwater
Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z) - Learning Statistical Texture for Semantic Segmentation [53.7443670431132]
We propose a novel Statistical Texture Learning Network (STLNet) for semantic segmentation.
For the first time, STLNet analyzes the distribution of low-level information and efficiently utilizes it for the task.
Based on a Quantization and Counting Operator (QCO), two modules are introduced: (1) a Texture Enhance Module (TEM), to capture texture-related information and enhance the texture details; (2) a Pyramid Texture Feature Extraction Module (PTFEM), to effectively extract statistical texture features from multiple scales.
arXiv Detail & Related papers (2021-03-06T15:05:35Z) - Retinal Image Segmentation with a Structure-Texture Demixing Network [62.69128827622726]
The complex structure and texture information are mixed in a retinal image, and distinguishing the information is difficult.
Existing methods handle texture and structure jointly, which may bias models toward recognizing textures and thus result in inferior segmentation performance.
We propose a segmentation strategy that seeks to separate the structure and texture components and significantly improves performance.
arXiv Detail & Related papers (2020-07-15T12:19:03Z) - Towards Analysis-friendly Face Representation with Scalable Feature and
Texture Compression [113.30411004622508]
We show that a universal and collaborative visual information representation can be achieved in a hierarchical way.
Based on the strong generative capability of deep neural networks, the gap between the base feature layer and the enhancement layer is further filled with feature-level texture reconstruction.
To improve the efficiency of the proposed framework, the base layer neural network is trained in a multi-task manner.
arXiv Detail & Related papers (2020-04-21T14:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.