Retinal Image Segmentation with a Structure-Texture Demixing Network
- URL: http://arxiv.org/abs/2008.00817v1
- Date: Wed, 15 Jul 2020 12:19:03 GMT
- Title: Retinal Image Segmentation with a Structure-Texture Demixing Network
- Authors: Shihao Zhang, Huazhu Fu, Yanwu Xu, Yanxia Liu, Mingkui Tan
- Abstract summary: Complex structure and texture information are mixed in a retinal image, making the two difficult to distinguish.
Existing methods handle texture and structure jointly, which may bias models toward recognizing textures and thus result in inferior segmentation performance.
We propose a segmentation strategy that seeks to separate structure and texture components, which significantly improves performance.
- Score: 62.69128827622726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Retinal image segmentation plays an important role in automatic disease
diagnosis. This task is very challenging because the complex structure and
texture information are mixed in a retinal image, and distinguishing the
information is difficult. Existing methods handle texture and structure
jointly, which may bias models toward recognizing textures and thus
result in inferior segmentation performance. To address this, we propose a
segmentation strategy that seeks to separate structure and texture components
and significantly improve the performance. To this end, we design a
structure-texture demixing network (STD-Net) that can process structures and
textures differently and better. Extensive experiments on two retinal image
segmentation tasks (i.e., blood vessel segmentation, optic disc and cup
segmentation) demonstrate the effectiveness of the proposed method.
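The abstract does not describe STD-Net's architecture in reproducible detail, but the core idea of splitting an image into a smooth structure component and a high-frequency texture residual can be sketched with a simple low-pass filter. The function name and the Gaussian-smoothing choice below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def demix_structure_texture(image, sigma=2.0):
    """Split an image into a smooth structure component and a
    high-frequency texture residual via separable Gaussian blurring.
    Illustrative stand-in for the learned demixing in STD-Net."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()

    def blur_along(img, axis):
        # Convolve every 1-D slice along the given axis with the kernel.
        return np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, img)

    structure = blur_along(blur_along(image.astype(float), 0), 1)
    texture = image.astype(float) - structure
    return structure, texture

# The two components reconstruct the input: image = structure + texture.
img = np.random.rand(32, 32)
structure, texture = demix_structure_texture(img)
assert np.allclose(structure + texture, img)
```

A learned demixing network could then process the two components with separate branches, e.g. a vessel-sensitive branch operating on the texture residual.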
Related papers
- Synthesizing Multi-Class Surgical Datasets with Anatomy-Aware Diffusion Models [1.9085155846692308]
In computer-assisted surgery, automatically recognizing anatomical organs is crucial for understanding the surgical scene.
While machine learning models can identify such structures, their deployment is hindered by the need for labeled, diverse surgical datasets.
We introduce a multi-stage approach using diffusion models to generate multi-class surgical datasets with annotations.
arXiv Detail & Related papers (2024-10-10T09:29:23Z)
- Enhancing Cross-Modal Medical Image Segmentation through Compositionality [0.4194295877935868]
We introduce compositionality as an inductive bias in a cross-modal segmentation network to improve segmentation performance and interpretability.
The proposed network enforces compositionality on the learned representations using learnable von Mises-Fisher kernels.
The experimental results demonstrate enhanced segmentation performance and reduced computational costs on multiple medical datasets.
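The summary does not spell out how the von Mises-Fisher kernels are applied to the learned representations. One common formulation, shown here purely as an assumption, scores L2-normalized feature vectors against a mean direction with a concentration parameter:

```python
import numpy as np

def vmf_kernel(features, mu, kappa=10.0):
    """von Mises-Fisher kernel response exp(kappa * cos_sim) between
    L2-normalized feature vectors and a mean direction mu.
    Hypothetical sketch; the name and formulation are assumptions."""
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    m = mu / np.linalg.norm(mu)
    return np.exp(kappa * (f @ m))

# Features aligned with mu respond maximally; orthogonal ones give exp(0) = 1.
mu = np.array([1.0, 0.0])
resp = vmf_kernel(np.array([[2.0, 0.0], [0.0, 3.0]]), mu, kappa=5.0)
assert np.isclose(resp[0], np.exp(5.0)) and np.isclose(resp[1], 1.0)
```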
arXiv Detail & Related papers (2024-08-21T15:57:24Z)
- Systematic review of image segmentation using complex networks [1.3053649021965603]
This review presents various image segmentation methods using complex networks.
In computer vision and image processing applications, image segmentation is essential for analyzing complex images.
arXiv Detail & Related papers (2024-01-05T11:14:07Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images is fed into a multi-branch neural network equipped with attention mechanisms.
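The summary does not specify how the high-frequency components are extracted. A simple FFT-based high-pass filter (the cutoff value here is an arbitrary assumption) illustrates the kind of input such a branch might consume:

```python
import numpy as np

def high_pass_fft(image, cutoff=0.1):
    """Keep only spatial frequencies outside a centered disk of radius
    cutoff * min(H, W). Illustrative assumption, not the paper's filter."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    cy, cx = h // 2, w // 2
    # Boolean mask that zeroes out the low-frequency disk around DC.
    keep = (yy - cy) ** 2 + (xx - cx) ** 2 > (cutoff * min(h, w)) ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * keep)))

# A constant image has no high-frequency content, so the output is ~0.
hp = high_pass_fft(np.ones((32, 32)))
assert np.allclose(hp, 0.0, atol=1e-8)
```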
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Image Inpainting Guided by Coherence Priors of Semantics and Textures [62.92586889409379]
We introduce coherence priors between semantics and textures, which make it possible to complete separate textures in a semantics-aware manner.
We also propose two coherence losses to constrain the consistency between the semantics and the inpainted image in terms of the overall structure and detailed textures.
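The two coherence losses are not given explicitly in the summary. A toy version, under the assumption that the structure term is global and the texture term is evaluated per semantic region, might look like:

```python
import numpy as np

def coherence_losses(pred, target, seg_mask):
    """Toy coherence losses: a global structure term (whole-image MSE)
    plus a texture term averaged within each semantic region.
    Hypothetical sketch, not the paper's exact formulation."""
    structure_loss = np.mean((pred - target) ** 2)
    texture_loss = 0.0
    for label in np.unique(seg_mask):
        region = seg_mask == label
        texture_loss += np.mean((pred[region] - target[region]) ** 2)
    return structure_loss, texture_loss

# Identical images incur zero loss under both terms.
x = np.random.rand(8, 8)
mask = np.arange(64).reshape(8, 8) % 2
s_loss, t_loss = coherence_losses(x, x, mask)
assert s_loss == 0.0 and t_loss == 0.0
```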
arXiv Detail & Related papers (2020-12-15T02:59:37Z)
- Interactive Deep Refinement Network for Medical Image Segmentation [13.698408475104452]
We propose an interactive deep refinement framework to improve the traditional semantic segmentation networks.
In the proposed framework, we add a refinement network on top of a traditional segmentation network to refine the results.
Experimental results on a public dataset show that the proposed method achieves higher accuracy than other state-of-the-art methods.
arXiv Detail & Related papers (2020-06-27T08:24:46Z)
- Region-adaptive Texture Enhancement for Detailed Person Image Synthesis [86.69934638569815]
RATE-Net is a novel framework for synthesizing person images with sharp texture details.
The proposed framework leverages an additional texture enhancing module to extract appearance information from the source image.
Experiments on the DeepFashion benchmark dataset demonstrate the superiority of our framework over existing networks.
arXiv Detail & Related papers (2020-05-26T02:33:21Z)
- Towards Analysis-friendly Face Representation with Scalable Feature and Texture Compression [113.30411004622508]
We show that a universal and collaborative visual information representation can be achieved in a hierarchical way.
Based on the strong generative capability of deep neural networks, the gap between the base feature layer and the enhancement layer is further filled with feature-level texture reconstruction.
To improve the efficiency of the proposed framework, the base layer neural network is trained in a multi-task manner.
arXiv Detail & Related papers (2020-04-21T14:32:49Z)
- Guidance and Evaluation: Semantic-Aware Image Inpainting for Mixed Scenes [54.836331922449666]
We propose a Semantic Guidance and Evaluation Network (SGE-Net) to update the structural priors and the inpainted image.
It utilizes a semantic segmentation map as guidance at each scale of inpainting, under which location-dependent inferences are re-evaluated.
Experiments on real-world images of mixed scenes demonstrate the superiority of the proposed method over state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-15T17:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.