DenseSeg: Joint Learning for Semantic Segmentation and Landmark Detection Using Dense Image-to-Shape Representation
- URL: http://arxiv.org/abs/2405.19746v1
- Date: Thu, 30 May 2024 06:49:59 GMT
- Title: DenseSeg: Joint Learning for Semantic Segmentation and Landmark Detection Using Dense Image-to-Shape Representation
- Authors: Ron Keuth, Lasse Hansen, Maren Balks, Ronja Jäger, Anne-Nele Schröder, Ludger Tüshaus, Mattias Heinrich
- Abstract summary: We propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation.
Our method intuitively allows the extraction of arbitrary landmarks due to its representation of anatomical correspondences.
- Score: 1.342749532731493
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Purpose: Semantic segmentation and landmark detection are fundamental tasks of medical image processing, facilitating further analysis of anatomical objects. Although deep learning-based pixel-wise classification has set a new state of the art for segmentation, it falls short in landmark detection, a strength of shape-based approaches. Methods: In this work, we propose a dense image-to-shape representation that enables the joint learning of landmarks and semantic segmentation by employing a fully convolutional architecture. Our method intuitively allows the extraction of arbitrary landmarks due to its representation of anatomical correspondences. We benchmark our method against the state-of-the-art for semantic segmentation (nnUNet), a shape-based approach employing geometric deep learning, and a CNN-based method for landmark detection. Results: We evaluate our method on two medical datasets: one common benchmark featuring the lungs, heart, and clavicle from thorax X-rays, and another with 17 different bones in the paediatric wrist. While our method is on par with the landmark detection baseline in the thorax setting (error in mm of $2.6\pm0.9$ vs $2.7\pm0.9$), it substantially surpasses it in the more complex wrist setting ($1.1\pm0.6$ vs $1.9\pm0.5$). Conclusion: We demonstrate that dense geometric shape representation is beneficial for challenging landmark detection tasks and outperforms the previous state of the art based on heatmap regression. Moreover, it does not require explicit training on the landmarks themselves, allowing new landmarks to be added without retraining.
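No reference implementation is given here, but the core idea of reading arbitrary landmarks out of a dense per-pixel image-to-shape (UV) correspondence map can be illustrated with a short sketch. Everything below (the function name, the tensor layout, the dummy all-foreground mask) is a hypothetical stand-in for illustration, not the authors' code:

```python
import numpy as np

def extract_landmark(uv_pred, seg_mask, landmark_uv):
    """Read an arbitrary landmark out of a dense image-to-shape (UV) prediction.

    uv_pred:     (2, H, W) array, per-pixel coordinates in a canonical shape space
    seg_mask:    (H, W) boolean foreground mask for the anatomical object
    landmark_uv: (2,) coordinates of the desired landmark in that shape space

    Returns the (row, col) image position whose predicted shape coordinate
    is closest to the requested landmark.
    """
    ys, xs = np.nonzero(seg_mask)                 # candidate foreground pixels
    uv = uv_pred[:, ys, xs].T                     # (N, 2) predicted shape coordinates
    d2 = np.sum((uv - landmark_uv[None, :]) ** 2, axis=1)
    best = np.argmin(d2)                          # pixel with the best correspondence
    return ys[best], xs[best]

# Hypothetical usage: a fully convolutional network would supply uv_pred and seg_mask;
# landmark_uv would come from annotating the landmark once on the canonical shape.
H, W = 256, 256
uv_pred = np.random.rand(2, H, W).astype(np.float32)
seg_mask = np.ones((H, W), dtype=bool)
print(extract_landmark(uv_pred, seg_mask, np.array([0.25, 0.75])))
```

Because a landmark is defined purely by its coordinate in the canonical shape space, new landmarks can be queried in this way without retraining the network, which is the property the conclusion refers to.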
Related papers
- Train-Free Segmentation in MRI with Cubical Persistent Homology [0.0]
We describe a new general method for segmentation in MRI scans using Topological Data Analysis (TDA).
It works in three steps: first identifying the whole object to segment via automatic thresholding, then detecting a distinctive subset whose topology is known in advance, and finally deducing the various components of the segmentation.
We study the examples of glioblastoma segmentation in brain MRI, where a sphere is to be detected, as well as myocardium in cardiac MRI, involving a cylinder, and cortical plate detection in fetal brain MRI, whose 2D slices are circles.
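As a rough, train-free sketch of the three-step recipe summarized above (assuming SciPy and scikit-image are available; the Otsu threshold, the largest-component heuristic, and the hole-filling step are placeholders for the paper's actual topology-driven tests):

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def three_step_segmentation(volume):
    """Sketch of a train-free, topology-driven segmentation in three steps."""
    # Step 1: automatic thresholding (Otsu) isolates the whole object.
    mask = volume > threshold_otsu(volume)

    # Step 2: keep the largest connected component as the topologically
    # "known" core (a real TDA pipeline would test persistent homology here).
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask), np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    core = labels == (1 + int(np.argmax(sizes)))

    # Step 3: deduce the remaining structures, e.g. the cavity enclosed
    # by the core (fill holes, then subtract the core itself).
    cavity = ndimage.binary_fill_holes(core) & ~core
    return core, cavity

# Hypothetical usage on a toy 3D volume.
vol = np.random.rand(32, 32, 32)
core, cavity = three_step_segmentation(vol)
```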
arXiv Detail & Related papers (2024-01-02T11:43:49Z)
- Shape Matters: Detecting Vertebral Fractures Using Differentiable Point-Based Shape Decoding [51.38395069380457]
Degenerative spinal pathologies are highly prevalent among the elderly population.
Timely diagnosis of osteoporotic fractures and other degenerative deformities facilitates proactive measures to mitigate the risk of severe back pain and disability.
In this study, we specifically explore the use of shape auto-encoders for vertebrae.
arXiv Detail & Related papers (2023-12-08T18:11:22Z)
- UAE: Universal Anatomical Embedding on Multi-modality Medical Images [7.589247017940839]
We propose universal anatomical embedding (UAE) to learn appearance, semantic, and cross-modality anatomical embeddings.
UAE incorporates three key innovations: (1) semantic embedding learning with prototypical contrastive loss; (2) a fixed-point-based matching strategy; and (3) an iterative approach for cross-modality embedding learning.
Our results suggest that UAE outperforms state-of-the-art methods, offering a robust and versatile approach for landmark-based medical image analysis tasks.
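The fixed-point-based matching strategy can be pictured as alternating nearest-neighbour look-ups between two dense embedding maps until the match stops moving. The sketch below is only an interpretation of the abstract with assumed tensor shapes and random embeddings, not UAE's actual code:

```python
import numpy as np

def fixed_point_match(emb_a, emb_b, start_xy, iters=10):
    """Toy fixed-point matching between two dense embedding maps.

    emb_a, emb_b: (C, H, W) L2-normalised per-pixel embeddings of images A and B
    start_xy:     (row, col) query location in image A
    Alternately map A->B and B->A by nearest cosine similarity until the
    position in A stops changing (a fixed point) or the budget runs out.
    """
    C, H, W = emb_a.shape
    flat_a = emb_a.reshape(C, -1)
    flat_b = emb_b.reshape(C, -1)

    def nearest(query_vec, flat):
        idx = int(np.argmax(query_vec @ flat))   # cosine similarity (embeddings normalised)
        return divmod(idx, W)                    # back to (row, col)

    pos_a = start_xy
    for _ in range(iters):
        pos_b = nearest(emb_a[:, pos_a[0], pos_a[1]], flat_b)   # A -> B
        new_a = nearest(emb_b[:, pos_b[0], pos_b[1]], flat_a)   # B -> A
        if new_a == pos_a:                                      # converged: fixed point
            break
        pos_a = new_a
    return pos_a, pos_b

# Hypothetical usage with random, normalised embeddings.
rng = np.random.default_rng(0)
e_a = rng.standard_normal((16, 64, 64)); e_a /= np.linalg.norm(e_a, axis=0, keepdims=True)
e_b = rng.standard_normal((16, 64, 64)); e_b /= np.linalg.norm(e_b, axis=0, keepdims=True)
print(fixed_point_match(e_a, e_b, (10, 20)))
```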
arXiv Detail & Related papers (2023-11-25T20:01:20Z)
- Weakly supervised segmentation with point annotations for histopathology images via contrast-based variational model [7.021021047695508]
We propose a contrast-based variational model to generate segmentation results for histopathology images.
The proposed method considers the common characteristics of target regions in histopathology images and can be trained in an end-to-end manner.
It can generate more regionally consistent segmentations with smoother boundaries, and is more robust to unlabeled 'novel' regions.
arXiv Detail & Related papers (2023-04-07T10:12:21Z)
- Self-Supervised Correction Learning for Semi-Supervised Biomedical Image Segmentation [84.58210297703714]
We propose a self-supervised correction learning paradigm for semi-supervised biomedical image segmentation.
We design a dual-task network, including a shared encoder and two independent decoders for segmentation and lesion region inpainting.
Experiments on three medical image segmentation datasets for different tasks demonstrate the outstanding performance of our method.
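A schematic of the shared-encoder, dual-decoder layout might look as follows in PyTorch; the layer sizes and depths are placeholders, and the real network would add skip connections, the masking scheme, and the semi-supervised training loop:

```python
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    """Shared encoder + two independent decoders (segmentation / inpainting).

    A deliberately tiny stand-in for the dual-task design described above.
    """
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                      # shared features
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_decoder = nn.Sequential(                  # task 1: segmentation logits
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, n_classes, 1),
        )
        self.inpaint_decoder = nn.Sequential(              # task 2: reconstruct masked region
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, in_ch, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_decoder(feats), self.inpaint_decoder(feats)

# Hypothetical usage: both outputs share the encoder's representation.
seg_logits, recon = DualTaskNet()(torch.randn(2, 1, 128, 128))
print(seg_logits.shape, recon.shape)   # (2, 2, 128, 128), (2, 1, 128, 128)
```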
arXiv Detail & Related papers (2023-01-12T08:19:46Z)
- Learning of Inter-Label Geometric Relationships Using Self-Supervised Learning: Application To Gleason Grade Segmentation [4.898744396854313]
We propose a method to synthesize PCa histopathology images by learning the geometrical relationship between different disease labels.
We use a weakly supervised segmentation approach that uses Gleason score to segment the diseased regions.
The resulting segmentation map is used to train a Shape Restoration Network (ShaRe-Net) to predict missing mask segments.
arXiv Detail & Related papers (2021-10-01T13:47:07Z)
- Weakly Supervised Deep Nuclei Segmentation Using Partial Points Annotation in Histopathology Images [51.893494939675314]
We propose a novel weakly supervised segmentation framework based on partial points annotation.
We show that our method can achieve competitive performance compared to the fully supervised counterpart and the state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T15:41:29Z)
- Weakly-Supervised Segmentation for Disease Localization in Chest X-Ray Images [0.0]
We propose a novel approach to the semantic segmentation of medical chest X-ray images with only image-level class labels as supervision.
We show that this approach is applicable to chest X-rays for detecting an anomalous volume of air between the lung and the chest wall.
arXiv Detail & Related papers (2020-07-01T20:48:35Z)
- Structured Landmark Detection via Topology-Adapting Deep Graph Learning [75.20602712947016]
We present a new topology-adapting deep graph learning approach for accurate anatomical facial and medical landmark detection.
The proposed method constructs graph signals leveraging both local image features and global shape features.
Experiments are conducted on three public facial image datasets (WFLW, 300W, and COFW-68) as well as three real-world X-ray medical datasets (Cephalometric (public), Hand, and Pelvis).
arXiv Detail & Related papers (2020-04-17T11:55:03Z)
- Unsupervised Learning of Landmarks based on Inter-Intra Subject Consistencies [72.67344725725961]
We present a novel unsupervised learning approach to image landmark discovery by incorporating the inter-subject landmark consistencies on facial images.
This is achieved via an inter-subject mapping module that transforms original subject landmarks based on an auxiliary subject-related structure.
To recover from the transformed images back to the original subject, the landmark detector is forced to learn spatial locations that contain the consistent semantic meanings both for the paired intra-subject images and between the paired inter-subject images.
arXiv Detail & Related papers (2020-04-16T20:38:16Z)
- Vertebra-Focused Landmark Detection for Scoliosis Assessment [54.24477530836629]
We propose a novel vertebra-focused landmark detection method.
Our model first localizes the vertebra centers, based on which it then traces the four corner landmarks of the vertebra through the learned corner offset.
Results demonstrate the merits of our method in both Cobb angle measurement and landmark detection on low-contrast and ambiguous X-ray images.
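The decoding step described above (peaks in a center heatmap plus four learned corner offsets per center) could be sketched roughly as follows; the tensor layout, the naive top-k selection without non-maximum suppression, and the choice of 17 vertebrae are assumptions for illustration only:

```python
import numpy as np

def decode_corners(center_heatmap, corner_offsets, top_k=17):
    """Decode vertebra corner landmarks from a center heatmap and offset maps.

    center_heatmap: (H, W) per-pixel probability of being a vertebra center
    corner_offsets: (8, H, W) learned (dx, dy) offsets to the 4 corners,
                    read out at each predicted center
    Returns an array of shape (top_k, 4, 2) with corner coordinates.
    """
    H, W = center_heatmap.shape
    flat = center_heatmap.ravel()
    centers = np.argsort(flat)[-top_k:]                 # K strongest centers (no NMS here)
    corners = []
    for idx in centers:
        cy, cx = divmod(int(idx), W)
        offs = corner_offsets[:, cy, cx].reshape(4, 2)  # 4 corners x (dx, dy)
        corners.append(offs + np.array([cx, cy]))       # corner = center + learned offset
    return np.stack(corners)

# Hypothetical usage with random network outputs.
heat = np.random.rand(256, 128)
offsets = np.random.randn(8, 256, 128) * 5
print(decode_corners(heat, offsets).shape)   # (17, 4, 2)
```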
arXiv Detail & Related papers (2020-01-09T19:17:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.