TopoMortar: A dataset to evaluate image segmentation methods focused on topology accuracy
- URL: http://arxiv.org/abs/2503.03365v1
- Date: Wed, 05 Mar 2025 10:42:41 GMT
- Title: TopoMortar: A dataset to evaluate image segmentation methods focused on topology accuracy
- Authors: Juan Miguel Valverde, Motoya Koga, Nijihiko Otsuka, Anders Bjorholm Dahl
- Abstract summary: TopoMortar is the first dataset specifically designed to evaluate topology-focused image segmentation methods. We show that clDice achieved the most topologically accurate segmentations on TopoMortar. We also show that simple methods, such as data augmentation and self-distillation, can elevate Cross entropy Dice loss to surpass most topology loss functions.
- Score: 0.5892638927736115
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present TopoMortar, a brick wall dataset that is the first dataset specifically designed to evaluate topology-focused image segmentation methods, such as topology loss functions. TopoMortar makes it possible to investigate in two ways whether methods incorporate prior topological knowledge. First, by eliminating challenges seen in real-world data, such as small training sets, noisy labels, and out-of-distribution test-set images, which, as we show, impact the effectiveness of topology losses. Second, by allowing topology accuracy to be assessed across dataset challenges within the same dataset, isolating dataset-related effects from the effect of incorporating prior topological knowledge. In these two experiments, it is deliberately difficult to improve topology accuracy without actually using topology information, thus permitting an improvement in topology accuracy to be attributed to the incorporation of prior topological knowledge. To this end, TopoMortar includes three types of labels (accurate, noisy, pseudo-labels), two fixed training sets (large and small), and in-distribution and out-of-distribution test-set images. We compared eight loss functions on TopoMortar and found that clDice achieved the most topologically accurate segmentations, that Skeleton Recall loss performed best particularly with noisy labels, and that the relative advantage of the other loss functions depended on the experimental setting. Additionally, we show that simple methods, such as data augmentation and self-distillation, can elevate Cross entropy Dice loss to surpass most topology loss functions, and that those simple methods can enhance topology loss functions as well. clDice and Skeleton Recall loss, both skeletonization-based loss functions, were also the fastest to train, making this type of loss function a promising research direction. TopoMortar and our code can be found at https://github.com/jmlipman/TopoMortar
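To illustrate what a skeletonization-based measure like clDice captures, here is a minimal NumPy sketch of the clDice score computed from precomputed binary skeletons. Note that the actual clDice loss uses a differentiable soft skeletonization so it can be backpropagated; the function and toy masks below are an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def cl_dice(pred, gt, skel_pred, skel_gt, eps=1e-8):
    """Centerline Dice (clDice) score from precomputed binary skeletons.

    pred, gt           : binary segmentation masks, shape (H, W)
    skel_pred, skel_gt : binary skeletons (centerlines) of pred and gt
    """
    # Topology precision: fraction of the predicted skeleton inside the GT mask.
    tprec = (skel_pred * gt).sum() / (skel_pred.sum() + eps)
    # Topology sensitivity: fraction of the GT skeleton covered by the prediction.
    tsens = (skel_gt * pred).sum() / (skel_gt.sum() + eps)
    return 2 * tprec * tsens / (tprec + tsens + eps)

# Toy example: a 3-pixel-wide horizontal bar whose skeleton is its centerline.
gt = np.zeros((7, 7), dtype=np.uint8)
gt[2:5, :] = 1
skel_gt = np.zeros_like(gt)
skel_gt[3, :] = 1

pred = gt.copy()            # perfect prediction
skel_pred = skel_gt.copy()
print(cl_dice(pred, gt, skel_pred, skel_gt))  # ~1.0
```

A prediction that breaks the bar in the middle would keep a high pixel-wise Dice but lose clDice, which is why this family of losses rewards preserving connectivity.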
Related papers
- Topograph: An efficient Graph-Based Framework for Strictly Topology Preserving Image Segmentation [78.54656076915565]
Topological correctness plays a critical role in many image segmentation tasks.
Most networks are trained using pixel-wise loss functions, such as Dice, neglecting topological accuracy.
We propose a novel, graph-based framework for topologically accurate image segmentation.
arXiv Detail & Related papers (2024-11-05T16:20:14Z) - Robust Loss Functions for Object Grasping under Limited Ground Truth [3.794161613920474]
We deal with missing or noisy ground truth while training convolutional neural networks.
For missing ground truth, a new predicted category probability method is defined for unlabeled samples.
For noisy ground truth, a symmetric loss function is introduced to resist the corruption of label noises.
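One widely used symmetric loss of this kind is symmetric cross entropy, which adds a reverse cross-entropy term (with a clipped log of the one-hot target) to resist label noise. The sketch below illustrates that general idea and is not necessarily the formulation used in this paper:

```python
import numpy as np

def symmetric_cross_entropy(probs, y, alpha=0.1, beta=1.0, log_clip=-4.0):
    """Symmetric cross entropy: alpha * CE(y, p) + beta * reverse-CE(p, y).

    probs    : (N, C) predicted class probabilities
    y        : (N,) integer labels
    log_clip : value substituted for log(0) in the reverse term
    """
    n = probs.shape[0]
    p_y = probs[np.arange(n), y]
    # Standard cross entropy on the correct class.
    ce = -np.mean(np.log(np.clip(p_y, 1e-12, 1.0)))
    # Reverse CE: -sum_c p_c * log(q_c) with one-hot q; log(0) -> log_clip,
    # which simplifies to -log_clip * (1 - p_correct).
    rce = np.mean(-(1.0 - p_y) * log_clip)
    return alpha * ce + beta * rce
```

The reverse term penalizes confident probability mass on wrong classes symmetrically, which bounds the gradient contribution of mislabeled samples.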
arXiv Detail & Related papers (2024-09-09T15:56:34Z) - Enhancing Boundary Segmentation for Topological Accuracy with Skeleton-based Methods [7.646983689651424]
Topological consistency plays a crucial role in the task of boundary segmentation for reticular images.
We propose the Skea-Topo Aware loss, which is a novel loss function that takes into account the shape of each object and topological significance of the pixels.
Experiments prove that our method improves topological consistency by up to 7 points in VI compared to 13 state-of-the-art methods.
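The VI score mentioned above is Variation of Information, an information-theoretic distance between two partitions (here, two label maps) that is zero only when they induce the same segmentation. A minimal NumPy sketch of the standard definition, illustrative rather than the paper's evaluation code:

```python
import numpy as np

def variation_of_information(seg_a, seg_b):
    """VI(A, B) = H(A) + H(B) - 2*I(A; B), lower is better (0 iff identical partitions)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # Joint distribution over (label_a, label_b) pairs.
    pairs, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    p_ab = counts / n
    # Marginal distributions.
    vals_a, ca = np.unique(a, return_counts=True)
    vals_b, cb = np.unique(b, return_counts=True)
    p_a, p_b = ca / n, cb / n
    h_a = -np.sum(p_a * np.log2(p_a))
    h_b = -np.sum(p_b * np.log2(p_b))
    idx_a = {v: i for i, v in enumerate(vals_a)}
    idx_b = {v: i for i, v in enumerate(vals_b)}
    mi = 0.0
    for (va, vb), p in zip(pairs.T, p_ab):
        mi += p * np.log2(p / (p_a[idx_a[va]] * p_b[idx_b[vb]]))
    return h_a + h_b - 2 * mi
```

Because VI compares partitions rather than pixel labels, a prediction that splits or merges regions is penalized even when most pixels agree, which makes it a natural metric for topological consistency.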
arXiv Detail & Related papers (2024-04-29T09:27:31Z) - Enhancing Noise-Robust Losses for Large-Scale Noisy Data Learning [0.0]
Large annotated datasets inevitably contain noisy labels, which poses a major challenge for training deep neural networks, as they easily memorize the labels. Noise-robust loss functions have emerged as a notable strategy to counteract this issue, but it remains challenging to create a robust loss function that is not susceptible to underfitting. We propose a novel method, denoted logit bias, which adds a real number $\epsilon$ to the logit at the position of the correct class.
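Based only on the abstract's description, a minimal sketch of such a logit bias applied before softmax cross entropy might look as follows; the function name, default value of $\epsilon$, and exact placement are assumptions for illustration:

```python
import numpy as np

def logit_bias_loss(logits, y, eps=2.0):
    """Cross entropy where eps is added to the correct-class logit (the 'logit bias')."""
    z = logits.astype(float).copy()
    n = z.shape[0]
    z[np.arange(n), y] += eps                  # bias the correct-class logit
    z -= z.max(axis=1, keepdims=True)          # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(log_probs[np.arange(n), y])
```

With eps=0 this reduces to ordinary softmax cross entropy; a positive bias flattens the loss for samples the model already gets right, reducing the incentive to memorize noisy labels.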
arXiv Detail & Related papers (2023-06-08T18:38:55Z) - Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation [151.70234052015948]
We propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.
We show that the weights trained on synthetic data are robust against the accumulated errors perturbations with the regularization towards the flat trajectory.
Our method, called Flat Trajectory Distillation (FTD), is shown to boost the performance of gradient-matching methods by up to 4.7%.
arXiv Detail & Related papers (2022-11-20T15:49:11Z) - BuyTheDips: PathLoss for improved topology-preserving deep learning-based image segmentation [1.8899300124593648]
We propose a new deep image segmentation method which relies on a new leakage loss: the Pathloss.
Our method outperforms state-of-the-art topology-aware methods on two representative datasets of different natures.
arXiv Detail & Related papers (2022-07-23T07:19:30Z) - Image Segmentation with Homotopy Warping [10.093435601073484]
Topological correctness is crucial for the segmentation of images with fine-scale structures.
By leveraging the theory of digital topology, we identify locations in an image that are critical for topology.
We propose a new homotopy warping loss to train deep image segmentation networks for better topological accuracy.
arXiv Detail & Related papers (2021-12-15T00:33:15Z) - Searching for Robustness: Loss Learning for Noisy Classification Tasks [81.70914107917551]
We parameterize a flexible family of loss functions using Taylor expansions and apply evolutionary strategies to search for noise-robust losses in this space.
The resulting white-box loss provides a simple and fast "plug-and-play" module that enables effective noise-robust learning in diverse downstream tasks.
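A Taylor-parameterized loss family can be sketched as a polynomial in (1 - p_correct); with coefficients 1/i this recovers the Taylor series of -log(p) around p = 1, so standard cross entropy is one point in the search space. The parameterization below is an illustrative assumption, not the paper's code:

```python
import numpy as np

def taylor_loss(probs, y, theta):
    """Loss family parameterized by Taylor coefficients theta:

        L = mean over samples of sum_i theta[i] * (1 - p_correct)**(i + 1)

    theta = [1, 1/2, 1/3, ...] recovers the expansion of -log(p_correct).
    Searching over theta (e.g. with evolutionary strategies) explores
    noise-robust variants of cross entropy.
    """
    p_y = probs[np.arange(len(y)), y]
    u = 1.0 - p_y
    powers = np.stack([u ** (i + 1) for i in range(len(theta))])
    return float(np.mean(np.asarray(theta) @ powers))
```

Truncating or damping the higher-order coefficients bounds the loss for very wrong predictions, which is one intuition for why searched losses in this family can be robust to label noise.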
arXiv Detail & Related papers (2021-02-27T15:27:22Z) - Loss Function Discovery for Object Detection via Convergence-Simulation Driven Search [101.73248560009124]
We propose an effective convergence-simulation driven evolutionary search algorithm, CSE-Autoloss, for speeding up the search progress.
We conduct extensive evaluations of loss function search on popular detectors and validate the good generalization capability of searched losses.
Our experiments show that the best-discovered loss function combinations outperform default combinations by 1.1% and 0.8% in terms of mAP for two-stage and one-stage detectors.
arXiv Detail & Related papers (2021-02-09T08:34:52Z) - Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z) - Self-Learning with Rectification Strategy for Human Parsing [73.06197841003048]
We propose a trainable graph reasoning method to correct two typical errors in the pseudo-labels.
The reconstructed features have a stronger ability to represent the topology structure of the human body.
Our method outperforms other state-of-the-art methods in supervised human parsing tasks.
arXiv Detail & Related papers (2020-04-17T03:51:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.