Seg-metrics: a Python package to compute segmentation metrics
- URL: http://arxiv.org/abs/2403.07884v1
- Date: Fri, 12 Jan 2024 16:30:54 GMT
- Title: Seg-metrics: a Python package to compute segmentation metrics
- Authors: Jingnan Jia, Marius Staring, Berend C. Stoel
- Abstract summary: seg-metrics is an open-source Python package for standardized MIS model evaluation.
seg-metrics supports multiple file formats and is easily installable through the Python Package Index (PyPI).
- Score: 0.6827423171182151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In response to a concerning trend of selectively emphasizing metrics in medical image segmentation (MIS) studies, we introduce seg-metrics, an open-source Python package for standardized MIS model evaluation. Unlike existing packages, seg-metrics offers user-friendly interfaces for various overlap-based and distance-based metrics, providing a comprehensive solution. seg-metrics supports multiple file formats and is easily installable through the Python Package Index (PyPI). With a focus on speed and convenience, seg-metrics stands as a valuable tool for efficient MIS model assessment.
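To make the abstract concrete, here is a minimal sketch of two of the overlap-based metrics the package covers, Dice and Jaccard, computed directly on binary masks. This is illustrative NumPy code, not the seg-metrics API; the function names are ours.

```python
# Illustrative sketch (NOT the seg-metrics API): two overlap-based
# segmentation metrics computed on binary masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    """Jaccard index (IoU): |A∩B| / |A∪B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

gt = np.zeros((8, 8), dtype=bool); gt[2:6, 2:6] = True      # 16 voxels
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True  # 16 voxels, 12 shared
print(dice(pred, gt))     # 2*12/32 = 0.75
print(jaccard(pred, gt))  # 12/20 = 0.6
```

The actual package additionally provides distance-based metrics (e.g. Hausdorff-type surface distances), which require the mask geometry rather than voxel counts alone.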
Related papers
- We Need to Talk About Classification Evaluation Metrics in NLP [34.73017509294468]
In Natural Language Processing (NLP), model generalizability is generally measured with standard metrics such as Accuracy, F-Measure, or AUC-ROC.
The diversity of metrics, and the arbitrariness of their application suggest that there is no agreement within NLP on a single best metric to use.
We demonstrate that a random-guess normalised Informedness metric is a parsimonious baseline for task performance.
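The Informedness metric mentioned above (also known as Youden's J) is simply sensitivity plus specificity minus one, so a random guesser scores 0 and a perfect classifier scores 1. A minimal sketch, with illustrative function names of our own:

```python
# Sketch of the Informedness (Youden's J) metric: a random-guess-normalised
# score where chance performance maps to 0 and perfection to 1.
def informedness(tp: int, fn: int, tn: int, fp: int) -> float:
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity + specificity - 1

# A coin-flip classifier: half of each class labelled positive at random.
print(informedness(tp=50, fn=50, tn=50, fp=50))    # 0.0
# A perfect classifier:
print(informedness(tp=100, fn=0, tn=100, fp=0))    # 1.0
```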
arXiv Detail & Related papers (2024-01-08T11:40:48Z)
- Panoptica -- instance-wise evaluation of 3D semantic and instance segmentation maps [9.140078680017046]
panoptica is a versatile and performance-optimized package for computing instance-wise segmentation quality metrics from 2D and 3D segmentation maps.
panoptica is open-source, implemented in Python, and accompanied by comprehensive documentation and tutorials.
arXiv Detail & Related papers (2023-12-05T09:34:56Z)
- SMART: Sentences as Basic Units for Text Evaluation [48.5999587529085]
In this paper, we introduce a new metric called SMART to mitigate such limitations.
We treat sentences as basic units of matching instead of tokens, and use a sentence matching function to soft-match candidate and reference sentences.
Our results show that the system-level correlations of our proposed metric with a model-based matching function outperform those of all competing metrics.
arXiv Detail & Related papers (2022-08-01T17:58:05Z)
- DADApy: Distance-based Analysis of DAta-manifolds in Python [51.37841707191944]
DADApy is a python software package for analysing and characterising high-dimensional data.
It provides methods for estimating the intrinsic dimension and the probability density, for performing density-based clustering and for comparing different distance metrics.
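One of the intrinsic-dimension estimators implemented in DADApy is Two-NN, which infers the dimension from the ratio of each point's second- to first-nearest-neighbour distance. A minimal sketch under that assumption, with illustrative names (this is not DADApy's API):

```python
# Minimal sketch of the Two-NN intrinsic-dimension estimator
# (one of the estimators DADApy implements); names are illustrative.
import numpy as np

def two_nn_id(X: np.ndarray) -> float:
    """Maximum-likelihood ID estimate: N / sum_i log(r2_i / r1_i)."""
    # full pairwise Euclidean distance matrix (fine for small N)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)     # exclude self-distance
    d.sort(axis=1)                  # row i: r1, r2, ... ascending
    mu = d[:, 1] / d[:, 0]          # ratio of 2nd- to 1st-NN distance
    return len(X) / np.log(mu).sum()

rng = np.random.default_rng(0)
X = rng.random((500, 2))            # points uniform on a 2-D square
print(two_nn_id(X))                 # close to the true dimension, 2
```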
arXiv Detail & Related papers (2022-05-04T08:41:59Z)
- MISeval: a Metric Library for Medical Image Segmentation Evaluation [1.4680035572775534]
There is no universal metric library in Python for standardized and reproducible evaluation.
We propose our open-source publicly available Python package MISeval: a metric library for Medical Image Evaluation.
arXiv Detail & Related papers (2022-01-23T23:06:47Z)
- Mlr3spatiotempcv: Spatiotemporal resampling methods for machine learning in R [63.26453219947887]
This package integrates the R package mlr3spatiotempcv directly into the mlr3 machine-learning framework.
One advantage is the use of consistent recommendations within an overarching machine-learning toolkit.
arXiv Detail & Related papers (2021-10-25T06:48:29Z)
- Scikit-dimension: a Python package for intrinsic dimension estimation [58.8599521537]
This technical note introduces scikit-dimension, an open-source Python package for intrinsic dimension estimation.
The scikit-dimension package provides a uniform implementation of most of the known ID estimators based on the scikit-learn application programming interface.
We briefly describe the package and demonstrate its use in a large-scale (more than 500 datasets) benchmarking of methods for ID estimation in real-life and synthetic data.
arXiv Detail & Related papers (2021-09-06T16:46:38Z)
- QuaPy: A Python-Based Framework for Quantification [76.22817970624875]
QuaPy is an open-source framework for performing quantification (a.k.a. supervised prevalence estimation).
It is written in Python and can be installed via pip.
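Quantification means estimating the prevalence of each class in a sample rather than labelling individual items. A hedged sketch of the two simplest baseline methods from the quantification literature (illustrative code, not QuaPy's API):

```python
# Sketch of two baseline quantification methods (NOT QuaPy's API):
# classify-and-count (CC) and its TPR/FPR-adjusted variant (ACC).
def classify_and_count(preds: list[int]) -> float:
    """Prevalence estimate = fraction of items classified positive."""
    return sum(preds) / len(preds)

def adjusted_count(preds: list[int], tpr: float, fpr: float) -> float:
    """Correct CC's bias using the classifier's known TPR and FPR."""
    cc = classify_and_count(preds)
    return (cc - fpr) / (tpr - fpr)

preds = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]   # classifier flags 30% as positive
print(classify_and_count(preds))                       # 0.3
print(round(adjusted_count(preds, tpr=0.9, fpr=0.1), 6))  # 0.25
```

The adjusted count is the useful one in practice: a biased classifier makes raw CC systematically over- or under-estimate prevalence.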
arXiv Detail & Related papers (2021-06-18T13:57:11Z)
- MOGPTK: The Multi-Output Gaussian Process Toolkit [71.08576457371433]
We present MOGPTK, a Python package for multi-channel data modelling using Gaussian processes (GP).
The aim of this toolkit is to make multi-output GP (MOGP) models accessible to researchers, data scientists, and practitioners alike.
arXiv Detail & Related papers (2020-02-09T23:34:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.