MONAI Label: A framework for AI-assisted Interactive Labeling of 3D
Medical Images
- URL: http://arxiv.org/abs/2203.12362v2
- Date: Fri, 28 Apr 2023 22:42:45 GMT
- Authors: Andres Diaz-Pinto, Sachidanand Alle, Vishwesh Nath, Yucheng Tang,
Alvin Ihsani, Muhammad Asad, Fernando Pérez-García, Pritesh Mehta, Wenqi
Li, Mona Flores, Holger R. Roth, Tom Vercauteren, Daguang Xu, Prerna Dogra,
Sebastien Ourselin, Andrew Feng and M. Jorge Cardoso
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The lack of annotated datasets is a major bottleneck for training new
task-specific supervised machine learning models, considering that manual
annotation is extremely expensive and time-consuming. To address this problem,
we present MONAI Label, a free and open-source framework that facilitates the
development of applications based on artificial intelligence (AI) models that
aim at reducing the time required to annotate radiology datasets. Through MONAI
Label, researchers can develop AI annotation applications focusing on their
domain of expertise. It allows researchers to readily deploy their apps as
services, which can be made available to clinicians via their preferred user
interface. Currently, MONAI Label readily supports locally installed (3D
Slicer) and web-based (OHIF) frontends and offers two active learning
strategies to facilitate and speed up the training of segmentation algorithms.
MONAI Label allows researchers to make incremental improvements to their
AI-based annotation application by making them available to other researchers
and clinicians alike. Additionally, MONAI Label provides sample AI-based
interactive and non-interactive labeling applications that can be used off
the shelf, plug-and-play, on any given dataset. Significantly reduced
annotation times using the interactive model were observed on two public
datasets.
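The deploy-as-a-service workflow the abstract describes can be sketched with the MONAI Label command-line interface. This is a minimal usage sketch, not taken from the paper: it assumes the `monailabel` package published on PyPI, its bundled sample radiology app, and a placeholder dataset path (`datasets/spleen/imagesTr`); a 3D Slicer or OHIF frontend would then connect to the running server.

```shell
# Install the MONAI Label server (assumes a working Python environment).
pip install monailabel

# Download the sample radiology app bundled with MONAI Label.
monailabel apps --download --name radiology --output apps

# Start the annotation server on a local image directory (placeholder
# path); clinicians connect via the 3D Slicer or OHIF frontend.
monailabel start_server --app apps/radiology \
    --studies datasets/spleen/imagesTr \
    --conf models deepedit
```

The `--conf models deepedit` flag selects one of the sample interactive models; other bundled models can be substituted depending on the annotation task.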
Related papers
- LAMM: Language-Assisted Multi-Modal Instruction-Tuning Dataset,
Framework, and Benchmark [81.42376626294812]
We present Language-Assisted Multi-Modal instruction tuning dataset, framework, and benchmark.
Our aim is to establish LAMM as a growing ecosystem for training and evaluating MLLMs.
We present a comprehensive dataset and benchmark, which cover a wide range of vision tasks for 2D and 3D vision.
arXiv Detail & Related papers (2023-06-11T14:01:17Z)
- Few Shot Rationale Generation using Self-Training with Dual Teachers [4.91890875296663]
Self-rationalizing models that also generate a free-text explanation for their predicted labels are an important tool to build trustworthy AI applications.
We introduce a novel dual-teacher learning framework, which learns two specialized teacher models for task prediction and rationalization.
We formulate a new loss function, Masked Label Regularization (MLR) which promotes explanations to be strongly conditioned on predicted labels.
arXiv Detail & Related papers (2023-06-05T23:57:52Z)
- Exploring a Gradient-based Explainable AI Technique for Time-Series Data: A Case Study of Assessing Stroke Rehabilitation Exercises [5.381004207943597]
We describe a threshold-based method that utilizes a weakly supervised model and a gradient-based explainable AI technique.
Our results demonstrated the potential of a gradient-based explainable AI technique for time-series data.
arXiv Detail & Related papers (2023-05-08T08:30:05Z)
- MEGAnno: Exploratory Labeling for NLP in Computational Notebooks [9.462926987075122]
We present MEGAnno, a novel annotation framework designed for NLP practitioners and researchers.
With MEGAnno, users can explore data through sophisticated search and interactive suggestion functions.
We demonstrate MEGAnno's flexible, exploratory, efficient, and seamless labeling experience through a sentiment analysis use case.
arXiv Detail & Related papers (2023-01-08T19:16:22Z)
- TagLab: A human-centric AI system for interactive semantic segmentation [63.84619323110687]
TagLab is an open-source AI-assisted software for annotating large orthoimages.
It speeds up image annotation from scratch through assisted tools, creates custom fully automatic semantic segmentation models, and allows the quick edits of automatic predictions.
We report our results in two different scenarios: marine ecology and architectural heritage.
arXiv Detail & Related papers (2021-12-23T16:50:06Z)
- Unsupervised Domain Adaptive Learning via Synthetic Data for Person Re-identification [101.1886788396803]
Person re-identification (re-ID) has gained more and more attention due to its widespread applications in video surveillance.
Unfortunately, the mainstream deep learning methods still need a large quantity of labeled data to train models.
In this paper, we develop a data collector to automatically generate synthetic re-ID samples in a computer game, and construct a data labeler to simultaneously annotate them.
arXiv Detail & Related papers (2021-09-12T15:51:41Z)
- TagRuler: Interactive Tool for Span-Level Data Programming by Demonstration [1.4050836886292872]
Data programming has previously been accessible only to users who knew how to program.
We build a novel tool, TagRuler, that makes it easy for annotators to build span-level labeling functions without programming.
arXiv Detail & Related papers (2021-06-24T04:49:42Z)
- Knowledge-Guided Multi-Label Few-Shot Learning for General Image Recognition [75.44233392355711]
The KGGR framework exploits prior knowledge of statistical label correlations with deep neural networks.
It first builds a structured knowledge graph to correlate different labels based on statistical label co-occurrence.
Then, it introduces the label semantics to guide learning semantic-specific features.
It exploits a graph propagation network to explore graph node interactions.
arXiv Detail & Related papers (2020-09-20T15:05:29Z)
- Unsupervised Multi-Modal Representation Learning for Affective Computing with Multi-Corpus Wearable Data [16.457778420360537]
We propose an unsupervised framework to reduce the reliance on human supervision.
The proposed framework utilizes two stacked convolutional autoencoders to learn latent representations from wearable electrocardiogram (ECG) and electrodermal activity (EDA) signals.
Our method outperforms current state-of-the-art results that have performed arousal detection on the same datasets.
arXiv Detail & Related papers (2020-08-24T22:01:55Z)
- Dual-Teacher: Integrating Intra-domain and Inter-domain Teachers for Annotation-efficient Cardiac Segmentation [65.81546955181781]
We propose a novel semi-supervised domain adaptation approach, namely Dual-Teacher.
The student model learns the knowledge of unlabeled target data and labeled source data by two teacher models.
We demonstrate that our approach is able to concurrently utilize unlabeled data and cross-modality data with superior performance.
arXiv Detail & Related papers (2020-07-13T10:00:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.