On-the-Fly Point Annotation for Fast Medical Video Labeling
- URL: http://arxiv.org/abs/2404.14344v1
- Date: Mon, 22 Apr 2024 16:59:43 GMT
- Authors: Adrien Meyer, Jean-Paul Mazellier, Jeremy Dana, Nicolas Padoy
- Abstract summary: In medical research, deep learning models rely on high-quality annotated data.
The need to adjust two corners makes the process inherently frame-by-frame.
We propose an on-the-fly method for live video annotation to enhance the annotation efficiency.
- Score: 1.890063512530524
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Purpose: In medical research, deep learning models rely on high-quality annotated data, a process that is often laborious and time-consuming. This is particularly true for detection tasks, where bounding box annotations are required. The need to adjust two corners makes the process inherently frame-by-frame. Given the scarcity of experts' time, efficient annotation methods suitable for clinicians are needed. Methods: We propose an on-the-fly method for live video annotation to enhance annotation efficiency. In this approach, a continuous single-point annotation is maintained by keeping the cursor on the object in a live video, mitigating the need for the tedious pausing and repetitive navigation inherent in traditional annotation methods. This novel annotation paradigm inherits the point annotation's ability to generate pseudo-labels using a point-to-box teacher model. We empirically evaluate this approach by developing a dataset and comparing on-the-fly annotation time against the traditional annotation method. Results: Using our method, annotation was 3.2x faster than with the traditional annotation technique. We achieved a mean improvement of 6.51 ± 0.98 AP@50 over the conventional method at equivalent annotation budgets on the developed dataset. Conclusion: Without bells and whistles, our approach offers a significant speed-up in annotation tasks. It can be easily implemented on any annotation platform to accelerate the integration of deep learning in video-based medical research.
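The core mechanism of the abstract can be sketched in a few lines: while the video plays, the cursor position is sampled once per frame, yielding one point annotation per frame without pausing. The helper below is a minimal illustrative sketch, not the authors' implementation; all names and data structures are assumptions.

```python
def annotate_on_the_fly(frame_times, cursor_track):
    """Sample a cursor trajectory at each frame timestamp.

    frame_times:  sorted list of frame timestamps (seconds)
    cursor_track: list of (time, x, y) cursor samples from the live session
    Returns one (frame_time, x, y) point annotation per frame, using the
    most recent cursor sample at or before that frame's timestamp.
    """
    annotations = []
    i = 0
    for t in frame_times:
        # advance to the last cursor sample not later than this frame
        while i + 1 < len(cursor_track) and cursor_track[i + 1][0] <= t:
            i += 1
        _, x, y = cursor_track[i]
        annotations.append((t, x, y))
    return annotations

# 4 frames at 25 fps; the annotator's cursor drifts with the object
frames = [0.00, 0.04, 0.08, 0.12]
track = [(0.00, 100, 50), (0.05, 104, 52), (0.11, 109, 55)]
points = annotate_on_the_fly(frames, track)
```

In a real tool the per-frame points would then be fed to a point-to-box teacher model to produce bounding-box pseudo-labels, as the abstract describes.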
Related papers
- Learning Tracking Representations from Single Point Annotations [49.47550029470299]
We propose to learn tracking representations from single point annotations in a weakly supervised manner.
Specifically, we propose a soft contrastive learning framework that incorporates target objectness prior to end-to-end contrastive learning.
arXiv Detail & Related papers (2024-04-15T06:50:58Z) - How to Efficiently Annotate Images for Best-Performing Deep Learning Based Segmentation Models: An Empirical Study with Weak and Noisy Annotations and Segment Anything Model [18.293057751504122]
Deep neural networks (DNNs) have been deployed for many image segmentation tasks and achieved outstanding performance.
Preparing a dataset for training segmentation models is laborious and costly, since pixel-level annotations are typically required for each object of interest.
To alleviate this issue, one can provide only weak labels such as bounding boxes or scribbles, or less accurate (noisy) annotations of the objects.
arXiv Detail & Related papers (2023-12-17T04:26:42Z) - Accelerated Video Annotation driven by Deep Detector and Tracker [12.640283469603355]
Annotating object ground truth in videos is vital for several downstream tasks in robot perception and machine learning.
Accurately annotating instances of moving objects on every frame of a video is crucially important.
We propose a new annotation method which leverages a combination of a learning-based detector and a learning-based tracker.
arXiv Detail & Related papers (2023-02-19T15:16:05Z) - Annotation Error Detection: Analyzing the Past and Present for a More Coherent Future [63.99570204416711]
We reimplement 18 methods for detecting potential annotation errors and evaluate them on 9 English datasets.
We define a uniform evaluation setup including a new formalization of the annotation error detection task.
We release our datasets and implementations in an easy-to-use and open source software package.
arXiv Detail & Related papers (2022-06-05T22:31:45Z) - Point-Teaching: Weakly Semi-Supervised Object Detection with Point Annotations [81.02347863372364]
We present Point-Teaching, a weakly semi-supervised object detection framework.
Specifically, we propose a Hungarian-based point matching method to generate pseudo labels for point annotated images.
We propose a simple-yet-effective data augmentation, termed point-guided copy-paste, to reduce the impact of the unmatched points.
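The point-matching step above can be illustrated with a small sketch: each annotated point is assigned to one predicted box by minimizing a global cost. For clarity this uses brute-force search over assignments (equivalent to the Hungarian result on tiny inputs); a real implementation would use an optimal solver such as scipy.optimize.linear_sum_assignment. The cost function is an assumption, not the one from the paper.

```python
from itertools import permutations

def center_cost(point, box):
    """Cost of assigning a point to a box: distance to the box center,
    heavily penalized when the point lies outside the box."""
    x, y = point
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
    dist = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5
    inside = x1 <= x <= x2 and y1 <= y <= y2
    return dist if inside else dist + 1e6

def match_points_to_boxes(points, boxes):
    """Return the assignment (tuple of box indices, one per point)
    that minimizes the total matching cost."""
    return min(
        permutations(range(len(boxes)), len(points)),
        key=lambda p: sum(center_cost(pt, boxes[j])
                          for pt, j in zip(points, p)),
    )

points = [(10, 10), (50, 50)]
boxes = [(40, 40, 60, 60), (0, 0, 20, 20)]
assignment = match_points_to_boxes(points, boxes)
```

Points matched this way can label the corresponding predicted boxes as pseudo ground truth, which is the role the matching plays in weakly semi-supervised detection.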
arXiv Detail & Related papers (2022-06-01T07:04:38Z) - A Positive/Unlabeled Approach for the Segmentation of Medical Sequences using Point-Wise Supervision [3.883460584034766]
We propose a new method to efficiently segment medical imaging volumes or videos using point-wise annotations only.
Our approach trains a deep learning model with an appropriate Positive/Unlabeled objective function on point-wise annotations.
We show experimentally that our approach outperforms state-of-the-art methods tailored to the same problem.
arXiv Detail & Related papers (2021-07-18T09:13:33Z) - Annotation Curricula to Implicitly Train Non-Expert Annotators [56.67768938052715]
Voluntary studies often require annotators to familiarize themselves with the task, its annotation scheme, and the data domain.
This can be overwhelming in the beginning, mentally taxing, and induce errors into the resulting annotations.
We propose annotation curricula, a novel approach to implicitly train annotators.
arXiv Detail & Related papers (2021-06-04T09:48:28Z) - Efficient video annotation with visual interpolation and frame selection guidance [0.0]
We introduce a unified framework for generic video annotation with bounding boxes.
We show that our approach reduces actual measured annotation time by 50% compared to commonly used linear methods.
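The simplest form of the interpolation idea is easy to sketch: annotate boxes only on keyframes and interpolate box coordinates on the frames in between. The helper below is a hypothetical linear version for illustration; the paper's interpolation is visual and more sophisticated than pure linear blending.

```python
def interpolate_boxes(kf_a, box_a, kf_b, box_b):
    """Linearly interpolate (x1, y1, x2, y2) boxes for the frames
    strictly between keyframes kf_a and kf_b.

    Returns a dict mapping frame index -> interpolated box.
    """
    result = {}
    for f in range(kf_a + 1, kf_b):
        w = (f - kf_a) / (kf_b - kf_a)  # blend weight toward box_b
        result[f] = tuple((1 - w) * a + w * b
                          for a, b in zip(box_a, box_b))
    return result

# keyframes at frames 0 and 4; frames 1-3 are filled in automatically
mid = interpolate_boxes(0, (0, 0, 10, 10), 4, (40, 40, 50, 50))
```

Even this naive scheme shows why annotating only keyframes can roughly halve annotation time on smoothly moving objects, which is the kind of saving the paper reports.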
arXiv Detail & Related papers (2020-12-23T09:31:40Z) - Active Learning for Coreference Resolution using Discrete Annotation [76.36423696634584]
We improve upon pairwise annotation for active learning in coreference resolution.
We ask annotators to identify mention antecedents if a presented mention pair is deemed not coreferent.
In experiments with existing benchmark coreference datasets, we show that the signal from this additional question leads to significant performance gains per human-annotation hour.
arXiv Detail & Related papers (2020-04-28T17:17:11Z) - Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.