MIcro-Surgical Anastomose Workflow recognition challenge report
- URL: http://arxiv.org/abs/2103.13111v1
- Date: Wed, 24 Mar 2021 11:34:09 GMT
- Title: MIcro-Surgical Anastomose Workflow recognition challenge report
- Authors: Arnaud Huaulmé, Duygu Sarikaya, Kévin Le Mut, Fabien Despinoy,
Yonghao Long, Qi Dou, Chin-Boon Chng, Wenjun Lin, Satoshi Kondo, Laura
Bravo-Sánchez, Pablo Arbeláez, Wolfgang Reiter, Mamoru Mitsuishi, Kanako
Harada, Pierre Jannin
- Abstract summary: The "MIcro-Surgical Anastomose Workflow recognition on training sessions" (MISAW) challenge provided a data set of 27 sequences of micro-surgical anastomosis on artificial blood vessels.
This data set was composed of videos, kinematics, and workflow annotations described at three different granularity levels: phase, step, and activity.
The best models achieved more than 95% AD-Accuracy for phase recognition, 80% for step recognition, 60% for activity recognition, and 75% for all granularity levels.
- Score: 12.252332806968756
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The "MIcro-Surgical Anastomose Workflow recognition on training sessions"
(MISAW) challenge provided a data set of 27 sequences of micro-surgical
anastomosis on artificial blood vessels. This data set was composed of videos,
kinematics, and workflow annotations described at three different granularity
levels: phase, step, and activity. The participants were given the option to
use kinematic data and videos to develop workflow recognition models. Four
tasks were proposed to the participants: three of them were related to the
recognition of surgical workflow at three different granularity levels, while
the last one addressed the recognition of all granularity levels in the same
model. One ranking was made for each task. We used the average
application-dependent balanced accuracy (AD-Accuracy) as the evaluation metric.
This metric takes unbalanced classes into account and is more clinically relevant
than a frame-by-frame score. Six teams, including a non-competing team,
participated in at least one task. All teams employed deep learning models,
such as CNNs or RNNs. The best models achieved more than 95% AD-Accuracy for
phase recognition, 80% for step recognition, 60% for activity recognition, and
75% for all granularity levels. For high levels of granularity (i.e., phases
and steps), the best models had a recognition rate that may be sufficient for
applications such as prediction of remaining surgical time or resource
management. However, for activities, the recognition rate was still too low for
clinically usable applications. The MISAW data set is publicly
available to encourage further research in surgical workflow recognition. It
can be found at www.synapse.org/MISAW.
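As a rough illustration of the evaluation described above, the following is a minimal sketch of the class-imbalance-aware core of an AD-Accuracy-style score, assuming it reduces to a frame-wise balanced accuracy (mean of per-class recalls). The application-dependent weighting used by the organizers is not reproduced here, and the function name and example labels are hypothetical.

import numpy as np

def balanced_accuracy(y_true, y_pred, num_classes):
    # Mean of per-class recalls over the classes present in the ground truth,
    # so rare phases/steps count as much as frequent ones.
    recalls = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():
            recalls.append(float((y_pred[mask] == c).mean()))
    return float(np.mean(recalls))

# Hypothetical per-frame phase labels for one short sequence (3 phases).
y_true = np.array([0, 0, 0, 1, 1, 2, 2, 2, 2, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2, 2, 2, 0, 2])
print(balanced_accuracy(y_true, y_pred, num_classes=3))  # ~0.82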
Related papers
- The effect of data augmentation and 3D-CNN depth on Alzheimer's Disease
detection [51.697248252191265]
This work summarizes and strictly observes best practices regarding data handling, experimental design, and model evaluation.
We focus on Alzheimer's Disease (AD) detection, which serves as a paradigmatic example of a challenging problem in healthcare.
Within this framework, we train 15 predictive models, considering three different data augmentation strategies and five distinct 3D CNN architectures.
arXiv Detail & Related papers (2023-09-13T10:40:41Z) - NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision
Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks, from OCR to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z) - Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z) - CholecTriplet2021: A benchmark challenge for surgical action triplet
recognition [66.51610049869393]
This paper presents CholecTriplet 2021: an endoscopic vision challenge organized at MICCAI 2021 for the recognition of surgical action triplets in laparoscopic videos.
We present the challenge setup and assessment of the state-of-the-art deep learning methods proposed by the participants during the challenge.
A total of 4 baseline methods and 19 new deep learning algorithms are presented to recognize surgical action triplets directly from surgical videos, achieving mean average precision (mAP) ranging from 4.2% to 38.1%.
arXiv Detail & Related papers (2022-04-10T18:51:55Z) - PEg TRAnsfer Workflow recognition challenge report: Does multi-modal
data improve recognition? [14.144188912860892]
"PEg TRAnsfert recognition" (PETRAW) challenge was to develop surgical workflow recognition methods based on one or several modalities.
PETRAW challenge provided a data set of 150 peg transfer sequences performed on a virtual simulator.
The improvement between video/kinematic-based methods and the uni-modality ones was significant for all of the teams.
arXiv Detail & Related papers (2022-02-11T18:33:11Z) - Comparative Validation of Machine Learning Algorithms for Surgical
Workflow and Skill Analysis with the HeiChole Benchmark [36.37186411201134]
Surgical workflow and skill analysis are key technologies for the next generation of cognitive surgical assistance systems.
We investigated the generalizability of phase recognition algorithms in a multi-center setting.
arXiv Detail & Related papers (2021-09-30T09:34:13Z) - Active learning for medical code assignment [55.99831806138029]
We demonstrate the effectiveness of Active Learning (AL) in multi-label text classification in the clinical domain.
We apply a set of well-known AL methods to help automatically assign ICD-9 codes on the MIMIC-III dataset.
Our results show that the selection of informative instances provides satisfactory classification with a significantly reduced training set.
arXiv Detail & Related papers (2021-04-12T18:11:17Z) - Robust Medical Instrument Segmentation Challenge 2019 [56.148440125599905]
Intraoperative tracking of laparoscopic instruments is often a prerequisite for computer and robotic-assisted interventions.
Our challenge was based on a surgical data set comprising 10,040 annotated images acquired from a total of 30 surgical procedures.
The results confirm the initial hypothesis, namely that algorithm performance degrades with an increasing domain gap.
arXiv Detail & Related papers (2020-03-23T14:35:08Z) - Multi-Task Recurrent Neural Network for Surgical Gesture Recognition and
Progress Prediction [17.63619129438996]
We propose a multi-task recurrent neural network for simultaneous recognition of surgical gestures and estimation of a novel formulation of surgical task progress.
We demonstrate that recognition performance improves in multi-task frameworks with progress estimation without any additional manual labelling and training.
arXiv Detail & Related papers (2020-03-10T14:28:02Z) - Temporal Segmentation of Surgical Sub-tasks through Deep Learning with
Multiple Data Sources [14.677001578868872]
We propose a unified surgical state estimation model based on the actions performed or events that occur as the task progresses.
We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) and a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging.
Our model achieves a superior frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models.
arXiv Detail & Related papers (2020-02-07T17:49:08Z)