Surgical Visual Domain Adaptation: Results from the MICCAI 2020
SurgVisDom Challenge
- URL: http://arxiv.org/abs/2102.13644v1
- Date: Fri, 26 Feb 2021 18:45:28 GMT
- Title: Surgical Visual Domain Adaptation: Results from the MICCAI 2020
SurgVisDom Challenge
- Authors: Aneeq Zia, Kiran Bhattacharyya, Xi Liu, Ziheng Wang, Satoshi Kondo,
Emanuele Colleoni, Beatrice van Amsterdam, Razeen Hussain, Raabid Hussain,
Lena Maier-Hein, Danail Stoyanov, Stefanie Speidel, and Anthony Jarc
- Abstract summary: This work seeks to explore the potential for visual domain adaptation in surgery to overcome data privacy concerns.
In particular, we propose to use video from virtual reality (VR) simulations of surgical exercises to develop algorithms to recognize tasks in a clinical-like setting.
We present the performance of the different approaches to solve visual domain adaptation developed by challenge participants.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Surgical data science is revolutionizing minimally invasive surgery by
enabling context-aware applications. However, many challenges exist around
surgical data (and health data, more generally) needed to develop context-aware
models. This work - presented as part of the Endoscopic Vision (EndoVis)
challenge at the Medical Image Computing and Computer Assisted Intervention
(MICCAI) 2020 conference - seeks to explore the potential for visual domain
adaptation in surgery to overcome data privacy concerns. In particular, we
propose to use video from virtual reality (VR) simulations of surgical
exercises in robotic-assisted surgery to develop algorithms to recognize tasks
in a clinical-like setting. We present the performance of the different
approaches to solve visual domain adaptation developed by challenge
participants. Our analysis shows that the presented models were unable to learn
meaningful motion-based features from VR data alone, but did significantly
better when a small amount of clinical-like data was also made available. Based
on these results, we discuss promising methods and further work to address the
problem of visual domain adaptation in surgical data science. We also release
the challenge dataset publicly at https://www.synapse.org/surgvisdom2020.
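The abstract's central finding (models trained only on VR data transfer poorly, but fine-tuning on a small amount of clinical-like data helps substantially) can be illustrated with a minimal toy sketch. All data below is synthetic and the linear model is a stand-in; this is not the participants' actual pipeline, only an illustration of the pretrain-then-fine-tune transfer setup under a simulated domain shift.

```python
# Toy illustration of the SurgVisDom finding: a classifier trained only on
# "VR" (source-domain) data degrades under domain shift, but fine-tuning on
# a handful of "clinical-like" (target-domain) samples recovers accuracy.
# Synthetic 2D Gaussian data; a hypothetical sketch, not the challenge code.
import numpy as np

rng = np.random.default_rng(0)

def make_domain(n, shift):
    """Two Gaussian classes per domain; `shift` models the domain gap."""
    x0 = rng.normal(loc=-1 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

def train_logreg(X, y, w=None, epochs=200, lr=0.1):
    """Plain logistic regression by batch gradient descent."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    if w is None:
        w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))      # sigmoid predictions
        w = w - lr * Xb.T @ (p - y) / len(y)   # gradient step
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0) == y))

# Large "VR" training set; shifted "clinical-like" target domain.
Xs, ys = make_domain(500, shift=0.0)
Xt_small, yt_small = make_domain(10, shift=3.0)   # few clinical-like samples
Xt_test, yt_test = make_domain(200, shift=3.0)

w_src = train_logreg(Xs, ys)                            # VR-only training
w_ft = train_logreg(Xt_small, yt_small, w=w_src.copy()) # + small fine-tune

print(f"source-only accuracy on target: {accuracy(w_src, Xt_test, yt_test):.2f}")
print(f"fine-tuned accuracy on target:  {accuracy(w_ft, Xt_test, yt_test):.2f}")
```

The source-only model places its decision boundary for the unshifted domain, so it misclassifies much of the shifted target data; even ten target-domain samples are enough for fine-tuning to move the boundary, mirroring the challenge result at toy scale.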
Related papers
- VISAGE: Video Synthesis using Action Graphs for Surgery [34.21344214645662]
We introduce the novel task of future video generation in laparoscopic surgery.
Our proposed method, VISAGE, leverages the power of action scene graphs to capture the sequential nature of laparoscopic procedures.
Results of our experiments demonstrate high-fidelity video generation for laparoscopy procedures.
arXiv Detail & Related papers (2024-10-23T10:28:17Z) - PitVis-2023 Challenge: Workflow Recognition in videos of Endoscopic Pituitary Surgery [46.2901962659261]
The Pituitary Vision (PitVis) 2023 Challenge tasks the community with step and instrument recognition in videos of endoscopic pituitary surgery.
This is a unique task when compared to other minimally invasive surgeries due to the smaller working space.
There were 18 submissions from 9 teams across 6 countries, using a variety of deep learning models.
arXiv Detail & Related papers (2024-09-02T11:38:06Z) - Realistic Data Generation for 6D Pose Estimation of Surgical Instruments [4.226502078427161]
6D pose estimation of surgical instruments is critical to enable the automatic execution of surgical maneuvers.
In household and industrial settings, synthetic data, generated with 3D computer graphics software, has been shown to be a viable alternative for minimizing annotation costs.
We propose an improved simulation environment for surgical robotics that enables the automatic generation of large and diverse datasets.
arXiv Detail & Related papers (2024-06-11T14:59:29Z) - Surgical tool classification and localization: results and methods from
the MICCAI 2022 SurgToolLoc challenge [69.91670788430162]
We present the results of the SurgToolLoc 2022 challenge.
The goal was to leverage tool presence data as weak labels for machine learning models trained to detect tools.
We conclude by discussing these results in the broader context of machine learning and surgical data science.
arXiv Detail & Related papers (2023-05-11T21:44:39Z) - Robotic Navigation Autonomy for Subretinal Injection via Intelligent
Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z) - Rethinking Surgical Instrument Segmentation: A Background Image Can Be
All You Need [18.830738606514736]
Data scarcity and imbalance have heavily affected the model accuracy and limited the design and deployment of deep learning-based surgical applications.
We propose a one-to-many data generation solution that gets rid of the complicated and expensive process of data collection and annotation from robotic surgery.
Our empirical analysis suggests that without the high cost of data collection and annotation, we can achieve decent surgical instrument segmentation performance.
arXiv Detail & Related papers (2022-06-23T16:22:56Z) - Multimodal Semantic Scene Graphs for Holistic Modeling of Surgical
Procedures [70.69948035469467]
We take advantage of the latest computer vision methodologies for generating 3D graphs from camera views.
We then introduce the Multimodal Semantic Scene Graph (MSSG), which aims at providing a unified symbolic and semantic representation of surgical procedures.
arXiv Detail & Related papers (2021-06-09T14:35:44Z) - The SARAS Endoscopic Surgeon Action Detection (ESAD) dataset: Challenges
and methods [15.833413083110903]
This paper presents ESAD, the first large-scale dataset designed to tackle the problem of surgeon action detection in endoscopic minimally invasive surgery.
The dataset provides bounding box annotation for 21 action classes on real endoscopic video frames captured during prostatectomy, and was used as the basis of a recent MIDL 2020 challenge.
arXiv Detail & Related papers (2021-04-07T15:11:51Z) - Relational Graph Learning on Visual and Kinematics Embeddings for
Accurate Gesture Recognition in Robotic Surgery [84.73764603474413]
We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
arXiv Detail & Related papers (2020-11-03T11:00:10Z) - Domain Shift in Computer Vision models for MRI data analysis: An
Overview [64.69150970967524]
Machine learning and computer vision methods are showing good performance in medical imagery analysis.
Yet only a few applications are now in clinical use.
Poor transferability of the models to data from different sources or acquisition domains is one of the reasons for that.
arXiv Detail & Related papers (2020-10-14T16:34:21Z)