Simulation of Brain Resection for Cavity Segmentation Using
Self-Supervised and Semi-Supervised Learning
- URL: http://arxiv.org/abs/2006.15693v1
- Date: Sun, 28 Jun 2020 20:03:39 GMT
- Title: Simulation of Brain Resection for Cavity Segmentation Using
Self-Supervised and Semi-Supervised Learning
- Authors: Fernando Pérez-García (1 and 2), Roman Rodionov (3 and 4), Ali
Alim-Marvasti (1, 3 and 4), Rachel Sparks (2), John S. Duncan (3 and 4), and
Sébastien Ourselin (2) ((1) Wellcome EPSRC Centre for Interventional and
Surgical Sciences (WEISS), University College London, (2) School of
Biomedical Engineering and Imaging Sciences (BMEIS), King's College London,
(3) Department of Clinical and Experimental Epilepsy, UCL Queen Square
Institute of Neurology, (4) National Hospital for Neurology and Neurosurgery,
Queen Square, London, UK)
- Abstract summary: Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique.
CNNs require large amounts of annotated data for training.
Self-supervised learning can be used to generate training instances from unlabeled data.
- Score: 36.121815158077446
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Resective surgery may be curative for drug-resistant focal epilepsy, but only
40% to 70% of patients achieve seizure freedom after surgery. Retrospective
quantitative analysis could elucidate patterns in resected structures and
patient outcomes to improve resective surgery. However, the resection cavity
must first be segmented on the postoperative MR image. Convolutional neural
networks (CNNs) are the state-of-the-art image segmentation technique, but
require large amounts of annotated data for training. Annotation of medical
images is a time-consuming process requiring highly-trained raters, and often
suffering from high inter-rater variability. Self-supervised learning can be
used to generate training instances from unlabeled data. We developed an
algorithm to simulate resections on preoperative MR images. We curated a new
dataset, EPISURG, comprising 431 postoperative and 269 preoperative MR images
from 431 patients who underwent resective surgery. In addition to EPISURG, we
used three public datasets comprising 1813 preoperative MR images for training.
We trained a 3D CNN on artificially resected images created on the fly during
training, using images from 1) EPISURG, 2) public datasets and 3) both. To
evaluate trained models, we calculate Dice score (DSC) between model
segmentations and 200 manual annotations performed by three human raters. The
model trained on data with manual annotations obtained a median (interquartile
range) DSC of 65.3 (30.6). The DSC of our best-performing model, trained with
no manual annotations, is 81.7 (14.2). For comparison, inter-rater agreement
between human annotators was 84.0 (9.9). We demonstrate a training method for
CNNs using simulated resection cavities that can accurately segment real
resection cavities, without manual annotations.
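The resection simulation is the core of the self-supervision and its full algorithm is described in the paper; as a rough, purely illustrative sketch of the idea, the toy function below carves a pseudo-random, roughened ellipsoidal cavity into a preoperative volume and fills it with a dark, CSF-like intensity, returning the modified image together with the cavity mask that serves as the training label. The function name, parameters, and the ellipsoid cavity model are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def simulate_resection(image, brain_mask, rng=None,
                       radius_range=(10, 30), csf_intensity=0.1, noise_std=0.02):
    """Carve a synthetic resection cavity into a preoperative volume.

    image: 3D float array (intensities assumed normalized to roughly [0, 1]).
    brain_mask: boolean array of the same shape marking brain voxels.
    Returns the artificially "resected" image and the binary cavity label.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Pick a random cavity centre inside the brain.
    brain_voxels = np.argwhere(brain_mask)
    centre = brain_voxels[rng.integers(len(brain_voxels))]

    # Irregular ellipsoid: random radii per axis, boundary roughened with noise.
    radii = rng.uniform(*radius_range, size=3)
    grid = np.indices(image.shape).astype(np.float32)
    distance = np.sqrt(sum(((grid[d] - centre[d]) / radii[d]) ** 2 for d in range(3)))
    distance += rng.normal(0, 0.05, size=image.shape)
    cavity = (distance <= 1) & brain_mask

    # Replace cavity voxels with a dark, CSF-like intensity plus noise.
    resected = image.copy()
    resected[cavity] = csf_intensity + rng.normal(0, noise_std, size=int(cavity.sum()))
    return resected, cavity.astype(np.uint8)
```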
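Because the artificial cavities are created on the fly, every epoch can present a different resection for the same subject. A minimal way to wire this into training is a dataset wrapper that re-runs the simulation each time a volume is sampled; the sketch below assumes PyTorch and a simulation callable such as the one above, and all class and argument names are invented for illustration rather than taken from the paper.

```python
import torch
from torch.utils.data import Dataset

class SimulatedResectionDataset(Dataset):
    """Pairs each preoperative volume with a freshly simulated resection."""

    def __init__(self, volumes, brain_masks, simulate_fn):
        self.volumes = volumes          # list of 3D numpy arrays (preoperative MR)
        self.brain_masks = brain_masks  # matching boolean brain masks
        self.simulate_fn = simulate_fn  # e.g. the simulate_resection sketch above

    def __len__(self):
        return len(self.volumes)

    def __getitem__(self, index):
        # A new cavity is generated on every access, so repeated epochs
        # see different artificial resections of the same brain.
        image, label = self.simulate_fn(self.volumes[index], self.brain_masks[index])
        image = torch.from_numpy(image[None]).float()  # add a channel axis
        label = torch.from_numpy(label[None]).float()
        return image, label
```

A standard DataLoader over such a dataset would then feed a 3D segmentation CNN a different synthetic cavity per subject at every iteration, which is the self-supervised training signal the abstract describes; at test time the trained network is applied to real postoperative images.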
Related papers
- Topology and Intersection-Union Constrained Loss Function for Multi-Region Anatomical Segmentation in Ocular Images [5.628938375586146]
Ocular Myasthenia Gravis (OMG) is a rare and challenging disease to detect in its early stages.
No publicly available datasets or tools currently exist for this purpose.
We propose a new topology and intersection-union constrained loss function (TIU loss) that improves performance using small training datasets.
arXiv Detail & Related papers (2024-11-01T13:17:18Z) - Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of chromosome arms 1p/19q is associated with clinical outcomes in low-grade gliomas.
This study aims to use a specially designed MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Intraoperative Registration by Cross-Modal Inverse Neural Rendering [61.687068931599846]
We present a novel approach for 3D/2D intraoperative registration during neurosurgery via cross-modal inverse neural rendering.
Our approach separates the implicit neural representation into two components, handling anatomical structure preoperatively and appearance intraoperatively.
We tested our method on retrospective patient data from clinical cases, showing that it outperforms the state of the art while meeting current clinical standards for registration.
arXiv Detail & Related papers (2024-09-18T13:40:59Z) - SurgicaL-CD: Generating Surgical Images via Unpaired Image Translation with Latent Consistency Diffusion Models [1.6189876649941652]
We introduce SurgicaL-CD, a consistency-distilled diffusion method to generate realistic surgical images.
Our results demonstrate that our method outperforms GANs and diffusion-based approaches.
arXiv Detail & Related papers (2024-08-19T09:19:25Z) - TotalSegmentator MRI: Sequence-Independent Segmentation of 59 Anatomical Structures in MR images [62.53931644063323]
In this study we extended the capabilities of TotalSegmentator to MR images.
We trained an nnU-Net segmentation algorithm on this dataset and calculated Dice similarity coefficients to evaluate the model's performance.
The model significantly outperformed two other publicly available segmentation models (Dice score 0.824 versus 0.762; p<0.001 and 0.762 versus 0.542; p<0.001).
arXiv Detail & Related papers (2024-05-29T20:15:54Z) - X-Ray to CT Rigid Registration Using Scene Coordinate Regression [1.1687067206676627]
This paper proposes a fully automatic registration method that is robust to extreme viewpoints.
It is based on a fully convolutional neural network (CNN) that regresses the overlapping coordinates for a given X-ray image.
The proposed method achieved an average mean target registration error (mTRE) of 3.79 mm in the 50th percentile of the simulated test dataset and a projected mTRE of 9.65 mm in the 50th percentile of real fluoroscopic images for pelvis registration (a generic mTRE sketch follows this list).
arXiv Detail & Related papers (2023-11-25T17:48:46Z) - A Novel Mask R-CNN Model to Segment Heterogeneous Brain Tumors through
Image Subtraction [0.0]
We propose applying image subtraction, a technique used by radiologists, to machine learning models to achieve better segmentation.
Using Mask R-CNN with a ResNet backbone pre-trained on the RSNA pneumonia detection challenge dataset, we train a model on the BraTS 2020 Brain Tumor dataset.
We assess how well image subtraction works by comparing against models trained without it, using the Dice coefficient (F1 score), recall, and precision on a held-out test set (a small sketch of these metrics follows this list).
arXiv Detail & Related papers (2022-04-04T01:45:11Z) - A self-supervised learning strategy for postoperative brain cavity
segmentation simulating resections [46.414990784180546]
Convolutional neural networks (CNNs) are the state-of-the-art image segmentation technique.
CNNs require large annotated datasets for training.
Self-supervised learning strategies can leverage unlabeled data for training.
arXiv Detail & Related papers (2021-05-24T12:27:06Z) - Towards Unsupervised Learning for Instrument Segmentation in Robotic
Surgery with Cycle-Consistent Adversarial Networks [54.00217496410142]
We propose an unpaired image-to-image translation approach whose goal is to learn the mapping between an input endoscopic image and a corresponding annotation.
Our approach allows training image segmentation models without the need to acquire expensive annotations.
We test our proposed method on the EndoVis 2017 challenge dataset and show that it is competitive with supervised segmentation methods.
arXiv Detail & Related papers (2020-07-09T01:39:39Z)
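For reference, the mean target registration error (mTRE) reported by the X-ray-to-CT registration paper above is, in its generic form, the average distance between anatomical target points mapped by the estimated and by the ground-truth rigid transforms. The helper below is a plain definition sketch; the names and the 4x4 homogeneous-matrix convention are assumptions, not taken from that paper.

```python
import numpy as np

def mean_target_registration_error(targets, estimated_pose, true_pose):
    """mTRE between an estimated and a ground-truth rigid transform.

    targets: (N, 3) array of 3D landmark coordinates.
    estimated_pose, true_pose: 4x4 homogeneous transformation matrices.
    """
    homogeneous = np.hstack([targets, np.ones((len(targets), 1))])  # (N, 4)
    estimated = (estimated_pose @ homogeneous.T).T[:, :3]
    reference = (true_pose @ homogeneous.T).T[:, :3]
    return np.linalg.norm(estimated - reference, axis=1).mean()
```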
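Likewise, the Dice coefficient (F1 score), precision, and recall used by the image-subtraction paper, and the DSC figures quoted in the abstract above, reduce to simple voxel counts over binary masks. A minimal numpy sketch, not taken from any of the papers:

```python
import numpy as np

def overlap_metrics(prediction, reference, eps=1e-6):
    """Dice (F1), precision, and recall between two binary masks."""
    prediction = np.asarray(prediction, dtype=bool)
    reference = np.asarray(reference, dtype=bool)
    tp = np.logical_and(prediction, reference).sum()
    fp = np.logical_and(prediction, ~reference).sum()
    fn = np.logical_and(~prediction, reference).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }
```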