Reducing Textural Bias Improves Robustness of Deep Segmentation CNNs
- URL: http://arxiv.org/abs/2011.15093v1
- Date: Mon, 30 Nov 2020 18:29:53 GMT
- Title: Reducing Textural Bias Improves Robustness of Deep Segmentation CNNs
- Authors: Seoin Chai, Daniel Rueckert, Ahmed E. Fetit
- Abstract summary: Recent findings on natural images suggest that deep neural models can show a textural bias when carrying out image classification tasks.
This study investigates how addressing the textural bias phenomenon can improve the robustness and transferability of deep segmentation models.
- Score: 8.736194193307451
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite current advances in deep learning, domain shift remains a common
problem in medical imaging settings. Recent findings on natural images suggest
that deep neural models can show a textural bias when carrying out image
classification tasks, which goes against the common understanding of
convolutional neural networks (CNNs) recognising objects through increasingly
complex representations of shape. This study draws inspiration from recent
findings on natural images and aims to investigate ways in which addressing the
textural bias phenomenon could be used to improve the robustness and
transferability of deep segmentation models when applied to three-dimensional
(3D) medical data. To achieve this, publicly available MRI scans from the
Developing Human Connectome Project are used to investigate ways in which
simulating textural noise can help train robust models in a complex
segmentation task. Our findings illustrate how applying specific types of
textural filters prior to training the models can increase their ability to
segment scans corrupted by previously unseen noise.
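The summary above does not spell out which textural filters or noise models the authors used, so the following is only a minimal sketch of the general idea: a smoothing filter that suppresses fine texture is applied to volumes before training, and synthetic textural corruption is added at evaluation time to probe robustness. The filter types (median, Gaussian), their parameters, and the randomly generated stand-in volume are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from scipy import ndimage


def suppress_texture(volume, kind="median", size=3, sigma=1.0):
    """Attenuate fine-grained texture while largely preserving the coarse
    shape of anatomical structures (hypothetical filter choices)."""
    if kind == "median":
        return ndimage.median_filter(volume, size=size)
    if kind == "gaussian":
        return ndimage.gaussian_filter(volume, sigma=sigma)
    raise ValueError(f"unknown filter kind: {kind}")


def add_textural_noise(volume, std=0.05, seed=0):
    """Simulate previously unseen textural corruption (here, simple additive
    Gaussian noise) to test the robustness of a trained segmentation model."""
    rng = np.random.default_rng(seed)
    noisy = volume + rng.normal(0.0, std, size=volume.shape)
    return np.clip(noisy, volume.min(), volume.max())


if __name__ == "__main__":
    # Stand-in for a dHCP MRI volume; in practice this would be loaded from
    # NIfTI files (e.g. with nibabel) and intensity-normalised first.
    scan = np.random.rand(64, 64, 64).astype(np.float32)

    train_input = suppress_texture(scan, kind="median", size=3)  # pre-training filter
    eval_input = add_textural_noise(scan, std=0.05)              # robustness probe
    print(train_input.shape, eval_input.shape)
```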
Related papers
- ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation [49.42525661521625]
This paper presents ShapeMamba-EM, a specialized fine-tuning method for 3D EM segmentation.
It is tested over a wide range of EM images, covering five segmentation tasks and 10 datasets.
arXiv Detail & Related papers (2024-08-26T08:59:22Z) - μ-Net: A Deep Learning-Based Architecture for μ-CT Segmentation [2.012378666405002]
X-ray computed microtomography (μ-CT) is a non-destructive technique that can generate high-resolution 3D images of the internal anatomy of medical and biological samples.
However, extracting relevant information from 3D images requires semantic segmentation of the regions of interest.
We propose a novel framework that uses a convolutional neural network (CNN) to automatically segment the full morphology of the heart of Carassius auratus.
arXiv Detail & Related papers (2024-06-24T15:29:08Z) - Artifact Reduction in 3D and 4D Cone-beam Computed Tomography Images with Deep Learning -- A Review [0.0]
Deep learning techniques have been used to improve image quality in cone-beam computed tomography (CBCT).
We provide an overview of deep learning techniques that have successfully been shown to reduce artifacts in 3D, as well as in time-resolved (4D) CBCT.
One of the key findings of this work is an observed trend towards the use of generative models including GANs and score-based or diffusion models.
arXiv Detail & Related papers (2024-03-27T13:46:01Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments spanning the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Retinal OCT Synthesis with Denoising Diffusion Probabilistic Models for
Layer Segmentation [2.4113205575263708]
We propose an image synthesis method that utilizes denoising diffusion probabilistic models (DDPMs) to automatically generate retinal optical coherence tomography (OCT) images.
We observe a consistent improvement in layer segmentation accuracy, which is validated using various neural networks.
These findings demonstrate the promising potential of DDPMs in reducing the need for manual annotations of retinal OCT images.
arXiv Detail & Related papers (2023-11-09T16:09:24Z) - Disruptive Autoencoders: Leveraging Low-level features for 3D Medical
Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z) - Ultrasound Signal Processing: From Models to Deep Learning [64.56774869055826]
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions.
Deep learning based methods, which are optimized in a data-driven fashion, have gained popularity.
A relatively new paradigm combines the power of the two: leveraging data-driven deep learning, as well as exploiting domain knowledge.
arXiv Detail & Related papers (2022-04-09T13:04:36Z) - Feature visualization for convolutional neural network models trained on
neuroimaging data [0.0]
We show, for the first time, results obtained using feature visualization of convolutional neural networks (CNNs).
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task.
arXiv Detail & Related papers (2022-03-24T15:24:38Z) - Morphological Operation Residual Blocks: Enhancing 3D Morphological
Feature Representation in Convolutional Neural Networks for Semantic
Segmentation of Medical Images [0.8594140167290099]
This study proposes a novel network block architecture that embeds morphological operations as an infinitely strong prior in the convolutional neural network.
Several 3D deep learning models with the proposed morphological operation block were built and compared in different medical imaging segmentation tasks.
arXiv Detail & Related papers (2021-03-06T04:41:37Z) - Continuous Emotion Recognition with Spatiotemporal Convolutional Neural
Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory (LSTM) units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning.
arXiv Detail & Related papers (2020-11-18T13:42:05Z) - Interpretation of 3D CNNs for Brain MRI Data Classification [56.895060189929055]
We extend previous findings on gender differences from diffusion-tensor imaging to T1 brain MRI scans.
We provide a voxel-wise 3D CNN interpretation, comparing the results of three interpretation methods.
arXiv Detail & Related papers (2020-06-20T17:56:46Z)