3D-QCNet -- A Pipeline for Automated Artifact Detection in Diffusion MRI images
- URL: http://arxiv.org/abs/2103.05285v1
- Date: Tue, 9 Mar 2021 08:21:53 GMT
- Title: 3D-QCNet -- A Pipeline for Automated Artifact Detection in Diffusion MRI images
- Authors: Adnan Ahmad, Drew Parker, Zahra Riahi Samani, Ragini Verma
- Abstract summary: Artifacts are a common occurrence in Diffusion MRI (dMRI) scans.
Several QC methods for artifact detection exist, but they suffer from problems such as requiring manual intervention and an inability to generalize across different artifacts and datasets.
We propose an automated deep learning (DL) pipeline that utilizes a 3D-DenseNet architecture to train a model on diffusion volumes for automatic artifact detection.
- Score: 0.5735035463793007
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artifacts are a common occurrence in Diffusion MRI (dMRI) scans. Identifying and removing them is essential to ensure the accuracy and viability of any post-processing carried out on these scans. This makes QC (quality control) a crucial first step prior to any analysis of dMRI data. Several QC methods for artifact detection exist; however, they suffer from problems like requiring manual intervention and the inability to generalize across different artifacts and datasets. In this paper, we propose an automated deep learning (DL) pipeline that utilizes a 3D-DenseNet architecture to train a model on diffusion volumes for automatic artifact detection. Our method is applied to a vast dataset consisting of 9000 volumes sourced from 7 large clinical datasets. These datasets comprise scans from multiple scanners with different gradient directions, high and low b-values, and single-shell and multi-shell acquisitions. Additionally, they represent diverse subject demographics, such as the presence or absence of pathologies. Our QC method is found to generalize accurately across this heterogeneous data, correctly detecting 92% of artifacts on average across our test set. This consistent performance over diverse datasets underlines the generalizability of our method, a property whose current absence is a significant barrier to the widespread adoption of automated QC techniques. For these reasons, we believe that 3D-QCNet can be integrated into diffusion pipelines to effectively automate the arduous and time-intensive process of artifact detection.
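To make the described pipeline concrete, below is a minimal sketch of the kind of classifier the abstract outlines: a small DenseNet-style 3D CNN, written in PyTorch, that labels a single diffusion volume as clean or artifact-containing. This is not the authors' released 3D-QCNet implementation; the layer counts, growth rate, input size, and class labels are illustrative assumptions.

```python
# Minimal sketch of a DenseNet-style 3D volume classifier for artifact QC.
# NOT the authors' 3D-QCNet code; sizes and hyperparameters are assumptions.
import torch
import torch.nn as nn


class DenseBlock3D(nn.Module):
    """A few 3D conv layers whose outputs are concatenated (dense connectivity)."""

    def __init__(self, in_channels: int, growth_rate: int = 8, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(
                nn.Sequential(
                    nn.BatchNorm3d(channels),
                    nn.ReLU(inplace=True),
                    nn.Conv3d(channels, growth_rate, kernel_size=3, padding=1),
                )
            )
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)  # dense concatenation
        return x


class ArtifactQCNet(nn.Module):
    """Binary clean / artifact classifier for one diffusion volume."""

    def __init__(self, growth_rate: int = 8):
        super().__init__()
        self.stem = nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1)
        self.block1 = DenseBlock3D(16, growth_rate)
        self.pool1 = nn.MaxPool3d(2)
        self.block2 = DenseBlock3D(self.block1.out_channels, growth_rate)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
            nn.Linear(self.block2.out_channels, 2),  # logits: [clean, artifact]
        )

    def forward(self, x):
        x = self.stem(x)
        x = self.pool1(self.block1(x))
        x = self.block2(x)
        return self.head(x)


if __name__ == "__main__":
    # One single-channel 96x96x60 diffusion volume (batch size 1).
    volume = torch.randn(1, 1, 96, 96, 60)
    logits = ArtifactQCNet()(volume)
    print(logits.shape)  # torch.Size([1, 2])
```

In a QC pipeline of this kind, each volume of a dMRI series would be passed through such a classifier and flagged volumes would be reviewed or discarded before further processing.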
Related papers
- Self-supervised Feature Adaptation for 3D Industrial Anomaly Detection [59.41026558455904]
We focus on multi-modal anomaly detection. Specifically, we investigate early multi-modal approaches that attempted to utilize models pre-trained on large-scale visual datasets.
We propose a Local-to-global Self-supervised Feature Adaptation (LSFA) method to finetune the adaptors and learn task-oriented representation toward anomaly detection.
arXiv Detail & Related papers (2024-01-06T07:30:41Z)
- M3Dsynth: A dataset of medical 3D images with AI-generated local manipulations [10.20962191915879]
M3Dsynth is a large dataset of manipulated Computed Tomography (CT) lung images.
We create manipulated images by injecting or removing lung cancer nodules in real CT scans.
Experiments show that these images easily fool automated diagnostic tools.
arXiv Detail & Related papers (2023-09-14T18:16:58Z)
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains, but acquiring source data is challenging in multi-site studies due to privacy concerns and/or data storage burdens.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- Wide Range MRI Artifact Removal with Transformers [1.1305386767685186]
Artifacts on magnetic resonance scans are a serious challenge for radiologists and computer-aided diagnosis systems.
We propose a method capable of retrospectively removing eight common artifacts found in native volumetric MR imagery.
Our method is realized through the design of a novel transformer-based neural network that generalizes the window-centered approach of the Swin transformer.
arXiv Detail & Related papers (2022-10-14T17:16:03Z)
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for 3D DL models for 3D chest CT scan classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Fader Networks for domain adaptation on fMRI: ABIDE-II study [68.5481471934606]
We use 3D convolutional autoencoders to build a domain-irrelevant latent-space image representation and demonstrate that this method outperforms existing approaches on ABIDE data.
arXiv Detail & Related papers (2020-10-14T16:50:50Z)
- Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction [5.270882613122642]
A common approach to medical image analysis on volumetric data uses deep 2D convolutional neural networks (CNNs).
Dealing with the individual slices independently in 2D CNNs deliberately discards the depth information, which results in poor performance for the intended task.
We evaluate a set of volume uniformizing methods to address the aforementioned issues (a minimal depth-resampling sketch follows this list).
We report an area under the curve (AUC) of 73% and a binary classification accuracy (ACC) of 67.5% on the test set, beating all methods that leveraged only image information.
arXiv Detail & Related papers (2020-07-26T21:53:47Z)
- Volumetric landmark detection with a multi-scale shift equivariant neural network [16.114319747246334]
We propose a multi-scale, end-to-end deep learning method that achieves fast and memory-efficient landmark detection in 3D images.
We evaluate our method for carotid artery bifurcation detection on 263 CT volumes and achieve better-than-state-of-the-art accuracy, with a mean Euclidean distance error of 2.81 mm.
arXiv Detail & Related papers (2020-03-03T17:06:19Z)
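Referring back to the uniformizing-techniques entry in the list above, the sketch below illustrates the basic idea of making volumes depth-uniform before feeding them to a 3D CNN: each scan is resampled along the slice axis to a fixed slice count. The target depth (64 slices) and linear interpolation are illustrative assumptions, not the settings used in the cited paper.

```python
# Minimal sketch of depth uniformization: resample a CT volume to a fixed
# number of slices so a 3D CNN receives inputs of equal depth.
# The target slice count and interpolation order are assumptions.
import numpy as np
from scipy.ndimage import zoom


def uniformize_depth(volume: np.ndarray, target_slices: int = 64) -> np.ndarray:
    """Resample a (slices, height, width) volume to a fixed slice count."""
    depth_factor = target_slices / volume.shape[0]
    # Interpolate only along the slice axis; keep in-plane resolution unchanged.
    return zoom(volume, (depth_factor, 1.0, 1.0), order=1)


if __name__ == "__main__":
    scan = np.random.rand(103, 512, 512)  # a scan with an arbitrary slice count
    fixed = uniformize_depth(scan, target_slices=64)
    print(fixed.shape)  # (64, 512, 512)
```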