Slice-wise quality assessment of high b-value breast DWI via deep learning-based artifact detection
- URL: http://arxiv.org/abs/2603.03941v1
- Date: Wed, 04 Mar 2026 11:00:42 GMT
- Title: Slice-wise quality assessment of high b-value breast DWI via deep learning-based artifact detection
- Authors: Ameya Markale, Luise Brock, Ihor Horishnyi, Dominika Skwierawska, Tri-Thien Nguyen, Hannes Schreiter, Shirin Heidarikahkesh, Lorenz A. Kapsner, Michael Uder, Sabine Ohlmeyer, Frederik B Laun, Andrzej Liebert, Sebastian Bickelhaupt
- Abstract summary: Diffusion-weighted imaging (DWI) can support lesion detection and characterization in breast magnetic resonance imaging (MRI); however, high b-value diffusion-weighted acquisitions in particular can be prone to intensity artifacts that can affect diagnostic image assessment. This study aims to detect both hyper- and hypointense artifacts on high b-value diffusion-weighted images using deep learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion-weighted imaging (DWI) can support lesion detection and characterization in breast magnetic resonance imaging (MRI); however, high b-value diffusion-weighted acquisitions in particular can be prone to intensity artifacts that affect diagnostic image assessment. This study aims to detect both hyper- and hypointense artifacts on high b-value diffusion-weighted images (b=1500 s/mm²) using deep learning, employing either a binary classification (artifact presence) or a multiclass classification (artifact intensity) approach on a slice-wise dataset. This IRB-approved retrospective study used a single-center dataset comprising n=11806 slices from routine 3T breast MRI examinations performed between 2022 and mid-2023. Three convolutional neural network (CNN) architectures (DenseNet121, ResNet18, and SEResNet50) were trained for binary classification of hyper- and hypointense artifacts. The best-performing model (DenseNet121) was applied to an independent holdout test set and was further trained separately for multiclass classification. Evaluation included the area under the receiver operating characteristic curve (AUROC), the area under the precision-recall curve (AUPRC), precision, and recall, as well as an analysis of predicted bounding box positions derived from the network's Grad-CAM heatmaps. DenseNet121 achieved AUROCs of 0.92 and 0.94 for hyper- and hypointense artifact detection, respectively, and weighted AUROCs of 0.85 and 0.88 for multiclass classification on single-slice high b-value diffusion-weighted images. A radiologist evaluated bounding box precision on a 1-5 Likert-like scale across 200 slices, yielding mean scores of 3.33±1.04 for hyperintense artifacts and 2.62±0.81 for hypointense artifacts. Hyper- and hypointense artifact detection in a slice-wise breast DWI dataset (b=1500 s/mm²) using CNNs, particularly DenseNet121, seems promising and requires further validation.
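The abstract describes deriving bounding box positions from the network's Grad-CAM heatmaps. A minimal sketch of one way such a box could be extracted, thresholding the heatmap at a fraction of its peak activation; the 0.5 threshold and the end-exclusive box convention here are illustrative assumptions, not the paper's reported method:

```python
import numpy as np

def gradcam_bbox(heatmap, frac=0.5):
    """Derive a bounding box (row0, col0, row1, col1), end-exclusive,
    from a 2D Grad-CAM heatmap by thresholding at `frac` of the peak
    activation. Returns None for an all-zero heatmap (no activation)."""
    peak = float(heatmap.max())
    if peak <= 0.0:
        return None
    mask = heatmap >= frac * peak
    rows = np.flatnonzero(mask.any(axis=1))  # rows touching the hot region
    cols = np.flatnonzero(mask.any(axis=0))  # columns touching it
    return int(rows[0]), int(cols[0]), int(rows[-1]) + 1, int(cols[-1]) + 1

# Toy heatmap with a bright 2x3 "artifact" response.
hm = np.zeros((8, 8))
hm[2:4, 3:6] = 1.0
print(gradcam_bbox(hm))  # (2, 3, 4, 6)
```

Boxes obtained this way could then be compared against radiologist ratings slice by slice, as in the 200-slice Likert evaluation described above.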
Related papers
- Deep Generative Models for Enhanced Vitreous OCT Imaging [0.7130302992490973]
Conditional Denoising Diffusion Probabilistic Models (cDDPMs), Brownian Bridge Diffusion Models (BBDMs), U-Net, Pix2Pix, and Vector-Quantised Generative Adversarial Network (VQ-GAN) were evaluated.
arXiv Detail & Related papers (2025-11-02T10:36:59Z)
- A Novel Attention-Augmented Wavelet YOLO System for Real-time Brain Vessel Segmentation on Transcranial Color-coded Doppler [49.03919553747297]
We propose an AI-powered, real-time CoW auto-segmentation system capable of efficiently capturing cerebral arteries. No prior studies have explored AI-driven cerebrovascular segmentation using Transcranial Color-coded Doppler (TCCD). The proposed AAW-YOLO demonstrated strong performance in segmenting both ipsilateral and contralateral CoW vessels.
arXiv Detail & Related papers (2025-08-19T14:41:22Z)
- Semi-Supervised Anomaly Detection in Brain MRI Using a Domain-Agnostic Deep Reinforcement Learning Approach [2.3633885460047765]
We develop a domain-agnostic, semi-supervised anomaly detection framework using deep reinforcement learning (DRL) to address challenges such as large-scale data, overfitting, and class imbalance. This study used publicly available brain MRI datasets collected between 2005 and 2021.
arXiv Detail & Related papers (2025-08-02T01:39:13Z)
- Attenuation artifact detection and severity classification in intracoronary OCT using mixed image representations [2.334201943310467]
We propose a convolutional neural network that classifies attenuation lines (A-lines) into three classes: no artifact, mild artifact, and severe artifact. Our method detects the presence of attenuation artifacts in OCT frames, reaching F-scores of 0.77 and 0.94 for mild and severe artifacts, respectively.
arXiv Detail & Related papers (2025-03-07T11:01:00Z)
- Unsupervised dMRI Artifact Detection via Angular Resolution Enhancement and Cycle Consistency Learning [45.3610312584439]
Diffusion magnetic resonance imaging (dMRI) is a crucial technique in neuroimaging studies, allowing for the non-invasive probing of the underlying structures of brain tissues.
Clinical dMRI data is susceptible to various artifacts during acquisition, which can lead to unreliable subsequent analyses.
We propose a novel unsupervised deep learning framework for dMRI artifact detection via angular resolution enhancement and cycle consistency learning.
arXiv Detail & Related papers (2024-09-24T08:56:10Z)
- An edge detection-based deep learning approach for tear meniscus height measurement [20.311238180811404]
We introduce an automatic TMH measurement technique based on edge detection-assisted annotation within a deep learning framework.
For improved segmentation of the pupil and tear meniscus areas, the convolutional neural network Inceptionv3 was first implemented.
The algorithm can automatically screen images based on their quality, segment the pupil and tear meniscus areas, and automatically measure TMH.
arXiv Detail & Related papers (2024-03-23T14:16:26Z)
- Comparison of retinal regions-of-interest imaged by OCT for the classification of intermediate AMD [3.0171643773711208]
A total of 15744 B-scans from 269 intermediate AMD patients and 115 normal subjects were used in this study.
For each subset, a convolutional neural network (based on VGG16 architecture and pre-trained on ImageNet) was trained and tested.
The performance of the models was evaluated using the area under the receiver operating characteristic (AUROC), accuracy, sensitivity, and specificity.
arXiv Detail & Related papers (2023-05-04T13:48:55Z)
- Attention-based Saliency Maps Improve Interpretability of Pneumothorax Classification [52.77024349608834]
To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and interpretability of attention-based saliency.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z)
- Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z)
- 3D Structural Analysis of the Optic Nerve Head to Robustly Discriminate Between Papilledema and Optic Disc Drusen [44.754910718620295]
We developed a deep learning algorithm to identify major tissue structures of the optic nerve head (ONH) in 3D optical coherence tomography (OCT) scans.
A classification algorithm was designed using 150 OCT volumes to perform 3-class classifications (1: ODD, 2: papilledema, 3: healthy) strictly from their drusen and prelamina swelling scores.
Our AI approach accurately discriminated ODD from papilledema, using a single OCT scan.
arXiv Detail & Related papers (2021-12-18T17:05:53Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained for sub-fracture classes, using the largest and richest femur fracture dataset to date.
arXiv Detail & Related papers (2021-08-07T10:12:42Z)
- Machine-Learning-Based Multiple Abnormality Prediction with Large-Scale Chest Computed Tomography Volumes [64.21642241351857]
We curated and analyzed a chest computed tomography (CT) data set of 36,316 volumes from 19,993 unique patients.
We developed a rule-based method for automatically extracting abnormality labels from free-text radiology reports.
We also developed a model for multi-organ, multi-disease classification of chest CT volumes.
arXiv Detail & Related papers (2020-02-12T00:59:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.