Improving Artifact Robustness for CT Deep Learning Models Without Labeled Artifact Images via Domain Adaptation
- URL: http://arxiv.org/abs/2510.06584v1
- Date: Wed, 08 Oct 2025 02:27:09 GMT
- Title: Improving Artifact Robustness for CT Deep Learning Models Without Labeled Artifact Images via Domain Adaptation
- Authors: Justin Cheung, Samuel Savine, Calvin Nguyen, Lin Lu, Alhassan S. Yasin
- Abstract summary: This study evaluates domain adaptation as an approach for training models that maintain classification performance despite new artifacts. We simulate ring artifacts from detector gain error in sinogram space and evaluate domain adversarial neural networks (DANN) against baseline and augmentation-based approaches on the OrganAMNIST abdominal CT dataset. Our results demonstrate that baseline models trained only on clean images fail to generalize to images with ring artifacts, and traditional augmentation with other distortion types provides no improvement on unseen artifact domains.
- Score: 2.7001982817730616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models which perform well on images from their training distribution can degrade substantially when applied to new distributions. If a CT scanner introduces a new artifact not present in the training labels, the model may misclassify the images. Although modern CT scanners include design features which mitigate these artifacts, unanticipated or difficult-to-mitigate artifacts can still appear in practice. The direct solution of labeling images from this new distribution can be costly. As a more accessible alternative, this study evaluates domain adaptation as an approach for training models that maintain classification performance despite new artifacts, even without corresponding labels. We simulate ring artifacts from detector gain error in sinogram space and evaluate domain adversarial neural networks (DANN) against baseline and augmentation-based approaches on the OrganAMNIST abdominal CT dataset. Our results demonstrate that baseline models trained only on clean images fail to generalize to images with ring artifacts, and traditional augmentation with other distortion types provides no improvement on unseen artifact domains. In contrast, the DANN approach successfully maintains high classification accuracy on ring artifact images using only unlabeled artifact data during training, demonstrating the viability of domain adaptation for artifact robustness. The domain-adapted model achieved classification performance on ring artifact test data comparable to models explicitly trained with labeled artifact images, while also showing unexpected generalization to uniform noise. These findings provide empirical evidence that domain adaptation can effectively address distribution shift in medical imaging without requiring expensive expert labeling of new artifact distributions, suggesting promise for deployment in clinical settings where novel artifacts may emerge.
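The abstract's artifact-simulation step can be illustrated with a minimal numpy sketch (this is not the authors' code; the function name and the angles-by-detectors sinogram layout are assumptions for illustration). A detector element with a gain error scales one sinogram column consistently across all projection angles, and that per-detector bias reconstructs as a ring centered on the rotation axis:

```python
import numpy as np

def apply_gain_error(sinogram: np.ndarray, detector: int, gain: float) -> np.ndarray:
    """Simulate a miscalibrated detector element.

    One detector column of the sinogram is scaled by a constant gain
    across every projection angle; after filtered back-projection this
    consistent per-detector bias appears as a ring artifact.
    """
    corrupted = sinogram.copy()
    corrupted[:, detector] *= gain
    return corrupted

# Toy sinogram: 180 projection angles x 64 detector elements.
rng = np.random.default_rng(0)
sino = rng.uniform(0.5, 1.5, size=(180, 64))
bad = apply_gain_error(sino, detector=20, gain=1.1)
```

In a full pipeline the corrupted sinogram would then be reconstructed (e.g. with filtered back-projection, such as `skimage.transform.iradon`) to produce the ring-artifact images used as the unlabeled target domain for DANN training.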
Related papers
- See and Fix the Flaws: Enabling VLMs and Diffusion Models to Comprehend Visual Artifacts via Agentic Data Synthesis [17.896266572037348]
ArtiAgent efficiently creates pairs of real and artifact-injected images. It comprises three agents: a perception agent that recognizes entities and subentities from real images, a synthesis agent that introduces artifacts via artifact injection tools, and a curation agent that filters the synthesized artifacts.
arXiv Detail & Related papers (2026-02-24T14:34:13Z)
- Low performing pixel correction in computed tomography with unrolled network and synthetic data training [0.16777183511743465]
Low performance pixels (LPP) in Computed Tomography (CT) detectors lead to ring and streak artifacts in reconstructed images. We propose an unrolled dual-domain method based on synthetic data to correct LPP artifacts.
arXiv Detail & Related papers (2026-01-28T19:46:30Z)
- DiffusionQC: Artifact Detection in Histopathology via Diffusion Model [2.29008216212261]
We propose DiffusionQC, which detects artifacts as outliers among clean images using a diffusion model. It requires only a set of clean images for training rather than pixel-level artifact annotations and predefined artifact types. Empirical results demonstrate performance superior to the state of the art and cross-stain generalization capacity.
arXiv Detail & Related papers (2026-01-18T02:59:26Z)
- Synthesizing Artifact Dataset for Pixel-level Detection [16.31703475992344]
Artifact detectors enhance the performance of image-generative models by serving as reward models during fine-tuning. We propose an artifact corruption pipeline that automatically injects artifacts into clean, high-quality synthetic images on a predetermined region. The proposed method achieves performance improvements of 13.2% for ConvNeXt and 3.7% for Swin-T, as verified on human-labeled data.
arXiv Detail & Related papers (2025-09-23T21:28:33Z)
- DiffDoctor: Diagnosing Image Diffusion Models Before Treating [57.82359018425674]
We propose DiffDoctor, a two-stage pipeline to assist image diffusion models in generating fewer artifacts. We collect a dataset of over 1M flawed synthesized images and set up an efficient human-in-the-loop annotation process. The learned artifact detector is then involved in the second stage to optimize the diffusion model by providing pixel-level feedback.
arXiv Detail & Related papers (2025-01-21T18:56:41Z)
- A Bias-Free Training Paradigm for More General AI-generated Image Detection [15.421102443599773]
A well-designed forensic detector should detect generator-specific artifacts rather than reflect data biases. We propose B-Free, a bias-free training paradigm, where fake images are generated from real ones. We show significant improvements in both generalization and robustness over state-of-the-art detectors.
arXiv Detail & Related papers (2024-12-23T15:54:32Z)
- Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z)
- Orientation-Shared Convolution Representation for CT Metal Artifact Learning [63.67718355820655]
During X-ray computed tomography (CT) scanning, metallic implants carried by patients often lead to adverse artifacts.
Existing deep-learning-based methods have gained promising reconstruction performance.
We propose an orientation-shared convolution representation strategy that adapts to the physical prior structures of artifacts.
arXiv Detail & Related papers (2022-12-26T13:56:12Z)
- Learning MRI Artifact Removal With Unpaired Data [74.48301038665929]
Retrospective artifact correction (RAC) improves image quality post-acquisition and enhances image usability.
Recent machine learning driven techniques for RAC are predominantly based on supervised learning.
Here we show that unwanted image artifacts can be disentangled and removed from an image via an RAC neural network learned with unpaired data.
arXiv Detail & Related papers (2021-10-09T16:09:27Z)
- CutPaste: Self-Supervised Learning for Anomaly Detection and Localization [59.719925639875036]
We propose a framework for building anomaly detectors using normal training data only.
We first learn self-supervised deep representations and then build a generative one-class classifier on learned representations.
Our empirical study on the MVTec anomaly detection dataset demonstrates that the proposed algorithm is general enough to detect various types of real-world defects.
arXiv Detail & Related papers (2021-04-08T19:04:55Z)
- Weakly- and Semi-Supervised Probabilistic Segmentation and Quantification of Ultrasound Needle-Reverberation Artifacts to Allow Better AI Understanding of Tissue Beneath Needles [0.0]
We propose a probabilistic needle-and-reverberation-artifact segmentation algorithm to separate desired tissue-based pixel values from superimposed artifacts.
Our method matches state-of-the-art artifact segmentation performance and sets a new standard in estimating the per-pixel contributions of artifacts vs. underlying anatomy.
arXiv Detail & Related papers (2020-11-24T08:34:38Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.