M3Dsynth: A dataset of medical 3D images with AI-generated local
manipulations
- URL: http://arxiv.org/abs/2309.07973v2
- Date: Thu, 1 Feb 2024 09:50:08 GMT
- Title: M3Dsynth: A dataset of medical 3D images with AI-generated local
manipulations
- Authors: Giada Zingarini and Davide Cozzolino and Riccardo Corvi and Giovanni
Poggi and Luisa Verdoliva
- Abstract summary: M3Dsynth is a large dataset of manipulated Computed Tomography (CT) lung images.
We create manipulated images by injecting or removing lung cancer nodules in real CT scans.
Experiments show that these images easily fool automated diagnostic tools.
- Score: 10.20962191915879
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability to detect manipulated visual content is becoming increasingly
important in many application fields, given the rapid advances in image
synthesis methods. Of particular concern is the possibility of modifying the
content of medical images, altering the resulting diagnoses. Despite its
relevance, this issue has received limited attention from the research
community. One reason is the lack of large and curated datasets to use for
development and benchmarking purposes. Here, we investigate this issue and
propose M3Dsynth, a large dataset of manipulated Computed Tomography (CT) lung
images. We create manipulated images by injecting or removing lung cancer
nodules in real CT scans, using three different methods based on Generative
Adversarial Networks (GAN) or Diffusion Models (DM), for a total of 8,577
manipulated samples. Experiments show that these images easily fool automated
diagnostic tools. We also tested several state-of-the-art forensic detectors
and demonstrated that, once trained on the proposed dataset, they are able to
accurately detect and localize manipulated synthetic content, even when
training and test sets are not aligned, showing good generalization ability.
Dataset and code are publicly available at
https://grip-unina.github.io/M3Dsynth/.
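As a toy illustration of the kind of local manipulation the dataset contains (not the actual GAN/DM pipelines used to build M3Dsynth), a synthetic spherical nodule can be alpha-blended into a CT volume with a soft mask:

```python
import numpy as np

def inject_nodule(volume, center, radius, intensity=200.0):
    """Blend a synthetic spherical 'nodule' into a 3D CT volume.

    Toy stand-in for the GAN/DM-based manipulations in M3Dsynth:
    a smooth spherical mask alpha-blends a bright blob into the scan.
    """
    z, y, x = np.ogrid[:volume.shape[0], :volume.shape[1], :volume.shape[2]]
    dist = np.sqrt((z - center[0]) ** 2 + (y - center[1]) ** 2 + (x - center[2]) ** 2)
    # Soft mask: 1 at the sphere center, fading to 0 at the boundary.
    alpha = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return volume * (1.0 - alpha) + intensity * alpha

# Example: inject into a synthetic 64^3 volume of lung-like background.
vol = np.full((64, 64, 64), -700.0)                 # approx. lung parenchyma in HU
manip = inject_nodule(vol, center=(32, 32, 32), radius=6)
print(manip[32, 32, 32])                            # 200.0: fully replaced at the center
```

A forensic detector for this dataset would be trained to localize exactly such locally altered regions; the actual generative manipulations are of course far harder to spot than this additive blob.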
Related papers
- Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, denoted the informed slice, which serves as the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z)
- Enhancing Weakly Supervised 3D Medical Image Segmentation through Probabilistic-aware Learning [52.249748801637196]
3D medical image segmentation is a challenging task with crucial implications for disease diagnosis and treatment planning.
Recent advances in deep learning have significantly enhanced fully supervised medical image segmentation.
We propose a novel probabilistic-aware weakly supervised learning pipeline, specifically designed for 3D medical imaging.
arXiv Detail & Related papers (2024-03-05T00:46:53Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
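The masking-plus-perturbation idea can be sketched as follows; the patch size, masking fraction, and noise model here are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def disrupt(volume, patch=8, mask_frac=0.3, noise_std=0.05, rng=None):
    """Create a 'disrupted' input: add low-level noise and zero out random
    local 3D patches. A reconstruction network would then be trained to
    recover the original volume from this input (sketch of the pre-training
    idea, not the paper's exact recipe)."""
    rng = np.random.default_rng(rng)
    out = volume + rng.normal(0.0, noise_std, volume.shape)   # low-level perturbation
    nz, ny, nx = (s // patch for s in volume.shape)
    mask = rng.random((nz, ny, nx)) < mask_frac               # pick patches to mask
    mask = mask.repeat(patch, 0).repeat(patch, 1).repeat(patch, 2)
    out[mask] = 0.0                                           # local masking
    return out, mask

vol = np.random.default_rng(0).random((32, 32, 32))
disrupted, mask = disrupt(vol, rng=1)
print(disrupted.shape, mask.mean())   # same shape; masked fraction near mask_frac
```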
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- MedGen3D: A Deep Generative Framework for Paired 3D Image and Mask Generation [17.373961762646356]
We present MedGen3D, a framework that can generate paired 3D medical images and masks.
Our proposed framework guarantees accurate alignment between synthetic images and segmentation maps.
arXiv Detail & Related papers (2023-04-08T21:43:26Z)
- Solving Sample-Level Out-of-Distribution Detection on 3D Medical Images [0.06117371161379209]
Out-of-distribution (OOD) detection helps to identify samples that deviate from the training distribution, increasing the model's reliability.
Recent works have developed DL-based OOD detection that achieves promising results on 2D medical images.
However, scaling most of these approaches on 3D images is computationally intractable.
We propose a histogram-based method that requires no DL and achieves almost perfect results in this domain.
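A minimal sketch of such a histogram-based OOD score, assuming a chi-squared distance to a reference histogram averaged over in-distribution scans (the paper's exact statistic may differ):

```python
import numpy as np

def histogram_ood_score(volume, ref_hist, bins, value_range):
    """Score a 3D volume by the chi-squared distance between its intensity
    histogram and a reference histogram built from in-distribution scans.
    Higher score = more likely out-of-distribution. No deep learning needed."""
    h, _ = np.histogram(volume, bins=bins, range=value_range, density=True)
    eps = 1e-12
    return 0.5 * np.sum((h - ref_hist) ** 2 / (h + ref_hist + eps))

rng = np.random.default_rng(0)
# Reference histogram from a few "in-distribution" volumes.
in_dist = [rng.normal(0, 1, (16, 16, 16)) for _ in range(5)]
ref = np.mean([np.histogram(v, bins=32, range=(-5, 5), density=True)[0]
               for v in in_dist], axis=0)

s_in = histogram_ood_score(rng.normal(0, 1, (16, 16, 16)), ref, 32, (-5, 5))
s_out = histogram_ood_score(rng.normal(2, 1, (16, 16, 16)), ref, 32, (-5, 5))
print(s_in < s_out)   # True: the intensity-shifted volume scores higher
```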
arXiv Detail & Related papers (2022-12-13T11:42:23Z)
- 3D-QCNet -- A Pipeline for Automated Artifact Detection in Diffusion MRI images [0.5735035463793007]
Artifacts are a common occurrence in Diffusion MRI (dMRI) scans.
Several QC methods for artifact detection exist, but they suffer from problems like requiring manual intervention and the inability to generalize across different artifacts and datasets.
We propose an automated deep learning (DL) pipeline that utilizes a 3D-Densenet architecture to train a model on diffusion volumes for automatic artifact detection.
arXiv Detail & Related papers (2021-03-09T08:21:53Z)
- Automated Model Design and Benchmarking of 3D Deep Learning Models for COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for the 3D DL models for 3D chest CT scans classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
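CAM itself is a standard interpretability technique: the final convolutional feature maps are weighted by the classifier weights of the target class and summed over channels. A minimal 3D sketch (shapes are illustrative):

```python
import numpy as np

def class_activation_map(features, class_weights):
    """Class Activation Mapping: weight each final-layer feature map by the
    classifier weight for the target class and sum over channels.
    features: (C, D, H, W) activations; class_weights: (C,) weights.
    Returns a (D, H, W) heatmap highlighting class-relevant regions."""
    cam = np.tensordot(class_weights, features, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)                        # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

rng = np.random.default_rng(0)
feats = rng.random((8, 4, 6, 6))   # toy 3D feature maps from the last conv layer
w = rng.random(8)                  # toy classifier weights for one class
heatmap = class_activation_map(feats, w)
print(heatmap.shape)               # (4, 6, 6)
```

Overlaying such a heatmap on the CT scan shows which regions drove the COVID-19 classification decision.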
arXiv Detail & Related papers (2021-01-14T03:45:01Z)
- Generative Adversarial U-Net for Domain-free Medical Image Augmentation [49.72048151146307]
The shortage of annotated medical images is one of the biggest challenges in the field of medical image computing.
In this paper, we develop a novel generative method named generative adversarial U-Net.
Our newly designed model is domain-free and generalizable to various medical images.
arXiv Detail & Related papers (2021-01-12T23:02:26Z)
- Planar 3D Transfer Learning for End to End Unimodal MRI Unbalanced Data Segmentation [0.0]
We present a novel approach of 2D to 3D transfer learning based on mapping pre-trained 2D convolutional neural network weights into planar 3D kernels.
The method is validated by the proposed planar 3D res-u-net network with encoder transferred from the 2D VGG-16.
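The core weight-mapping step can be sketched as follows: a pre-trained 2D kernel becomes a depth-1 "planar" 3D kernel, so it operates slice-wise on the volume (shapes are illustrative):

```python
import numpy as np

def planar_3d_kernel(weights_2d):
    """Map pre-trained 2D conv weights (C_out, C_in, k, k) into 'planar'
    3D kernels (C_out, C_in, 1, k, k). Each kernel has depth 1, so it acts
    on a single slice of the volume and 2D pre-training transfers directly."""
    return weights_2d[:, :, np.newaxis, :, :]

w2d = np.random.default_rng(0).random((64, 3, 3, 3))  # e.g. a VGG-16 conv layer
w3d = planar_3d_kernel(w2d)
print(w3d.shape)   # (64, 3, 1, 3, 3)
```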
arXiv Detail & Related papers (2020-11-23T17:11:50Z)
- Fed-Sim: Federated Simulation for Medical Imaging [131.56325440976207]
We introduce a physics-driven generative approach that consists of two learnable neural modules.
We show that our data synthesis framework improves the downstream segmentation performance on several datasets.
arXiv Detail & Related papers (2020-09-01T19:17:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.