AI-based association analysis for medical imaging using latent-space
geometric confounder correction
- URL: http://arxiv.org/abs/2311.12836v1
- Date: Tue, 3 Oct 2023 16:09:07 GMT
- Title: AI-based association analysis for medical imaging using latent-space
geometric confounder correction
- Authors: Xianjing Liu, Bo Li, Meike W. Vernooij, Eppo B. Wolvius, Gennady V.
Roshchupkin, Esther E. Bron
- Abstract summary: We introduce an AI method emphasizing semantic feature interpretation and resilience against multiple confounders.
Our approach's merits are tested in three scenarios: extracting confounder-free features from a 2D synthetic dataset; examining the association between prenatal alcohol exposure and children's facial shapes using 3D mesh data; and exploring the relationship between global cognition and brain images using a 3D MRI dataset.
Results confirm our method effectively reduces confounder influences, establishing less confounded associations.
- Score: 6.488049546344972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI has greatly enhanced medical image analysis, yet its use in
epidemiological population imaging studies remains limited due to visualization
challenges in non-linear models and lack of confounder control. Addressing
this, we introduce an AI method emphasizing semantic feature interpretation and
resilience against multiple confounders. Our approach's merits are tested in
three scenarios: extracting confounder-free features from a 2D synthetic
dataset; examining the association between prenatal alcohol exposure and
children's facial shapes using 3D mesh data; exploring the relationship between
global cognition and brain images with a 3D MRI dataset. Results confirm our
method effectively reduces confounder influences, establishing less confounded
associations. Additionally, it provides a unique visual representation,
highlighting specific image alterations due to identified correlations.
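The abstract does not spell out the correction step itself. As a hedged illustration of the general idea behind latent-space confounder correction, the sketch below uses plain linear residualization of encoder latents against measured confounders; it is a generic stand-in, not the authors' geometric method, and all names (residualize_latents, the toy arrays) are hypothetical.

```python
import numpy as np

def residualize_latents(z, c):
    """Remove the linear effect of measured confounders c from latent features z.

    z : (n_subjects, n_latent) latent features from an image encoder
    c : (n_subjects, n_confounders) measured confounders (e.g. age, sex)
    """
    c1 = np.column_stack([np.ones(len(c)), c])       # add an intercept column
    beta, *_ = np.linalg.lstsq(c1, z, rcond=None)    # least-squares fit of z on confounders
    return z - c1 @ beta                             # residual (linearly confounder-free) latents

# Toy usage: association between corrected latents and an exposure variable.
rng = np.random.default_rng(0)
z = rng.normal(size=(200, 8))          # stand-in encoder outputs
c = rng.normal(size=(200, 2))          # stand-in confounders
exposure = rng.normal(size=200)        # stand-in exposure
z_clean = residualize_latents(z, c)
corr = np.corrcoef(z_clean[:, 0], exposure)[0, 1]
print(f"association with first corrected latent: {corr:.3f}")
```

The paper itself works with the latents of a non-linear model and additionally provides a visual representation of the implicated image regions; the sketch only conveys the residualization idea.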
Related papers
- Abnormality-Driven Representation Learning for Radiology Imaging [0.8321462983924758]
We introduce lesion-enhanced contrastive learning (LeCL), a novel approach to obtain visual representations driven by abnormalities in 2D axial slices across different locations of the CT scans.
We evaluate our approach across three clinical tasks: tumor lesion location, lung disease detection, and patient staging, benchmarking against four state-of-the-art foundation models.
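The summary only names the contrastive objective. As a generic, hedged sketch (a standard InfoNCE-style loss, not the LeCL formulation itself), embeddings of two views of the same slice can be contrasted as follows; function and variable names are hypothetical.

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Generic InfoNCE loss: matching rows of z_a and z_b are positives.

    z_a, z_b : (batch, dim) embeddings of two augmented views of the same slices.
    """
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature               # (batch, batch) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))               # positives sit on the diagonal

rng = np.random.default_rng(0)
view_a, view_b = rng.normal(size=(32, 128)), rng.normal(size=(32, 128))
print(f"toy loss: {info_nce(view_a, view_b):.3f}")
```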
arXiv Detail & Related papers (2024-11-25T13:53:26Z)
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach organizes various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
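As a hedged illustration of the token-sequence framing only (not the paper's tokenizer or model), a 3D volume quantized into discrete visual tokens can be flattened and paired with shifted next-token targets; the codebook size and array shapes below are made up.

```python
import numpy as np

# Toy stand-in: a quantized 3D volume as a grid of discrete visual token ids.
rng = np.random.default_rng(0)
volume_tokens = rng.integers(0, 8192, size=(4, 4, 4))   # hypothetical codebook of 8192 ids
sequence = volume_tokens.reshape(-1)                     # raster-order flattening
inputs, targets = sequence[:-1], sequence[1:]            # autoregressive next-token targets
print(inputs.shape, targets.shape)                       # (63,) (63,); the model itself is omitted
```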
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- Learning Brain Tumor Representation in 3D High-Resolution MR Images via Interpretable State Space Models [42.55786269051626]
We propose a novel state-space-model (SSM)-based masked autoencoder which scales ViT-like models to handle high-resolution data effectively.
We propose a latent-to-spatial mapping technique that enables direct visualization of how latent features correspond to specific regions in the input volumes.
Our results highlight the potential of SSM-based self-supervised learning to transform radiomics analysis by combining efficiency and interpretability.
arXiv Detail & Related papers (2024-09-12T04:36:50Z)
- QUBIQ: Uncertainty Quantification for Biomedical Image Segmentation Challenge [93.61262892578067]
Uncertainty in medical image segmentation tasks, especially inter-rater variability, presents a significant challenge.
This variability directly impacts the development and evaluation of automated segmentation algorithms.
We report the set-up and summarize the benchmark results of the Quantification of Uncertainties in Biomedical Image Quantification Challenge (QUBIQ).
arXiv Detail & Related papers (2024-03-19T17:57:24Z)
- 3DTINC: Time-Equivariant Non-Contrastive Learning for Predicting Disease Progression from Longitudinal OCTs [8.502838668378432]
We propose a new longitudinal self-supervised learning method, 3DTINC, based on non-contrastive learning.
It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT.
Our experiments show that this temporal information is crucial for predicting progression of retinal diseases, such as age-related macular degeneration (AMD).
arXiv Detail & Related papers (2023-12-28T11:47:12Z)
- Genetic InfoMax: Exploring Mutual Information Maximization in High-Dimensional Imaging Genetics Studies [50.11449968854487]
Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework Genetic InfoMax (GIM) to address the specific challenges of GWAS.
arXiv Detail & Related papers (2023-09-26T03:59:21Z)
- How You Split Matters: Data Leakage and Subject Characteristics Studies in Longitudinal Brain MRI Analysis [0.0]
Deep learning models have revolutionized the field of medical image analysis, offering significant promise for improved diagnostics and patient care.
However, their performance can be misleadingly optimistic due to a hidden pitfall called 'data leakage'.
In this study, we investigate data leakage in 3D medical imaging, specifically using 3D Convolutional Neural Networks (CNNs) for brain MRI analysis.
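As a hedged, generic sketch of subject-wise splitting that avoids this pitfall (not the study's exact protocol), scans can be grouped by subject ID so that no subject appears in both train and test sets; the arrays below are toy stand-ins.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
subject_ids = np.repeat(np.arange(50), 3)            # 50 subjects, 3 longitudinal scans each
scans = rng.normal(size=(len(subject_ids), 16))      # stand-in scan-level features
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(scans, groups=subject_ids))
# Subject-wise grouping guarantees no subject overlap between the two sets.
assert not set(subject_ids[train_idx]) & set(subject_ids[test_idx])
```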
arXiv Detail & Related papers (2023-09-01T09:15:06Z)
- Controllable Mind Visual Diffusion Model [58.83896307930354]
Brain signal visualization has emerged as an active research area, serving as a critical interface between the human visual system and computer vision models.
We propose a novel approach, referred to as the Controllable Mind Visual Diffusion Model (CMVDM).
CMVDM extracts semantic and silhouette information from fMRI data using attribute alignment and assistant networks.
We then leverage a control model to fully exploit the extracted information for image synthesis, resulting in generated images that closely resemble the visual stimuli in terms of semantics and silhouette.
arXiv Detail & Related papers (2023-05-17T11:36:40Z)
- DRAC: Diabetic Retinopathy Analysis Challenge with Ultra-Wide Optical Coherence Tomography Angiography Images [51.27125547308154]
We organized a challenge named "DRAC - Diabetic Retinopathy Analysis Challenge" in conjunction with the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022).
The challenge consists of three tasks: segmentation of DR lesions, image quality assessment and DR grading.
This paper presents a summary and analysis of the top-performing solutions and results for each task of the challenge.
arXiv Detail & Related papers (2023-04-05T12:04:55Z)
- 3D Reasoning for Unsupervised Anomaly Detection in Pediatric WbMRI [3.8583005413310625]
We show that incorporating the 3D context and processing whole-body MRI volumes is beneficial to distinguishing anomalies from their benign counterparts.
Our work also shows that it is beneficial to include additional patient-specific features to further improve anomaly detection in pediatric scans.
arXiv Detail & Related papers (2021-03-24T21:37:01Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset, which contains images captured with different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.