Genetic InfoMax: Exploring Mutual Information Maximization in
High-Dimensional Imaging Genetics Studies
- URL: http://arxiv.org/abs/2309.15132v1
- Date: Tue, 26 Sep 2023 03:59:21 GMT
- Title: Genetic InfoMax: Exploring Mutual Information Maximization in
High-Dimensional Imaging Genetics Studies
- Authors: Yaochen Xie, Ziqian Xie, Sheikh Muhammad Saiful Islam, Degui Zhi,
Shuiwang Ji
- Abstract summary: Genome-wide association studies (GWAS) are used to identify relationships between genetic variations and specific traits.
Representation learning for imaging genetics is largely under-explored due to the unique challenges posed by GWAS.
We introduce a trans-modal learning framework, Genetic InfoMax (GIM), to address the specific challenges of GWAS.
- Score: 50.11449968854487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Genome-wide association studies (GWAS) are used to identify relationships
between genetic variations and specific traits. When applied to
high-dimensional medical imaging data, a key step is to extract
lower-dimensional, yet informative representations of the data as traits.
Representation learning for imaging genetics is largely under-explored due to
the unique challenges posed by GWAS in comparison to typical visual
representation learning. In this study, we tackle this problem from the mutual
information (MI) perspective by identifying key limitations of existing
methods. We introduce a trans-modal learning framework, Genetic InfoMax (GIM),
including a regularized MI estimator and a novel genetics-informed transformer
to address the specific challenges of GWAS. We evaluate GIM on human brain 3D
MRI data and establish standardized evaluation protocols to compare it to
existing approaches. Our results demonstrate the effectiveness of GIM and a
significantly improved performance on GWAS.
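The abstract does not spell out the estimator, so as a rough, hypothetical illustration of MI maximization between imaging and genetic embeddings (a standard InfoNCE-style lower bound, not GIM's actual regularized estimator, whose regularization term is omitted here), a minimal PyTorch sketch could look like this:

```python
import torch
import torch.nn.functional as F

def infonce_mi_lower_bound(z_img, z_gen, temperature=0.1):
    """InfoNCE-style lower bound on MI between paired embeddings.

    z_img, z_gen: (batch, dim) embeddings of the same subjects from an
    imaging encoder and a genetics encoder. Matched rows are positive
    pairs; all other rows in the batch act as negatives.
    """
    z_img = F.normalize(z_img, dim=-1)
    z_gen = F.normalize(z_gen, dim=-1)
    logits = z_img @ z_gen.t() / temperature      # (batch, batch) similarities
    labels = torch.arange(z_img.size(0), device=z_img.device)
    # Maximizing this quantity (i.e., minimizing the cross-entropy)
    # tightens a lower bound on I(image; genetics).
    return -F.cross_entropy(logits, labels)
```

Maximizing this bound pulls matched image/genotype pairs together relative to mismatched pairs within a batch.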
Related papers
- Discriminative Hamiltonian Variational Autoencoder for Accurate Tumor Segmentation in Data-Scarce Regimes [2.8498944632323755]
We propose an end-to-end hybrid architecture for medical image segmentation.
We use Hamiltonian Variational Autoencoders (HVAE) and a discriminative regularization to improve the quality of generated images.
Our architecture operates on a slice-by-slice basis to segment 3D volumes, capitalizing on the richly augmented dataset.
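As a hedged sketch of the slice-by-slice strategy mentioned above (a generic pattern, not the authors' HVAE itself), a 2D model can be swept over a 3D volume like so; `model_2d` is a hypothetical per-slice segmentation network:

```python
import torch

def segment_volume_slicewise(model_2d, volume):
    """Apply a 2D segmentation model to each axial slice of a 3D volume.

    volume: (depth, height, width) tensor; model_2d maps a (1, 1, H, W)
    slice to per-pixel class logits of shape (1, n_classes, H, W).
    """
    masks = []
    with torch.no_grad():
        for z in range(volume.shape[0]):
            slice_ = volume[z].unsqueeze(0).unsqueeze(0)   # (1, 1, H, W)
            logits = model_2d(slice_)
            masks.append(logits.argmax(dim=1).squeeze(0))  # (H, W) labels
    return torch.stack(masks)  # (depth, H, W) segmentation
```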
arXiv Detail & Related papers (2024-06-17T15:42:08Z)
- MGI: Multimodal Contrastive pre-training of Genomic and Medical Imaging [16.325123491357203]
We propose a multimodal pre-training framework that jointly incorporates genomics and medical images for downstream tasks.
We align medical images and genes using a self-supervised contrastive learning approach, combining Mamba as the genetic encoder with a Vision Transformer (ViT) as the medical image encoder.
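A minimal, hypothetical sketch of this two-tower alignment, with placeholder encoders standing in for the Mamba genetic encoder and the ViT image encoder (the real architectures are far larger):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoTowerAligner(nn.Module):
    """Projects image and gene features into a shared space for
    symmetric contrastive alignment (CLIP-style)."""

    def __init__(self, image_encoder, gene_encoder, dim=256):
        super().__init__()
        self.image_encoder = image_encoder   # e.g., a ViT backbone
        self.gene_encoder = gene_encoder     # e.g., a Mamba sequence model
        self.img_proj = nn.LazyLinear(dim)
        self.gene_proj = nn.LazyLinear(dim)

    def forward(self, images, genes, temperature=0.07):
        zi = F.normalize(self.img_proj(self.image_encoder(images)), dim=-1)
        zg = F.normalize(self.gene_proj(self.gene_encoder(genes)), dim=-1)
        logits = zi @ zg.t() / temperature
        targets = torch.arange(zi.size(0), device=zi.device)
        # Symmetric loss: image-to-gene and gene-to-image retrieval.
        return (F.cross_entropy(logits, targets)
                + F.cross_entropy(logits.t(), targets)) / 2
```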
arXiv Detail & Related papers (2024-06-02T06:20:45Z)
- EMIT-Diff: Enhancing Medical Image Segmentation via Text-Guided Diffusion Model [4.057796755073023]
We develop controllable diffusion models for medical image synthesis, called EMIT-Diff.
We leverage recent diffusion probabilistic models to generate realistic and diverse synthetic medical image data.
In our approach, we ensure that the synthesized samples adhere to medically relevant constraints.
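As an illustrative sketch of the underlying diffusion training objective (standard DDPM-style noise prediction with text conditioning, not EMIT-Diff's exact formulation; `denoiser` and its call signature are assumptions):

```python
import torch

def ddpm_training_loss(denoiser, x0, text_emb, alphas_cumprod):
    """One DDPM-style training step with text conditioning.

    x0: clean images (B, C, H, W); text_emb: guidance embeddings;
    alphas_cumprod: precomputed noise schedule of shape (T,).
    """
    B = x0.size(0)
    t = torch.randint(0, alphas_cumprod.size(0), (B,), device=x0.device)
    a_bar = alphas_cumprod[t].view(B, 1, 1, 1)
    noise = torch.randn_like(x0)
    # Forward process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise
    # The network predicts the injected noise, conditioned on text.
    pred = denoiser(x_t, t, text_emb)
    return torch.mean((pred - noise) ** 2)
```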
arXiv Detail & Related papers (2023-10-19T16:18:02Z)
- Unsupervised ensemble-based phenotyping helps enhance the discoverability of genes related to heart morphology [57.25098075813054]
We propose a new framework for gene discovery entitled Unsupervised Phenotype Ensembles.
It builds a redundant yet highly expressive representation by pooling a set of phenotypes learned in an unsupervised manner.
These phenotypes are then analyzed via genome-wide association studies (GWAS), retaining only highly confident and stable associations.
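A hedged toy version of the stability filter described above, with a made-up data layout (the paper's statistical procedure is more involved than a simple vote):

```python
def stable_associations(pvalues_per_phenotype, threshold=5e-8, min_support=0.8):
    """Keep SNPs that reach genome-wide significance in a large
    fraction of the ensemble's learned phenotypes.

    pvalues_per_phenotype: list of {snp_id: p_value} dicts, one per
    phenotype in the ensemble (hypothetical structure).
    """
    n = len(pvalues_per_phenotype)
    counts = {}
    for pvals in pvalues_per_phenotype:
        for snp, p in pvals.items():
            if p < threshold:
                counts[snp] = counts.get(snp, 0) + 1
    return {snp for snp, c in counts.items() if c / n >= min_support}
```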
arXiv Detail & Related papers (2023-01-07T18:36:44Z)
- Differentiable Agent-based Epidemiology [71.81552021144589]
We introduce GradABM: a scalable, differentiable design for agent-based modeling that is amenable to gradient-based learning with automatic differentiation.
GradABM can simulate million-size populations in a few seconds on commodity hardware, integrate with deep neural networks, and ingest heterogeneous data sources.
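As a toy illustration of the differentiable-simulation idea (a compartmental SIR model rather than GradABM's agent-level design), losses on the simulated trajectory backpropagate into the epidemiological parameters:

```python
import torch

def simulate_sir(beta, gamma, s0, i0, steps=100):
    """Differentiable SIR dynamics. beta and gamma are tensors with
    requires_grad=True, so a loss on the trajectory yields gradients
    for the epidemiological parameters via autodiff."""
    s, i = s0, i0
    infections = []
    for _ in range(steps):
        new_inf = beta * s * i          # mass-action transmission
        new_rec = gamma * i
        s = s - new_inf
        i = i + new_inf - new_rec
        infections.append(new_inf)
    return torch.stack(infections)      # per-step new infections

beta = torch.tensor(0.3, requires_grad=True)
gamma = torch.tensor(0.1, requires_grad=True)
```

In this toy setting, `beta` and `gamma` could be fit to an observed case curve with any gradient-based optimizer.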
arXiv Detail & Related papers (2022-07-20T07:32:02Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
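As a generic, hypothetical fragment (not the paper's architecture), cross-modality feature learning commonly encodes each MRI sequence separately and fuses the features before a shared segmentation decoder:

```python
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    """Encode each MRI modality (e.g., T1, T2, FLAIR, T1ce) separately,
    then fuse features for a shared segmentation decoder."""

    def __init__(self, encoders, decoder):
        super().__init__()
        self.encoders = nn.ModuleList(encoders)  # one encoder per modality
        self.decoder = decoder

    def forward(self, modalities):
        # modalities: list of (B, 1, H, W) tensors, one per MRI sequence
        feats = [enc(x) for enc, x in zip(self.encoders, modalities)]
        fused = torch.cat(feats, dim=1)  # channel-wise fusion
        return self.decoder(fused)       # per-pixel tumor logits
```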
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- ContIG: Self-supervised Multimodal Contrastive Learning for Medical Imaging with Genetics [4.907551775445731]
We propose ContIG, a self-supervised method that can learn from large datasets of unlabeled medical images and genetic data.
Our approach aligns images and several genetic modalities in the feature space using a contrastive loss.
We also perform genome-wide association studies on the features learned by our models, uncovering interesting relationships between images and genetic data.
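A simplified, hypothetical version of such a multi-modality contrastive objective (not ContIG's exact loss), summing one InfoNCE term per genetic modality:

```python
import torch
import torch.nn.functional as F

def multimodal_contrastive_loss(z_img, genetic_embeddings, temperature=0.1):
    """Align one image embedding against several genetic modalities
    (e.g., raw SNPs, polygenic scores) by averaging a contrastive
    term per modality. genetic_embeddings: list of (B, D) tensors.
    """
    z_img = F.normalize(z_img, dim=-1)
    total = 0.0
    for z_gen in genetic_embeddings:
        z_gen = F.normalize(z_gen, dim=-1)
        logits = z_img @ z_gen.t() / temperature
        targets = torch.arange(z_img.size(0), device=z_img.device)
        total = total + F.cross_entropy(logits, targets)
    return total / len(genetic_embeddings)
```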
arXiv Detail & Related papers (2021-11-26T11:06:12Z)
- G-MIND: An End-to-End Multimodal Imaging-Genetics Framework for Biomarker Identification and Disease Classification [49.53651166356737]
We propose a novel deep neural network architecture to integrate imaging and genetics data, as guided by diagnosis, that provides interpretable biomarkers.
We have evaluated our model on a population study of schizophrenia that includes two functional MRI (fMRI) paradigms and Single Nucleotide Polymorphism (SNP) data.
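A deliberately small, hypothetical fusion sketch in the spirit of the description above (not G-MIND's actual architecture); the learned encoder weights would be inspected post hoc as crude biomarker importance scores:

```python
import torch
import torch.nn as nn

class ImagingGeneticsClassifier(nn.Module):
    """Encode fMRI-derived features and SNPs into a shared latent
    space, then classify diagnosis from the fused representation."""

    def __init__(self, n_imaging, n_snps, latent=64, n_classes=2):
        super().__init__()
        self.img_enc = nn.Linear(n_imaging, latent)
        self.snp_enc = nn.Linear(n_snps, latent)
        self.classifier = nn.Linear(latent, n_classes)

    def forward(self, imaging, snps):
        # Additive fusion of the two modalities in latent space.
        z = torch.tanh(self.img_enc(imaging) + self.snp_enc(snps))
        return self.classifier(z)  # diagnosis logits
```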
arXiv Detail & Related papers (2021-01-27T19:28:04Z)
- Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, the Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
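The Prototypical Network step itself is standard and compact; a sketch of its inference rule (class prototypes as mean support embeddings, nearest-prototype classification; `encoder` is a placeholder feature extractor):

```python
import torch
import torch.nn.functional as F

def prototypical_predict(encoder, support_x, support_y, query_x, n_classes):
    """Prototypical Network inference: each class is represented by the
    mean embedding (prototype) of its support examples; queries are
    classified by distance to the nearest prototype."""
    z_support = encoder(support_x)                 # (n_support, D)
    z_query = encoder(query_x)                     # (n_query, D)
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0) for c in range(n_classes)
    ])                                             # (n_classes, D)
    dists = torch.cdist(z_query, prototypes)       # Euclidean distances
    return F.log_softmax(-dists, dim=-1)           # log p(class | query)
```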
arXiv Detail & Related papers (2020-09-02T02:50:30Z)