Graph Contrastive Learning for Multi-omics Data
- URL: http://arxiv.org/abs/2301.02242v1
- Date: Tue, 3 Jan 2023 10:03:08 GMT
- Title: Graph Contrastive Learning for Multi-omics Data
- Authors: Nishant Rajadhyaksha and Aarushi Chitkara
- Abstract summary: We present a learning framework named Multi-Omics Graph Contrastive Learner (MOGCL)
We show that pre-training graph models with a contrastive methodology and then fine-tuning them in a supervised manner is an efficient strategy for multi-omics data classification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Advancements in technologies related to working with omics data require novel
computation methods to fully leverage information and help develop a better
understanding of human diseases. This paper studies the effects of introducing
graph contrastive learning to help leverage graph structure and information to
produce better representations for downstream classification tasks for
multi-omics datasets. We present a learning framework named Multi-Omics Graph
Contrastive Learner (MOGCL) which outperforms several approaches for integrating
multi-omics data for supervised learning tasks. We show that pre-training graph
models with a contrastive methodology and then fine-tuning them in a supervised
manner is an efficient strategy for multi-omics data classification.
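The recipe described in the abstract, contrastive pre-training of a graph model followed by supervised fine-tuning, can be sketched briefly. The code below is a minimal illustration assuming PyTorch and PyTorch Geometric, GRACE-style augmentations (feature masking and edge dropping), and a simplified NT-Xent loss; the class and function names are illustrative and are not taken from the authors' MOGCL implementation.

```python
# Minimal sketch: contrastive pre-training of a GNN encoder on one graph,
# followed by supervised fine-tuning with a linear classifier.
# Assumes PyTorch and PyTorch Geometric; names are illustrative, not MOGCL's code.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class Encoder(torch.nn.Module):
    """Two-layer GCN that maps node features to embeddings."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        return self.conv2(h, edge_index)


def augment(x, edge_index, drop_feat=0.2, drop_edge=0.2):
    """Build a stochastic view via feature masking and edge dropping."""
    feat_mask = (torch.rand(x.size(1), device=x.device) > drop_feat).float()
    edge_mask = torch.rand(edge_index.size(1), device=x.device) > drop_edge
    return x * feat_mask, edge_index[:, edge_mask]


def nt_xent(z1, z2, tau=0.5):
    """Simplified NT-Xent: a node's embedding in view 1 should match the same
    node in view 2 and repel all other nodes."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


def pretrain_then_finetune(x, edge_index, y, train_mask, n_classes,
                           pre_epochs=200, ft_epochs=100):
    enc = Encoder(x.size(1), 256, 128)

    # Stage 1: self-supervised contrastive pre-training on two augmented views.
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    for _ in range(pre_epochs):
        opt.zero_grad()
        x1, e1 = augment(x, edge_index)
        x2, e2 = augment(x, edge_index)
        loss = nt_xent(enc(x1, e1), enc(x2, e2))
        loss.backward()
        opt.step()

    # Stage 2: supervised fine-tuning of the encoder plus a linear head
    # on the labelled nodes only.
    clf = torch.nn.Linear(128, n_classes)
    opt = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-3)
    for _ in range(ft_epochs):
        opt.zero_grad()
        logits = clf(enc(x, edge_index))
        loss = F.cross_entropy(logits[train_mask], y[train_mask])
        loss.backward()
        opt.step()
    return enc, clf
```

In the multi-omics setting, x would hold integrated omics features for patient nodes and edge_index a patient similarity graph; both inputs are left abstract here.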
Related papers
- Domain Generalization for Mammographic Image Analysis with Contrastive Learning [62.25104935889111]
Training an efficacious deep learning model requires large amounts of data with diverse styles and qualities.
A novel contrastive learning scheme is developed to equip deep learning models with better style generalization capability.
The proposed method has been evaluated extensively and rigorously with mammograms from various vendor style domains and several public datasets.
arXiv Detail & Related papers (2023-04-20T11:40:21Z)
- Multi-modal Multi-kernel Graph Learning for Autism Prediction and Biomarker Discovery [29.790200009136825]
We propose a novel method to offset the negative impact between modalities in the process of multi-modal integration and extract heterogeneous information from graphs.
Our method is evaluated on the benchmark Autism Brain Imaging Data Exchange (ABIDE) dataset and outperforms the state-of-the-art methods.
In addition, discriminative brain regions associated with autism are identified by our model, providing guidance for the study of autism pathology.
arXiv Detail & Related papers (2023-03-03T07:09:17Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- Multi-modal Graph Learning for Disease Prediction [35.156975779372836]
We propose an end-to-end Multi-modal Graph Learning framework (MMGL) for disease prediction with multi-modality.
Instead of defining the graph manually, the latent graph structure is captured through an effective adaptive graph learning mechanism.
An extensive group of experiments on two disease prediction tasks demonstrates that the proposed MMGL achieves more favorable performance.
arXiv Detail & Related papers (2022-03-11T12:33:20Z)
- Adversarial Graph Contrastive Learning with Information Regularization [51.14695794459399]
Contrastive learning is an effective method in graph representation learning.
Data augmentation on graphs is far less intuitive, and it is much harder to produce high-quality contrastive samples.
We propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL).
It consistently outperforms the current graph contrastive learning methods in the node classification task over various real-world datasets.
arXiv Detail & Related papers (2022-02-14T05:54:48Z)
- InfoGCL: Information-Aware Graph Contrastive Learning [26.683911257080304]
We study how graph information is transformed and transferred during the contrastive learning process.
We propose an information-aware graph contrastive learning framework called InfoGCL.
We show for the first time that all recent graph contrastive learning methods can be unified by our framework.
arXiv Detail & Related papers (2021-10-28T21:10:39Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-economical, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Multi-modal Graph Learning for Disease Prediction [35.4310911850558]
We propose an end-to-end Multimodal Graph Learning framework (MMGL) for disease prediction.
Instead of defining the adjacency matrix manually as existing methods do, the latent graph structure can be captured through a novel adaptive graph learning approach.
arXiv Detail & Related papers (2021-07-01T03:59:22Z)
- Multiple Graph Learning for Scalable Multi-view Clustering [26.846642220480863]
We propose an efficient multiple graph learning model via a small number of anchor points and tensor Schatten p-norm minimization.
Specifically, we construct a hidden and tractable large graph by anchor graph for each view.
We develop an efficient algorithm, which scales linearly with the data size, to solve our proposed model.
arXiv Detail & Related papers (2021-06-29T13:10:56Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method produces clusters that align well with the ground truth.
We also present a clustering algorithm for solving the resulting clustering problems.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.