Self-supervised Representation Learning on Electronic Health Records
with Graph Kernel Infomax
- URL: http://arxiv.org/abs/2209.00655v2
- Date: Tue, 20 Feb 2024 23:36:08 GMT
- Authors: Hao-Ren Yao, Nairen Cao, Katina Russell, Der-Chen Chang, Ophir
Frieder, Jeremy Fineman
- Abstract summary: We propose Graph Kernel Infomax, a self-supervised graph kernel learning approach on the graphical representation of EHR.
Unlike the state-of-the-art, we do not change the graph structure to construct augmented views.
Our approach yields performance on clinical downstream tasks that exceeds the state-of-the-art.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning Electronic Health Record (EHR) representations is an
important yet under-explored research topic. It benefits various clinical
decision support applications, e.g., medication outcome prediction or patient
similarity search. Current approaches rely on task-specific label supervision
over vectorized sequential EHR, which is not applicable to large-scale
unsupervised scenarios.
Recently, contrastive learning has shown great success on self-supervised
representation learning problems. However, complex temporality often degrades
the performance. We propose Graph Kernel Infomax, a self-supervised graph
kernel learning approach on the graphical representation of EHR, to overcome
the previous problems. Unlike the state-of-the-art, we do not change the graph
structure to construct augmented views. Instead, we use Kernel Subspace
Augmentation to embed nodes into two geometrically different manifold views.
The entire framework is trained by contrasting nodes and graph representations
on those two manifold views through the commonly used contrastive objectives.
Empirically, using publicly available benchmark EHR datasets, our approach
yields performance on clinical downstream tasks that exceeds the
state-of-the-art. Theoretically, varying the distance metric naturally creates
different views for data augmentation without changing the graph structure.
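The core idea of Kernel Subspace Augmentation — producing two geometrically different views of the same nodes by changing the kernel-induced distance metric rather than the graph structure — can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the RBF kernel against landmark points (a Nystroem-style map), the bandwidth choices, and the plain InfoNCE objective are all assumptions for the sake of a runnable example.

```python
import numpy as np

def kernel_view(X, landmarks, gamma):
    """Map rows of X into a kernel feature space via an RBF kernel against
    landmark points (Nystroem-style). Different gamma values induce
    geometrically different manifold views of the same nodes."""
    d2 = ((X[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def info_nce(view_a, view_b, tau=0.5):
    """InfoNCE contrastive loss: row i of view_a should agree with row i of
    view_b (positive pair) and disagree with all other rows (negatives)."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    logits = a @ b.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 8))                     # node features of one graph
landmarks = X[rng.choice(16, 4, replace=False)]  # hypothetical landmark set

view1 = kernel_view(X, landmarks, gamma=0.1)     # two bandwidths -> two
view2 = kernel_view(X, landmarks, gamma=1.0)     # distance geometries
loss = info_nce(view1, view2)
```

Note how the two views come from the same node features and the same graph: only the kernel bandwidth differs, which is the sense in which the augmentation changes geometry rather than structure.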
Related papers
- Encoding Surgical Videos as Latent Spatiotemporal Graphs for Object and
Anatomy-Driven Reasoning [2.9724186623561435]
We use latent graphs to represent a surgical video in terms of the constituent anatomical structures and tools over time.
We introduce a novel graph-editing module that incorporates prior knowledge and temporal coherence to correct errors in the graph.
arXiv Detail & Related papers (2023-12-11T20:42:27Z) - Dynamic Graph Enhanced Contrastive Learning for Chest X-ray Report
Generation [92.73584302508907]
We propose a knowledge graph with Dynamic structure and nodes to facilitate medical report generation with Contrastive Learning.
In detail, the fundamental structure of our graph is pre-constructed from general knowledge.
Each image feature is integrated with its very own updated graph before being fed into the decoder module for report generation.
arXiv Detail & Related papers (2023-03-18T03:53:43Z) - GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present a masked graph autoencoder GraphMAE that mitigates issues for generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE--a simple graph autoencoder with our careful designs--consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce self-supervised learning mechanism to graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning framework.
Our proposed SCRL method treats graph from two perspectives: topology graph and feature graph.
arXiv Detail & Related papers (2021-08-10T07:53:09Z) - Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
arXiv Detail & Related papers (2020-10-23T18:37:06Z) - Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z) - Cross-Global Attention Graph Kernel Network Prediction of Drug
Prescription [5.132187039529859]
We present an end-to-end, interpretable, deep-learning architecture to learn a graph kernel that predicts the outcome of chronic disease drug prescription.
arXiv Detail & Related papers (2020-08-04T22:36:46Z) - Latent-Graph Learning for Disease Prediction [44.26665239213658]
We show that it is possible to learn a single, optimal graph towards the GCN's downstream task of disease classification.
Unlike commonly employed spectral GCN approaches, our GCN is spatial and inductive, and can thus infer previously unseen patients as well.
arXiv Detail & Related papers (2020-03-27T08:18:01Z) - Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods, unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE) are proposed.
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.