Mining Electronic Health Records (EHRs) has become a promising research topic because of the rich information they contain. By learning from EHRs, machine learning models can be built to help human experts make medical decisions and thus improve healthcare quality. Recently, many models based on sequential or graph structures have been proposed to achieve this goal. EHRs contain multiple entities and relations and can be viewed as a heterogeneous graph. However, previous studies ignore the heterogeneity in EHRs. On the other hand, current heterogeneous graph neural networks cannot be simply applied to an EHR graph because of the existence of hub nodes in it. To address this issue, we propose the Heterogeneous Similarity Graph Neural Network (HSGNN), which analyzes EHRs with a novel heterogeneous GNN. Our framework consists of two parts: one is a preprocessing method and the other is an end-to-end GNN. The preprocessing method normalizes edges and splits the EHR graph into multiple homogeneous graphs, where each homogeneous graph contains partial information of the original EHR graph. The GNN takes all homogeneous graphs as input and fuses them into one graph to make a prediction. Experimental results show that HSGNN outperforms other baselines in the diagnosis prediction task.
I. INTRODUCTION
The accumulation of large-scale Electronic Health Records (EHRs) provides great opportunities for deep learning applications in healthcare.
Recently, many deep learning models have been applied to medical tasks such as phenotyping [1], [2], medical predictive modeling [3], [4] and medication recommendation [5].
Generally, raw EHRs consist of multiple kinds of features of patients, including demographics, observations, diagnoses, medications, and procedures ordered by time.
For example, Fig.1 shows an example of an EHR graph with two patients and three visit records.
In Fig 1, there are two patients p1 and p2, where p1 has visited the medical provider twice and p2 has visited once (with timestamp recorded).
During a visit, diagnoses or medications may be assigned to the patient.
All medical concepts, such as diagnoses, medications, and procedures, are represented as medical codes, and scientists can easily track them through medical ontologies.
Moreover, because of the variety of medical codes and their relations, EHR can be viewed as a heterogeneous graph with multiple types of nodes and edges.
For example, the prescriptions in EHRs can help make medication recommendations [5], and the phenotypes of patients indicate the distribution of cohorts [6].
With Artificial Intelligence (AI) technologies, scientists can build applications to provide useful suggestions to doctors, or let patients understand their physical conditions better.
To address these issues, some other approaches take EHR as a graph shown in Fig 1, and then use graph neural networks (GNNs) to learn embedding vectors for each node [14], [15], [16], [17].
MiME [17] learns multi-level representations of medical codes based on EHR data in a hierarchical order.
Graph convolutional transformer (GCT) [15] learns the medical representations together with the hidden causal structure of EHR using the “pre-training&fine-tuning” procedure.
Compared with sequential models, these graph-based models are more robust to insufficient data because of the use of structural information: the model can use neighbor information to complete missing entries in the dataset.
Some studies [24] indicate that the reason for over-smoothing is the existence of noise in the graph, which is supported in our case: since gender is not the most informative attribute of a patient (it contains too much noise), introducing it into the graph is not always helpful to the prediction task.
It consists of two parts: the preprocessing step and the end-to-end model.
In the preprocessing step, we first construct the heterogeneous EHR graph, and then split it into multiple homogeneous subgraphs according to the weight assigned to each edge.
By doing so, we eliminate the noise in the original heterogeneous graph while preserving its structural information.
After the preprocessing step, each subgraph contains partial information of the original graph.
Then, in the end-to-end model, we combine all subgraphs into one integrated homogeneous graph Ameta so that it can be input into any general GNN layers to make downstream predictions.
Compared with previous models, HSGNN has these innovations: • To the best of our knowledge, this is the first study that uses a heterogeneous graph structure to represent EHR data, which preserves the most information.
• We use the similarity subgraphs generated from the original heterogeneous graph as input, which is shown to be effective in improving prediction performance.
• We propose an end-to-end model that can jointly learn high-quality graph embeddings based on similarity subgraphs and make accurate predictions.
Currently, Graph Neural Networks (GNNs) have been widely explored to process graph-structure data.
Motivated by convolutional neural networks, Bruna et al. [25] propose graph convolutions in the spectral domain.
Then, Kipf and Welling [26] simplified the previous graph convolution operation and designed the Graph Convolutional Network (GCN) model.
Besides, to inductively generate node embeddings, Hamilton et al. propose the GraphSAGE [27] model to learn node embeddings with sampling and aggregation functions.
MiME [17] and GCT [15] assume that there are some latent causal relations between different kinds of medical codes in EHR.
Based on this assumption, MiME learns multilevel representations in a hierarchical order and GCT can jointly learn the hidden causal structure of EHR while performing predictions.
In GNNs, the existence of these highly visible nodes is one cause of over-smoothing, because they can result in multiple nodes having similar embeddings.
III. METHODS
HSGNN consists of two parts: one is a preprocessing step that splits the heterogeneous graph into multiple subgraphs; the other is an end-to-end graph neural network that takes multiple graphs as input.
Therefore, we introduce meta-paths to process the heterogeneous graph and then calculate similarities between nodes along each meta-path.
Fig. 2: The proposed HSGNN framework.
The heterogeneous EHR graph is preprocessed by calculating SPS along each meta-path (the dashed box) and then input into the end-to-end model (the solid box).
Here we take meta-path V-D-V as an example to explain SPS.
The 1st and 2nd visits of patient 1 have one common diagnosis in total, and therefore the numerator of similarity between them is 1*2=2.
Besides, they have 4 diagnosis neighbors in total, and thus the denominator is 4.
The similarity of these two nodes along meta-path V-D-V is therefore 2/4 = 1/2.
a) Heterogeneous EHR Graph: As shown in the left part of Fig. 1, a heterogeneous EHR graph consists of medical information from all patients.
There are four kinds of nodes in the graph: patient c, visit v, diagnosis d, and medication m. Formally, we use S = C + V + D + M to represent the set of all nodes in the graph, where C, V, D, and M correspond to the sets of patients, visits, diagnoses, and medications.
For each node n ∈ S, we also define a mapping φ(n) ∈ {“C”, “V”, “D”, “M”} to find its type.
b) Meta-path: A meta-path p = t1t2···tn is a sequence where each t ∈ {“C”, “V”, “D”, “M”}.
It can represent a pattern of node types in a given path.
For example, a meta-path “VDV” denotes the pattern of “visit node - diagnosis node - visit node” in the heterogeneous graph, and the path “patient 1’s 1st visit - headache - patient 2’s 1st visit” is an instance of this meta-path.
Inspired by [21], we propose the symmetric PathSim (SPS) used to measure the similarity of a node pair (ni, nj) under a specific meta-path p in the heterogeneous graph.
SPSp(ni, nj) = (PCp(ni, nj) + PCp(nj, ni)) / (PCp(ni, ni) + PCp(nj, nj)). (1)

Basically, when the PC (PathCount) between two nodes is higher, these two nodes tend to have a stronger relation.
However, some nodes may have higher degree but are less important.
For example, a node denoting gender “female” may link to half of the patient nodes in the graph, but the effect of gender on medication is much less than the effect of diagnosis.
To eliminate the influence of nodes with high visibility (degree) and low importance, SPS normalizes the PC with the sum of ni’s and nj’s self-loop counts.
SPS is symmetric, which means SPSp(ni, nj) = SPSp(nj, ni).
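To make Eq. (1) concrete, here is a minimal sketch of SPS along meta-path V-D-V, mirroring the Fig. 2 example in which the two visits share one diagnosis and have four diagnosis links in total (the visit IDs and diagnosis names below are illustrative, not taken from the dataset):

```python
# Hypothetical visit -> diagnosis adjacency mirroring the Fig. 2 example:
# the two visits share one diagnosis and have four diagnosis links in total.
visit_diagnoses = {
    "v1": {"insomnia", "headache"},
    "v2": {"headache", "palpitation"},
}

def path_count_vdv(vi, vj):
    """PathCount PC along meta-path V-D-V: number of paths vi - d - vj,
    i.e. the number of common diagnosis neighbors of vi and vj."""
    return len(visit_diagnoses[vi] & visit_diagnoses[vj])

def sps_vdv(vi, vj):
    """Symmetric PathSim (Eq. 1) along meta-path V-D-V."""
    numerator = path_count_vdv(vi, vj) + path_count_vdv(vj, vi)
    denominator = path_count_vdv(vi, vi) + path_count_vdv(vj, vj)
    return numerator / denominator

print(sps_vdv("v1", "v2"))  # 0.5, matching the 2/4 example above
```

Note that PCp(ni, ni) counts a node's self-loops under the meta-path (here, a visit's number of diagnosis neighbors), which is what damps highly visible nodes.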
In the preprocessing step, we construct the heterogeneous EHR graph and calculate the similarities of all node pairs under a group of meta-paths P = {p1, p2, ..., pK} (the similarity of two nodes is set to 0 if their node types are not applicable to the meta-path).
After this step, we can obtain a series of symmetric similarity matrices A = {A1, A2,··· , AK} where K is both the number of meta-paths and the number of similarity matrices.
The size of each matrix Ai in A is N × N, where N = |S| is the number of nodes.
In this way, the heterogeneous graph is split into multiple homogeneous graphs and each homogeneous graph contains partial information of the original graph.
B. Heterogeneous Similarity Graph Neural Network
The solid box in Fig 2 shows the architecture of our proposed HSGNN.
The preprocessing step derives multiple homogeneous graphs via meta-paths, and we take them as the inputs of HSGNN.
The primary goal of HSGNN is to fuse the homogeneous graphs into one graph Ameta containing true relations between each node pair.
To achieve this goal, suppose the initial node feature matrix is F and the K input graphs are A = {A1, A2,··· , AK}, here we propose several variants of HSGNN.
1) Simple Weighted Sum: A straightforward approach is to use a weighted sum:

Ameta = Σ_{k=1}^{K} wk Ak, (2)
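A sketch of Eq. (2) in NumPy, with fixed scalar values standing in for the trainable weights wk (the matrix sizes are arbitrary):

```python
import numpy as np

# K similarity subgraphs A_1..A_K over N nodes.
K, N = 3, 4
rng = np.random.default_rng(0)
A = []
for _ in range(K):
    a = rng.random((N, N))
    A.append((a + a.T) / 2)  # keep each subgraph symmetric, like SPS matrices

# Fixed values stand in for the trainable scalar weights wk, which sum to 1.
w = np.array([0.5, 0.3, 0.2])

A_meta = sum(w[k] * A[k] for k in range(K))  # Eq. (2)
print(A_meta.shape)  # (4, 4)
```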
Fig. 3: A dissection of Ameta.
where wk is a trainable scalar weight of matrix Ak and Σ_{k=1}^{K} wk = 1.
For example, to predict the condition of a patient, doctors may rely on different medical codes when making decisions.
Since medical codes correspond to different meta-paths, we need to adjust weight scalars on each node pair.
2) Attention Sum: We have a node feature matrix F as the input, and it can help us learn the proper weights of each graph.
Since we want to assign a unique weight for each node pair under each meta-path, the weight tensor can be denoted as W ∈ [0, 1]^{K×N×N}, where each element wkij is the attention weight of node pair (ni, nj) on the k-th meta-path.
Ωatt = {ω1; ω2;··· ; ωK} is the parameter set of the neural network.
After obtaining wkij, we can get Ameta:
Ameta = Σ_{k=1}^{K} Wk ◦ Ak, (4)

where Wk denotes the k-th N×N matrix in W and ◦ denotes element-wise multiplication.
This equation adjusts personalized weights for different node pairs based on node features.
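Under the assumption that the attention weights are normalized across the K meta-paths with a softmax (one plausible way to keep each wkij in [0, 1]), Eq. (4) can be sketched as:

```python
import numpy as np

K, N = 3, 4
rng = np.random.default_rng(0)
A = rng.random((K, N, N))        # stacked similarity subgraphs A_1..A_K
logits = rng.random((K, N, N))   # attention logits; in HSGNN these would come
                                 # from the node features via a small network
# Softmax over the K meta-paths gives a weight in [0, 1] per node pair.
W = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)

A_meta = (W * A).sum(axis=0)     # Eq. (4): sum_k Wk ∘ Ak
print(A_meta.shape)  # (4, 4)
```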
However, this approach fails to improve the performance in the experiments.
The reason is that, the node feature F we use in the experiments is not informative, and thus it can introduce noise into the model, and prevent it from learning meaningful attention weights.
To address this issue, we need to let the node features first learn from A, and then use them to generate meaningful attention weights.
3) Aggregated Attention Sum: After learning from A to obtain a more informative node feature matrix Fmeta, we use Fmeta to generate the attention weights of graph aggregation.
Motivated from [18], in this step we apply GNN on each graph to obtain multiple features for each node.
Formally, for k ∈ {1, 2, ..., K} we have:

Fk^(0) = meta-GNNk(F, Ak), (5)

where meta-GNN can be any kind of GNN layer.
In the next step, to learn the node feature matrix Fmeta, we use

Fmeta = AGGREGATORF([F1^(0), F2^(0), ..., FK^(0)]), (6)

where AGGREGATORF is the aggregation function, which can be a Graph Attention Network (GAT) [33].
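A minimal sketch of Eqs. (5) and (6), assuming each meta-GNNk is a one-layer GCN-style propagation with its own weight matrix and using a plain mean as the aggregator (GAT, as noted above, could replace the mean):

```python
import numpy as np

K, N, d_in, d_out = 3, 4, 5, 2
rng = np.random.default_rng(1)
F = rng.random((N, d_in))        # initial node feature matrix F
A = rng.random((K, N, N))        # similarity subgraphs A_1..A_K
Wk = [rng.random((d_in, d_out)) for _ in range(K)]  # per-meta-path weights

def meta_gnn(A_k, F, W):
    """Eq. (5): one GCN-style propagation with a row-normalized adjacency."""
    A_norm = A_k / A_k.sum(axis=1, keepdims=True)
    return np.tanh(A_norm @ F @ W)

F0 = [meta_gnn(A[k], F, Wk[k]) for k in range(K)]  # Fk^(0), k = 1..K
F_meta = np.mean(F0, axis=0)                       # Eq. (6), mean aggregator
print(F_meta.shape)  # (4, 2)
```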
However, there is a special characteristic of the EHR graph: the numbers of medical code nodes, such as diagnosis and medication nodes, remain almost constant across all EHR graphs.
Raw MIMIC-III data consists of 17 tables, including demographics, laboratory test results, microbiology test results, diagnoses, medications, procedures, medical notes, etc.
For each patient and visit, there is a unique ID to track its corresponding information through tables.
There are extra tables recording the patient-visit relations, demographics and data dictionaries as well.
To build a clean and efficient heterogeneous graph based on these data, we mainly perform the following steps.
a) Data disambiguation: There are more than 1,000 kinds of medications in the original dataset.
Most of them are different abbreviations or preparations of the same medicine.
In the experiment, we disambiguate these medicines by comparing the most common strings in the medication names and finally extract the 304 most common medications.
We use MeSH (Medical Subject Headings) to extract meaningful structural diagnostic information from free text.
We extract entries “medications on admission”, “family history”, “impression”, “chief complaint”, “physical examination on admission” and “history” from the medical notes, and then match words in these entries to MeSH.
International Classification of Diseases (ICD) is a medical ontology which is widely used in healthcare.
In these ontologies, diagnoses and procedures are organized in hierarchical structures and the first several digits denote a high-level concept of the codes.
Dipole uses bidirectional recurrent neural networks and attention mechanism to make predictions.
In this experiment, we use patient conditions at different times in one visit to make the visit-level prediction, and use information of different visits to make patient-level prediction.
We use the same graph structure and meta-paths on this model as HSGNN.
• HetGNN [19].
HetGNN is a heterogeneous graph neural network that introduces a random walk to sample a fixed-size set of heterogeneous neighbors and leverages a neural network architecture with two modules to aggregate the feature information of those sampled neighboring nodes.
• HAN [18]. HAN is a heterogeneous graph neural network based on hierarchical attention, including node-level and semantic-level attention, which learns the importance between a node and its meta-path-based neighbors as well as the importance of different meta-paths.
9) and aggregated attention sum to derive Ameta (Eq. 4). Then a one-layer GCN is applied on Fmeta and Ameta to make final predictions.
• simi-HSGNN: Use PathCount instead of SPS to derive A.
This is to show the effectiveness of SPS.
Fig. 5: Precision and running time of Quick Inference compared with the traditional train-and-test procedure, for visit-level and patient-level prediction (dark green denotes training, brown denotes testing, and purple denotes quick inference).

• sum-HSGNN: Use the simple weighted sum to derive Ameta (Eq. 2). Then a one-layer GCN is applied on F and Ameta to make final predictions.
This is to compare HSGNN with a simpler model to show the benefit of splitting the EHR graph into multiple subgraphs.
• HSGNN-m: Use the mean aggregator to derive Ameta.
Other settings are the same as HSGNN.
This variant is to show the effect of different aggregation functions.
C. Problem Introduction
Diagnosis prediction can be viewed as a multi-label classification problem where we try to predict multiple possible diagnoses for patients or visits.
We conduct both patient level prediction and visit level prediction on the dataset.
As for patient-level prediction, only diagnoses appearing in all visits of a patient are counted as the diagnoses of the patient.
We then split the training and testing sets by removing the corresponding “visit-diagnosis” edges in the graph.
Then, since medication and procedure can be determined by diagnosis, these edges are also removed to prevent data leakage.
D. Experiment Settings
In the experiment, we use the concatenation of feature vectors from different sources as the features of the visits, and then we use them for all baseline models.
In the table, HSGNN and its variant HSGNN-m outperform all other baselines.
We conduct the diagnosis prediction task on the MIMIC-III dataset.
Generally, there are about 10 diagnoses for each visit and 4 visits for each patient.
Therefore, when k increases, the precision may either increase or decrease.
The accuracy of a model approximately reaches its maximum when k = 10 for patient-level diagnosis prediction and k = 15 for visit-level prediction. This is also why we choose a maximum of k = 20.
Therefore, if we focus on the column of k = 15 for visit-level prediction and k = 10 for patient-level prediction, we can find that HSGNN improves by 0.7% and 1.4% on the two tasks, respectively.
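The precision@k metric used throughout this comparison can be sketched as follows (the diagnosis codes and scores are hypothetical):

```python
def precision_at_k(predicted_scores, true_labels, k):
    """Precision@k for one sample in multi-label diagnosis prediction:
    the fraction of the top-k predicted codes that are true diagnoses."""
    top_k = sorted(predicted_scores, key=predicted_scores.get, reverse=True)[:k]
    return sum(1 for code in top_k if code in true_labels) / k

scores = {"d1": 0.9, "d2": 0.8, "d3": 0.3, "d4": 0.7}
truth = {"d1", "d4"}
print(precision_at_k(scores, truth, 2))  # top-2 is d1, d2; one hit -> 0.5
```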
All baselines, together with HSGNN can be classified into three categories: RNN models, homogeneous graph models and heterogeneous graph models.
From the results we can infer that homogeneous graph models (KAME and GCT) perform better than RNN models (Dipole), and heterogeneous graph approaches (MAGNN and HSGNN) perform better than homogeneous approaches.
Therefore, we can infer that compared with using the original input graph, a virtual graph constructed in the model can improve the performance of GNN.
Simi-HSGNN performs worse than HSGNN by around 2% on both tasks, showing that using the normalized similarity measure SPS is essential to achieving better results.
Nevertheless, the performance of sum-HSGNN doesn’t
Fig. 6: T-SNE scatterplots of diagnoses trained by HSGNN, DeepWalk, metapath2vec and GRAM.
HSGNN-m shows the impact of different node aggregators on the model performance.
However, we discover that the influence of aggregators is limited if the size of the embeddings is kept constant.
Therefore, we choose the mean aggregator, which is easier to implement and achieves satisfactory performance, for comparison in the experiments.
F. Performance of Quick Inference
To compare the efficiency and effectiveness of our quick inference method (III.C) with the traditional testing step, we design the following experiment to evaluate its performance.
Firstly, we choose a% of data randomly from the dataset as training and validation samples.
Then we split the remaining 1 − a% samples equally for traditional testing and quick inference.
Secondly, in the preprocessing step, both training samples and testing samples are used to generate the graph.
Then this graph is fed forward to our model.
Finally, when the model is well-trained, we use the quick inference method to predict the remaining (1− a%)/2 samples, and compare its precision and running time to the traditional testing procedure.
In this experiment, we set a = 80%, 70%, 60% respectively.
Fig. 5 shows the results of training performance, testing performance, and quick inference performance under visit-level and patient-level prediction.
As a% decreases, the training, testing, and quick inference precision all decrease.
This is because of the lack of training samples, which makes the model under-fit.
On the other hand, the decrease of training samples means the number of testing samples and quick inferences are increasing.
Therefore, the number of inferences increases.
G. Representation Learning with External Knowledge
HSGNN can learn representations for nodes.
Since many models such as GRAM can learn high quality representations by integrating medical ontologies, we try to test the ability of HSGNN to learn informative representations on the same task.
Here we choose nine categories in the ICD-9 ontology to build the graph.
Since diagnoses in the same category are directly connected and more closely related to each other, an ideal result is that all diagnosis nodes belonging to the same category form a cluster in the visualization.
EHR data is highly heterogeneous and contains high-dimensional temporal data.
To model the intrinsic complexity of EHRs and utilize external medical knowledge, we propose HSGNN framework to learn high quality representations while generating predictions.
ACKNOWLEDGMENT
The corresponding author is Hao Peng.
This work is supported by the Key Research and Development Project of Hebei Province (No. 20310101D), NSFC No. 62002007 and No. 62073012, and in part by NSF under grants III-1763325, III-1909323, IIS-1763365 and SaTC-1930941.
REFERENCES [1] T. Fu, T. N. Hoang, C. Xiao, and J.
参考 [1] T. Fu, T. N. Hoang, C. Xiao, J。
0.69
Sun, “DDL: deep dictionary learning
Sun, “DDL: Deep Dictionary Learning”
0.90
for predictive phenotyping,” in IJCAI, 2019, pp.
とijcai, 2019, pp.で述べている。
0.45
5857–5863.
5857–5863.
0.71
[2] T. Bai, A. K. Chanda, B. L. Egleston, and S. Vucetic, “Ehr phenotyping via jointly embedding medical concepts and words into a unified vector space,” BMC medical informatics and decision making, vol.
A.K. Chanda, B.L. Egleston, S. Vucetic, “Ehr representationtyping through jointly embedded medical concept and words into a unified vector space”, BMC Medical informatics and decision making, vol。 訳抜け防止モード: [2 ]T. Bai, A. K. Chanda, B. L. Egleston, S. Vucetic, “Ehr representationtyping via 医療の概念と言葉を統合されたベクトル空間に共同で埋め込む。 BMC医療情報学と意思決定
0.86
18, no. 4, p. 123, 2018.
18 だめだ 4p.123, 2018。
0.60
[3] X. S. Zhang, F. Tang, H. H. Dodge, J. Zhou, and F. Wang, “Metapred: Meta-learning for clinical risk prediction with limited patient electronic health records,” in SIGKDD, 2019, pp.
X.S. Zhang, F. Tang, H. H. Dodge, J. Zhou, F. Wang, “Metapred: Meta-learning for Clinical Risk Prediction with limited patient Electronic health records”, SIGKDD, 2019, pp。
0.83
2487–2495.
2487–2495.
0.71
[4] Z. C. Lipton, D. C. Kale, C. Elkan, and R. Wetzel, “Learning to diagnose with lstm recurrent neural networks,” arXiv preprint arXiv:1511.03677, 2015.
4] z. c. lipton, d. c. kale, c. elkan, r. wetzel, “learning to diagnostic with lstm recurrent neural networks”, arxiv preprint arxiv:1511.03677, 2015年。
0.77
[5] J. Shang, C. Xiao, T. Ma, H. Li, and J.
J. Shang, C. Xiao, T. Ma, H. Li, J.
0.70
Sun, “Gamenet: Graph augmented memory networks for recommending medication combination,” in AAAI, 2019, pp.
Sun, “Gamenet: Graph augmented memory network for recommending Drug combination”, AAAI, 2019, pp。
0.71
1126–1133.
1126–1133.
0.71
[6] Z. Che and Y. Liu, “Deep learning solutions to computational phenotyping in health care,” in ICDM Workshops.
Z. Che, Y. Liu, “Deep Learning Solution to compute phenotyping in health care”. ICDM Workshops.
0.69
IEEE Computer Society, 2017, pp.
IEEE Computer Society, 2017。
0.60
1100–1109.
1100–1109.
0.71
[Online]. Available: https://doi.org/10.
[オンライン] 利用可能: https://doi.org/10。
0.59
1109/ICDMW.2017.156
1109/ICDMW.2017.156
0.29
[7] X. Cai, J. Gao, K. Y. Ngiam, B. C. Ooi, Y. Zhang, and X. Yuan, “Medical concept embedding with time-aware attention,” in Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018, pp.
X. Cai, J. Gao, K. Y. Ngiam, B. C. Ooi, Y. Zhang, X. Yuan, “Medical concept embedded with time-aware attention” in Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018, pp.
0.90
3984–3990.
3984–3990.
0.71
[8] E. Choi, M. T. Bahadori, E. Searles, C. Coffey, M. Thompson, J. Bost, J. Tejedor-Sojo, and J.
E. Choi, M. T. Bahadori, E. Searles, C. Coffey, M. Thompson, J. Bost, J. Tejedor-Sojo, J.
0.90
Sun, “Multi-layer representation learning for medical concepts,” in SIGKDD, 2016, pp.
sun, “multi-layer representation learning for medical concepts” in sigkdd, 2016 pp. (英語)
0.76
1495–1504.
1495–1504.
0.71
[9] E. Choi, M. T. Bahadori, J.
[9]E. Choi, M. T. Bahadori, J.
0.94
Sun, J. Kulas, A. Schuetz, and W. F. Stewart, “RETAIN: an interpretable predictive model for healthcare using reverse time attention mechanism,” in NeurIPS, 2016, pp.
Sun, J. Kulas, A. Schuetz, W. F. Stewart, “RETAIN: a interpretable predictive model for healthcare using reverse time attention mechanism”. NeurIPS, 2016. pp。
0.83
3504–3512.
3504–3512.
0.71
[10] F. Ma, R. Chitta, J. Zhou, Q. You, T. Sun, and J. Gao, “Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks,” in SIGKDD, 2017, pp. 1903–1911.
[11] Z. C. Lipton, D. C. Kale, C. Elkan, and R. C. Wetzel, “Learning to diagnose with LSTM recurrent neural networks,” in 4th International Conference on Learning Representations, ICLR 2016, 2016.
[12] E. Choi, M. T. Bahadori, A. Schuetz, W. F. Stewart, and J. Sun, “Doctor AI: predicting clinical events via recurrent neural networks,” in Proceedings of the 1st Machine Learning in Health Care, MLHC 2016, vol. 56. JMLR.org, 2016, pp. 301–318.
[13] M. Aczon, D. Ledbetter, L. V. Ho, A. M. Gunny, A. Flynn, J. Williams, and R. C. Wetzel, “Dynamic mortality risk predictions in pediatric critical care using recurrent neural networks,” CoRR, vol. abs/1701.06675, 2017.
[14] F. Ma, Q. You, H. Xiao, R. Chitta, J. Zhou, and J. Gao, “KAME: knowledge-based attention model for diagnosis prediction in healthcare,” in CIKM, 2018, pp. 743–752.
[15] E. Choi, Z. Xu, Y. Li, M. W. Dusenberry, G. Flores, E. Xue, and A. M. Dai, “Learning the graphical structure of electronic health records with graph convolutional transformer,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Press, 2020.
[16] E. Choi, M. T. Bahadori, L. Song, W. F. Stewart, and J. Sun, “GRAM: graph-based attention model for healthcare representation learning,” in KDD, 2017, pp. 787–795.
[17] E. Choi, C. Xiao, W. F. Stewart, and J. Sun, “Mime: Multilevel medical embedding of electronic health records for predictive healthcare,” in NIPS 2018, 2018, pp. 4552–4562.
[18] X. Wang, H. Ji, C. Shi, B. Wang, Y. Ye, P. Cui, and P. S. Yu, “Heterogeneous graph attention network,” in The World Wide Web Conference, WWW 2019. ACM, 2019, pp. 2022–2032.
[19] C. Zhang, D. Song, C. Huang, A. Swami, and N. V. Chawla, “Heterogeneous graph neural network,” in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2019, pp. 793–803.
[20] X. Fu, J. Zhang, Z. Meng, and I. King, “MAGNN: metapath aggregated graph neural network for heterogeneous graph embedding,” in WWW ’20: The Web Conference 2020. ACM / IW3C2, 2020, pp. 2331–2341.
[21] Y. Sun, J. Han, X. Yan, P. S. Yu, and T. Wu, “Pathsim: Meta path-based top-k similarity search in heterogeneous information networks,” PVLDB, vol. 4, no. 11, pp. 992–1003, 2011.
[22] Y. Shi, P. Chan, H. Zhuang, H. Gui, and J. Han, “Prep: Path-based relevance from a probabilistic perspective in heterogeneous information networks,” in Proceedings of the 23rd ACM SIGKDD. ACM, 2017, pp. 425–434.
[23] Q. Li, Z. Han, and X. Wu, “Deeper insights into graph convolutional networks for semi-supervised learning,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Press, 2018, pp. 3538–3545.
[24] D. Chen, Y. Lin, W. Li, P. Li, J. Zhou, and X. Sun, “Measuring and relieving the over-smoothing problem for graph neural networks from the topological view,” in AAAI 2020.
[25] J. Bruna, W. Zaremba, A. Szlam, and Y. LeCun, “Spectral networks and locally connected networks on graphs,” arXiv preprint arXiv:1312.6203, 2013.
[26] T. N. Kipf and M. Welling, “Semi-supervised classification with graph convolutional networks,” in ICLR. OpenReview.net, 2017.
[27] W. Hamilton, Z. Ying, and J. Leskovec, “Inductive representation learning on large graphs,” in Advances in Neural Information Processing Systems, 2017, pp. 1024–1034.
[28] Y. Dou, Z. Liu, L. Sun, Y. Deng, H. Peng, and P. S. Yu, “Enhancing graph neural network-based fraud detectors against camouflaged fraudsters,” in Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 2020, pp. 315–324.
[29] H. Peng, J. Li, Q. Gong, Y. Song, Y. Ning, K. Lai, and P. S. Yu, “Fine-grained event categorization with heterogeneous graph convolutional networks,” IJCAI, 2019.
[30] Z. Liu, X. Li, Z. Fan, S. Guo, K. Achan, and P. S. Yu, “Basket recommendation with multi-intent translation graph neural network,” arXiv preprint arXiv:2010.11419, 2020.
[31] X. Li, M. Zhang, S. Wu, Z. Liu, L. Wang, and P. S. Yu, “Dynamic graph
[32] Y. Gao, L. Xiaoyong, P. Hao, B. Fang, and P. Yu, “Hincti: A cyber threat intelligence modeling and identification system based on heterogeneous information network,” IEEE Transactions on Knowledge and Data Engineering, 2020.
[33] P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph attention networks,” in ICLR, vol. abs/1710.10903, 2018.
[34] Y. Cao, H. Peng, and S. Y. Philip, “Multi-information source hin for medical concept embedding,” in Pacific-Asia Conference on Knowledge Discovery and Data Mining. Springer, 2020, pp. 396–408.
[35] C. Shi, Y. Li, J. Zhang, Y. Sun, and P. S. Yu, “A survey of heterogeneous information network analysis,” IEEE Trans.
[36] L. Zhao and L. Akoglu, “Pairnorm: Tackling oversmoothing in gnns,” in 8th International Conference on Learning Representations, ICLR 2020, 2020.
[37] A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C.-K. Peng, and H. E. Stanley, “Physiobank, physiotoolkit, and physionet: components of a new research resource for complex physiologic signals,” Circulation, vol. 101, no. 23, pp. e215–e220, 2000.
[38] A. E. Johnson, T. J. Pollard, L. Shen, H. L. Li-wei, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark, “Mimic-iii, a freely accessible critical care database,” Scientific Data, vol. 3, p. 160035, 2016.
[39] A. Hosseini, T. Chen, W. Wu, Y. Sun, and M. Sarrafzadeh, “Heteromed: Heterogeneous information network for medical diagnosis,” in CIKM, 2018, pp. 763–772.
[40] P. E. Rauber, A. X. Falcão, and A. C. Telea, “Visualizing time-dependent data using dynamic t-sne,” in EuroVis, 2016, pp. 73–77.
[41] B. Perozzi, R. Al-Rfou, and S. Skiena, “Deepwalk: online learning of social representations,” in SIGKDD, 2014, pp. 701–710.
[42] Y. Dong, N. V. Chawla, and A. Swami, “metapath2vec: Scalable representation learning for heterogeneous networks,” in SIGKDD, 2017, pp.