Graph-based Pyramid Global Context Reasoning with a Saliency-aware
Projection for COVID-19 Lung Infections Segmentation
- URL: http://arxiv.org/abs/2103.04235v1
- Date: Sun, 7 Mar 2021 02:28:10 GMT
- Title: Graph-based Pyramid Global Context Reasoning with a Saliency-aware
Projection for COVID-19 Lung Infections Segmentation
- Authors: Huimin Huang, Ming Cai, Lanfen Lin, Jing Zheng, Xiongwei Mao, Xiaohan
Qian, Zhiyi Peng, Jianying Zhou, Yutaro Iwamoto, Xian-Hua Han, Yen-Wei Chen,
Ruofeng Tong
- Abstract summary: We propose a Graph-based Pyramid Global Context Reasoning (Graph-PGCR) module.
It is capable of modeling long-range dependencies among disjoint infections as well as adapting to size variation.
Our Graph-PGCR module is plug-and-play and can be integrated into any architecture to improve its performance.
- Score: 16.94939282349418
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Coronavirus Disease 2019 (COVID-19) spread rapidly in 2020, prompting a
mass of studies on lung infection segmentation from CT images. Although many
methods have been proposed for this task, it remains challenging because
infections of various sizes appear in different lobe zones. To tackle these
issues, we propose a Graph-based Pyramid Global Context Reasoning (Graph-PGCR)
module, which is capable of modeling long-range dependencies among disjoint
infections as well as adapting to size variation. We first incorporate graph
convolution to exploit long-term contextual information from multiple lobe
zones. Different from previous projections based on average pooling or maximum
object probability, we propose a saliency-aware projection mechanism that picks
up infection-related pixels as a set of graph nodes. After graph reasoning, the
relation-aware features are projected back to the original coordinate space for
the downstream tasks. We further construct multiple graphs with different
sampling rates to handle the size variation problem. To this end, distinct
multi-scale long-range contextual patterns can be captured. Our Graph-PGCR
module is plug-and-play and can be integrated into any architecture to improve
its performance. Experiments demonstrate that the proposed method consistently
boosts the performance of state-of-the-art backbone architectures on both
public and our private COVID-19 datasets.
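To make the abstract concrete, below is a minimal PyTorch sketch of the three ideas it walks through: a saliency-aware projection that selects infection-related pixels as graph nodes, one graph-reasoning step over those nodes, and a pyramid of branches with different node sampling rates. All names (SaliencyGraphReasoning, GraphPGCR, node_counts) and the concrete choices (top-k selection, similarity-based adjacency, residual fusion) are illustrative assumptions, not the authors' released implementation.

```python
# Hedged sketch of a Graph-PGCR-style module; names and design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SaliencyGraphReasoning(nn.Module):
    """One pyramid level: project salient pixels to nodes, reason, re-project."""

    def __init__(self, channels: int, num_nodes: int = 32):
        super().__init__()
        self.num_nodes = num_nodes
        self.saliency = nn.Conv2d(channels, 1, kernel_size=1)  # pixel-wise saliency score
        self.node_fc = nn.Linear(channels, channels)           # node-feature transform (graph "conv")
        self.adj_fc = nn.Linear(channels, channels)            # embedding used to build the adjacency

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        feats = x.flatten(2).transpose(1, 2)                   # (B, HW, C)
        scores = self.saliency(x).flatten(2).squeeze(1)        # (B, HW)

        # Saliency-aware projection: pick the top-k most infection-related pixels as graph nodes.
        k = min(self.num_nodes, h * w)
        idx = scores.topk(k, dim=1).indices                    # (B, K)
        nodes = torch.gather(feats, 1, idx.unsqueeze(-1).expand(-1, -1, c))  # (B, K, C)

        # Graph reasoning: similarity-based adjacency, then one propagation step.
        emb = self.adj_fc(nodes)
        adj = F.softmax(emb @ emb.transpose(1, 2) / c ** 0.5, dim=-1)        # (B, K, K)
        nodes = F.relu(self.node_fc(adj @ nodes))              # relation-aware node features

        # Reverse projection: scatter node features back to their pixel coordinates.
        out = feats.clone()
        out.scatter_(1, idx.unsqueeze(-1).expand(-1, -1, c), nodes)
        return x + out.transpose(1, 2).reshape(b, c, h, w)     # residual, plug-and-play


class GraphPGCR(nn.Module):
    """Pyramid of graph-reasoning branches with different node sampling rates."""

    def __init__(self, channels: int, node_counts=(16, 32, 64)):
        super().__init__()
        self.branches = nn.ModuleList(
            SaliencyGraphReasoning(channels, n) for n in node_counts
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Average the multi-scale relation-aware features so distinct long-range patterns are kept.
        return sum(branch(x) for branch in self.branches) / len(self.branches)


if __name__ == "__main__":
    module = GraphPGCR(channels=64)
    demo = torch.randn(2, 64, 32, 32)                          # e.g. a backbone feature map
    print(module(demo).shape)                                  # torch.Size([2, 64, 32, 32])
```

Because the output keeps the shape of the input feature map and is fused residually, a block like this could in principle be inserted after any backbone stage, which is consistent with the plug-and-play claim in the abstract.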
Related papers
- DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts [70.21017141742763]
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
Existing methods generally use a fixed number of GNN layers to generate representations for all graphs.
We propose the depth adaptive mixture of experts (DA-MoE) method, which incorporates two main improvements to GNNs.
arXiv Detail & Related papers (2024-11-05T11:46:27Z)
- InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z)
- DuoGNN: Topology-aware Graph Neural Network with Homophily and Heterophily Interaction-Decoupling [0.0]
Graph Neural Networks (GNNs) have proven effective in various medical imaging applications, such as automated disease diagnosis.
They inherently suffer from fundamental limitations, including indistinguishable node embeddings due to heterophilic node aggregation.
We propose DuoGNN, a scalable and generalizable architecture which leverages topology to decouple homophilic and heterophilic edges.
arXiv Detail & Related papers (2024-09-29T09:01:22Z)
- MM-GTUNets: Unified Multi-Modal Graph Deep Learning for Brain Disorders Prediction [8.592259720470697]
We propose MM-GTUNets, an end-to-end graph transformer based multi-modal graph deep learning framework for brain disorders prediction.
We introduce Modality Reward Representation Learning (MRRL) which adaptively constructs population graphs using a reward system.
We also propose Adaptive Cross-Modal Graph Learning (ACMGL), which captures critical modality-specific and modality-shared features.
arXiv Detail & Related papers (2024-06-20T16:14:43Z)
- Learning Long Range Dependencies on Graphs via Random Walks [6.7864586321550595]
Message-passing graph neural networks (GNNs) excel at capturing local relationships but struggle with long-range dependencies in graphs.
Graph transformers (GTs) enable global information exchange but often oversimplify the graph structure by representing graphs as sets of fixed-length vectors.
This work introduces a novel architecture that overcomes the shortcomings of both approaches by combining the long-range information of random walks with local message passing.
arXiv Detail & Related papers (2024-06-05T15:36:57Z)
- MUSTANG: Multi-Stain Self-Attention Graph Multiple Instance Learning Pipeline for Histopathology Whole Slide Images [1.127806343149511]
Whole Slide Images (WSIs) present a challenging computer vision task due to their gigapixel size and presence of artefacts.
Real-world clinical datasets tend to come as sets of heterogeneous WSIs with labels present at the patient-level, with poor to no annotations.
Here we propose an end-to-end multi-stain self-attention graph (MUSTANG) multiple instance learning pipeline.
arXiv Detail & Related papers (2023-09-19T14:30:14Z)
- Skeleton-Parted Graph Scattering Networks for 3D Human Motion Prediction [120.08257447708503]
Graph convolutional network based methods that model the body-joints' relations have recently shown great promise in 3D skeleton-based human motion prediction.
We propose a novel skeleton-parted graph scattering network (SPGSN).
SPGSN outperforms state-of-the-art methods by remarkable margins of 13.8%, 9.3% and 2.7% in terms of 3D mean per joint position error (MPJPE) on Human3.6M, CMU Mocap and 3DPW datasets, respectively.
arXiv Detail & Related papers (2022-07-31T05:51:39Z)
- Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z)
- PSGR: Pixel-wise Sparse Graph Reasoning for COVID-19 Pneumonia Segmentation in CT Images [83.26057031236965]
We propose a pixel-wise sparse graph reasoning (PSGR) module to enhance the modeling of long-range dependencies for COVID-19 infected region segmentation in CT images.
The PSGR module avoids imprecise pixel-to-node projections and preserves the inherent information of each pixel for global reasoning.
The solution has been evaluated against four widely-used segmentation models on three public datasets.
arXiv Detail & Related papers (2021-08-09T04:58:23Z)
- Learning Multi-Granular Hypergraphs for Video-Based Person Re-Identification [110.52328716130022]
Video-based person re-identification (re-ID) is an important research topic in computer vision.
We propose a novel graph-based framework, namely Multi-Granular Hypergraph (MGH), to achieve better representational capabilities.
MGH achieves 90.0% top-1 accuracy on MARS, outperforming the state-of-the-art schemes.
arXiv Detail & Related papers (2021-04-30T11:20:02Z)