TaGAT: Topology-Aware Graph Attention Network For Multi-modal Retinal Image Fusion
- URL: http://arxiv.org/abs/2407.14188v1
- Date: Fri, 19 Jul 2024 10:32:06 GMT
- Title: TaGAT: Topology-Aware Graph Attention Network For Multi-modal Retinal Image Fusion
- Authors: Xin Tian, Nantheera Anantrasirichai, Lindsay Nicholson, Alin Achim
- Abstract summary: We propose the Topology-Aware Graph Attention Network (TaGAT) for multi-modal retinal image fusion.
Our model outperforms state-of-the-art methods in Fluorescein Fundus Angiography (FFA) with Color Fundus (CF) and Optical Coherence Tomography (OCT) with confocal microscopy retinal image fusion.
- Score: 11.321411104729002
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the realm of medical image fusion, integrating information from various modalities is crucial for improving diagnostics and treatment planning, especially in retinal health, where important features manifest differently across imaging modalities. Existing deep learning-based approaches pay insufficient attention to retinal image fusion and thus fail to preserve enough anatomical structure and fine vessel detail. To address this, we propose the Topology-Aware Graph Attention Network (TaGAT) for multi-modal retinal image fusion, leveraging a novel Topology-Aware Encoder (TAE) with Graph Attention Networks (GAT) to effectively enhance spatial features with the graph topology of the retinal vasculature across modalities. The TAE encodes the base and detail features, extracted from retinal images via a Long-short Range (LSR) encoder, into the graph extracted from the retinal vessels. Within the TAE, the GAT-based Graph Information Update (GIU) block dynamically refines and aggregates the node features to generate topology-aware graph features. The updated graph features are then combined with the base and detail features and decoded into a fused image. Our model outperforms state-of-the-art methods in Fluorescein Fundus Angiography (FFA) with Color Fundus (CF) and Optical Coherence Tomography (OCT) with confocal microscopy retinal image fusion. The source code can be accessed via https://github.com/xintian-99/TaGAT.
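The GIU block described above is GAT-based: each vessel-graph node attends over its neighbors and aggregates their projected features. As a rough illustration only (not the authors' implementation; the function name, single attention head, and toy graph are all assumptions), a minimal single-head graph attention update in the style of Veličković et al. can be sketched as:

```python
import numpy as np

def gat_update(node_feats, adj, W, a, leaky_slope=0.2):
    """Simplified single-head graph attention update over a vessel graph.
    node_feats: (N, F_in) node features; adj: (N, N) adjacency with self-loops;
    W: (F_in, F_out) projection; a: (2*F_out,) attention vector."""
    h = node_feats @ W                          # project node features
    f_out = h.shape[1]
    # attention logits e_ij = LeakyReLU(a^T [h_i || h_j]), via broadcasting
    logits = (h @ a[:f_out])[:, None] + (h @ a[f_out:])[None, :]
    logits = np.where(logits > 0, logits, leaky_slope * logits)
    logits = np.where(adj > 0, logits, -1e9)    # mask out non-neighbors
    # softmax over each node's neighborhood
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ h)                   # aggregate and apply nonlinearity

# toy vessel graph: 3 nodes in a chain, self-loops included
adj = np.array([[1, 1, 0], [1, 1, 1], [0, 1, 1]], dtype=float)
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 8))
a = rng.normal(size=(16,))
out = gat_update(x, adj, W, a)
print(out.shape)  # (3, 8): refined, topology-aware node features
```

In TaGAT these refined node features would correspond to the topology-aware graph features that are fused back with the LSR encoder's base and detail features before decoding; the sketch only shows the per-node attention and aggregation step.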
Related papers
- Progressive Retinal Image Registration via Global and Local Deformable Transformations [49.032894312826244]
We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods.
arXiv Detail & Related papers (2024-09-02T08:43:50Z) - Mew: Multiplexed Immunofluorescence Image Analysis through an Efficient Multiplex Network [84.88767228835928]
We introduce Mew, a novel framework designed to efficiently process mIF images through the lens of multiplex network.
Mew innovatively constructs a multiplex network comprising two distinct layers: a Voronoi network for geometric information and a Cell-type network for capturing cell-wise homogeneity.
This framework equips a scalable and efficient Graph Neural Network (GNN), capable of processing the entire graph during training.
arXiv Detail & Related papers (2024-07-25T08:22:30Z) - Learning Multimodal Volumetric Features for Large-Scale Neuron Tracing [72.45257414889478]
We aim to reduce human workload by predicting connectivity between over-segmented neuron pieces.
We first construct a dataset, named FlyTracing, that contains millions of pairwise connections of segments expanding the whole fly brain.
We propose a novel connectivity-aware contrastive learning method to generate dense volumetric EM image embedding.
arXiv Detail & Related papers (2024-01-05T19:45:12Z) - Learning a Graph Neural Network with Cross Modality Interaction for Image Fusion [23.296468921842948]
Infrared and visible image fusion has gradually proved to be a vital fork in the field of multi-modality imaging technologies.
We propose an interactive graph neural network (GNN)-based architecture between cross modality for fusion, called IGNet.
Our IGNet can generate visually appealing fused images while scoring on average 2.59% higher mAP@.5 in detection and 7.77% higher mIoU in segmentation.
arXiv Detail & Related papers (2023-08-07T02:25:06Z) - Multi-Scale Relational Graph Convolutional Network for Multiple Instance Learning in Histopathology Images [2.6663738081163726]
We introduce the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method.
We model histopathology image patches and their relation with neighboring patches and patches at other scales as a graph.
We experiment on prostate cancer histopathology images to predict magnification groups based on the extracted features from patches.
arXiv Detail & Related papers (2022-12-17T02:26:42Z) - CoCoNet: Coupled Contrastive Learning Network with Multi-level Feature Ensemble for Multi-modality Image Fusion [72.8898811120795]
We propose a coupled contrastive learning network, dubbed CoCoNet, to realize infrared and visible image fusion.
Our method achieves state-of-the-art (SOTA) performance under both subjective and objective evaluation.
arXiv Detail & Related papers (2022-11-20T12:02:07Z) - Affinity Feature Strengthening for Accurate, Complete and Robust Vessel Segmentation [48.638327652506284]
Vessel segmentation is crucial in many medical image applications, such as detecting coronary stenoses, retinal vessel diseases and brain aneurysms.
We present a novel approach, the affinity feature strengthening network (AFN), which jointly models geometry and refines pixel-wise segmentation features using a contrast-insensitive, multiscale affinity approach.
arXiv Detail & Related papers (2022-11-12T05:39:17Z) - Multi-modal Retinal Image Registration Using a Keypoint-Based Vessel Structure Aligning Network [9.988115865060589]
We propose an end-to-end trainable deep learning method for multi-modal retinal image registration.
Our method extracts convolutional features from the vessel structure for keypoint detection and description.
The keypoint detection and description network and graph neural network are jointly trained in a self-supervised manner.
arXiv Detail & Related papers (2022-07-21T14:36:51Z) - A novel approach for glaucoma classification by wavelet neural networks using graph-based, statistical features of qualitatively improved images [0.0]
We have proposed a new glaucoma classification approach that employs a wavelet neural network (WNN) on optimally enhanced retinal images features.
The performance of the WNN classifier is compared with multilayer perceptron neural networks with various datasets.
arXiv Detail & Related papers (2022-06-24T06:19:30Z) - A Keypoint Detection and Description Network Based on the Vessel Structure for Multi-Modal Retinal Image Registration [0.0]
Multiple images with different modalities or acquisition times are often analyzed for the diagnosis of retinal diseases.
Our method uses a convolutional neural network to extract features of the vessel structure in multi-modal retinal images.
arXiv Detail & Related papers (2022-01-06T20:43:35Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.