iPac: Incorporating Intra-image Patch Context into Graph Neural Networks for Medical Image Classification
- URL: http://arxiv.org/abs/2510.23504v1
- Date: Mon, 27 Oct 2025 16:37:16 GMT
- Title: iPac: Incorporating Intra-image Patch Context into Graph Neural Networks for Medical Image Classification
- Authors: Usama Zidan, Mohamed Gaber, Mohammed M. Abdelsamea
- Abstract summary: iPac is a novel approach that introduces a new graph representation of images to enhance graph neural network image classification. iPac integrates various stages, including patch partitioning, feature extraction, clustering, graph construction, and graph-based learning. Our approach offers a versatile and generic solution for image classification, particularly in the realm of medical images.
- Score: 2.2940141855172036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks have emerged as a promising paradigm for image processing, yet their performance in image classification tasks is hindered by a limited consideration of the underlying structure and relationships among visual entities. This work presents iPac, a novel approach to introduce a new graph representation of images to enhance graph neural network image classification by recognizing the importance of underlying structure and relationships in medical image classification. iPac integrates various stages, including patch partitioning, feature extraction, clustering, graph construction, and graph-based learning, into a unified network to advance graph neural network image classification. By capturing relevant features and organising them into clusters, we construct a meaningful graph representation that effectively encapsulates the semantics of the image. Experimental evaluation on diverse medical image datasets demonstrates the efficacy of iPac, exhibiting an average accuracy improvement of up to 5% over baseline methods. Our approach offers a versatile and generic solution for image classification, particularly in the realm of medical images, by leveraging the graph representation and accounting for the inherent structure and relationships among visual entities.
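The pipeline the abstract describes (patch partitioning, feature extraction, clustering, graph construction) can be sketched as follows. This is a minimal illustrative reconstruction, not the authors' implementation: flattened pixel values stand in for learned CNN features, plain Lloyd's k-means stands in for the paper's clustering stage, and edge weights counting spatially adjacent patch pairs are one plausible way to encode intra-image patch context.

```python
import numpy as np

def image_to_cluster_graph(image, patch=8, n_clusters=4, iters=20, seed=0):
    """Sketch of an iPac-style pipeline: patchify -> features -> cluster -> graph."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    rows, cols = H // patch, W // patch
    # 1) Patch partitioning: non-overlapping patch x patch tiles.
    patches = [image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
               for r in range(rows) for c in range(cols)]
    # 2) Feature extraction: raw flattened pixels stand in for the
    #    learned embeddings used in the paper.
    feats = np.stack([p.reshape(-1) for p in patches]).astype(float)
    # 3) Clustering (plain Lloyd's k-means): clusters become graph nodes.
    centroids = feats[rng.choice(len(feats), size=n_clusters, replace=False)].copy()
    for _ in range(iters):
        labels = np.linalg.norm(feats[:, None] - centroids[None], axis=2).argmin(axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = feats[labels == k].mean(axis=0)
    # 4) Graph construction: edge weight between two cluster nodes = number
    #    of spatially adjacent patch pairs connecting them (a simple stand-in
    #    for intra-image patch context).
    A = np.zeros((n_clusters, n_clusters))
    for idx, lab in enumerate(labels):
        r, c = divmod(idx, cols)
        if c + 1 < cols:                      # right neighbour
            A[lab, labels[idx + 1]] += 1
            A[labels[idx + 1], lab] += 1
        if r + 1 < rows:                      # bottom neighbour
            A[lab, labels[idx + cols]] += 1
            A[labels[idx + cols], lab] += 1
    return centroids, A  # node features and weighted adjacency

# Usage on a toy grayscale image: 32x32 image, 8x8 patches -> 16 patches,
# 4 cluster nodes with 64-dimensional features.
img = np.random.default_rng(1).random((32, 32))
nodes, adjacency = image_to_cluster_graph(img)
```

The resulting `(nodes, adjacency)` pair is the kind of input a graph neural network classifier would consume in the final graph-based learning stage.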
Related papers
- Fast Graph Neural Network for Image Classification [0.0]
This study introduces a novel approach that integrates Graph Convolutional Networks (GCNs) with Voronoi diagrams to enhance image classification. The proposed model achieves significant improvements in both preprocessing efficiency and classification accuracy across various benchmark datasets.
arXiv Detail & Related papers (2025-08-20T17:57:59Z) - A Graph-Based Framework for Interpretable Whole Slide Image Analysis [86.37618055724441]
We develop a framework that transforms whole-slide images into biologically-informed graph representations. Our approach builds graph nodes from tissue regions that respect natural structures, not arbitrary grids. We demonstrate strong performance on challenging cancer staging and survival prediction tasks.
arXiv Detail & Related papers (2025-03-14T20:15:04Z) - UnSegGNet: Unsupervised Image Segmentation using Graph Neural Networks [9.268228808049951]
This research contributes to the broader field of unsupervised medical imaging and computer vision.
It presents an innovative methodology for image segmentation that aligns with real-world challenges.
The proposed method holds promise for diverse applications, including medical imaging, remote sensing, and object recognition.
arXiv Detail & Related papers (2024-05-09T19:02:00Z) - Graph Relation Distillation for Efficient Biomedical Instance Segmentation [80.51124447333493]
We propose a graph relation distillation approach for efficient biomedical instance segmentation.
We introduce two graph distillation schemes deployed at both the intra-image level and the inter-image level.
Experimental results on a number of biomedical datasets validate the effectiveness of our approach.
arXiv Detail & Related papers (2024-01-12T04:41:23Z) - Patch-wise Graph Contrastive Learning for Image Translation [69.85040887753729]
We exploit the graph neural network to capture the topology-aware features.
We construct the graph based on the patch-wise similarity from a pretrained encoder.
In order to capture the hierarchical semantic structure, we propose the graph pooling.
arXiv Detail & Related papers (2023-12-13T15:45:19Z) - Two Stream Scene Understanding on Graph Embedding [4.78180589767256]
The paper presents a novel two-stream network architecture for enhancing scene understanding in computer vision.
The graph feature stream network comprises a segmentation structure, scene graph generation, and a graph representation module.
Experiments conducted on the ADE20K dataset demonstrate the effectiveness of the proposed two-stream network in improving image classification accuracy.
arXiv Detail & Related papers (2023-11-12T05:57:56Z) - A Comparative Study of Population-Graph Construction Methods and Graph Neural Networks for Brain Age Regression [48.97251676778599]
In medical imaging, population graphs have demonstrated promising results, mostly for classification tasks.
Extracting population graphs is a non-trivial task and can significantly impact the performance of Graph Neural Networks (GNNs).
In this work, we highlight the importance of a meaningful graph construction and experiment with different population-graph construction methods.
arXiv Detail & Related papers (2023-09-26T10:30:45Z) - Multimodal brain age estimation using interpretable adaptive population-graph learning [58.99653132076496]
We propose a framework that learns a population graph structure optimized for the downstream task.
An attention mechanism assigns weights to a set of imaging and non-imaging features.
By visualizing the attention weights that were the most important for the graph construction, we increase the interpretability of the graph.
arXiv Detail & Related papers (2023-07-10T15:35:31Z) - Graph Self-Supervised Learning for Endoscopic Image Matching [1.8275108630751844]
We propose a novel self-supervised approach that combines Convolutional Neural Networks for capturing local visual appearance and attention-based Graph Neural Networks for modeling spatial relationships between key-points.
Our approach is trained in a fully self-supervised scheme without the need for labeled data.
Our approach outperforms state-of-the-art handcrafted and deep learning-based methods, demonstrating exceptional performance in terms of precision rate (1) and matching score (99.3%).
arXiv Detail & Related papers (2023-06-19T19:53:41Z) - Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality [50.48859793121308]
Contrastively trained vision-language models have achieved remarkable progress in vision and language representation learning.
Recent research has highlighted severe limitations in their ability to perform compositional reasoning over objects, attributes, and relations.
arXiv Detail & Related papers (2023-05-23T08:28:38Z) - Graph Neural Networks for Unsupervised Domain Adaptation of Histopathological Image Analytics [22.04114134677181]
We present a novel method for the unsupervised domain adaptation for histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.