TomoGraphView: 3D Medical Image Classification with Omnidirectional Slice Representations and Graph Neural Networks
- URL: http://arxiv.org/abs/2511.09605v1
- Date: Fri, 14 Nov 2025 01:01:21 GMT
- Title: TomoGraphView: 3D Medical Image Classification with Omnidirectional Slice Representations and Graph Neural Networks
- Authors: Johannes Kiechle, Stefan M. Fischer, Daniel M. Lang, Cosmin I. Bercea, Matthew J. Nyflot, Lina Felsner, Julia A. Schnabel, Jan C. Peeken,
- Abstract summary: 3D medical image classification remains a challenging task due to the complex spatial relationships and long-range dependencies inherent in volumetric data. Recent studies have highlighted the potential of 2D vision foundation models, originally trained on natural images, as powerful feature extractors for medical image analysis. We propose TomoGraphView, a novel framework that integrates omnidirectional volume slicing with spherical graph-based feature aggregation.
- Score: 2.2906925991630085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing number of medical tomography examinations has necessitated the development of automated methods capable of extracting comprehensive imaging features to facilitate downstream tasks such as tumor characterization, while assisting physicians in managing their growing workload. However, 3D medical image classification remains a challenging task due to the complex spatial relationships and long-range dependencies inherent in volumetric data. Training models from scratch is hampered by low-data regimes, and the absence of 3D large-scale multimodal datasets has limited the development of 3D medical imaging foundation models. Recent studies, however, have highlighted the potential of 2D vision foundation models, originally trained on natural images, as powerful feature extractors for medical image analysis. Despite these advances, existing approaches that apply 2D models to 3D volumes via slice-based decomposition remain suboptimal. Conventional volume slicing strategies, which rely on canonical planes such as axial, sagittal, or coronal, may inadequately capture the spatial extent of target structures when these are misaligned with standardized viewing planes. Furthermore, existing slice-wise aggregation strategies rarely account for preserving the volumetric structure, resulting in a loss of spatial coherence across slices. To overcome these limitations, we propose TomoGraphView, a novel framework that integrates omnidirectional volume slicing with spherical graph-based feature aggregation. We publicly share our code base at http://github.com/compai-lab/2025-MedIA-kiechle and provide a user-friendly library for omnidirectional volume slicing at https://pypi.org/project/OmniSlicer.
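The omnidirectional slicing idea from the abstract can be sketched without the OmniSlicer API: sample roughly uniform plane normals on the unit sphere (here via a Fibonacci lattice) and resample one oblique slice per normal through the volume center. All function names, parameters, and the sampling scheme below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fibonacci_sphere(n):
    """n roughly uniform unit directions on the sphere (Fibonacci lattice)."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i          # golden-angle increments
    z = 1.0 - 2.0 * (i + 0.5) / n                   # uniform in z
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def oblique_slice(volume, normal, size=64):
    """Bilinearly resample the plane through the volume center with the given normal."""
    normal = normal / np.linalg.norm(normal)
    # Build two in-plane axes orthogonal to the plane normal.
    helper = np.array([1.0, 0.0, 0.0])
    if abs(normal @ helper) > 0.9:                   # avoid a degenerate cross product
        helper = np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, helper); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    center = (np.array(volume.shape) - 1) / 2.0
    extent = min(volume.shape) / 2.0
    t = np.linspace(-extent, extent, size)
    uu, vv = np.meshgrid(t, t, indexing="ij")
    coords = center[:, None, None] + u[:, None, None] * uu + v[:, None, None] * vv
    return map_coordinates(volume, coords, order=1, mode="nearest")

vol = np.random.rand(32, 32, 32).astype(np.float32)
dirs = fibonacci_sphere(24)
slices = np.stack([oblique_slice(vol, d) for d in dirs])
print(slices.shape)  # (24, 64, 64)
```

In the paper's framework, each such slice would be encoded by a 2D foundation model; because the slice normals live on a sphere, neighboring directions naturally define the edges of the spherical graph used for feature aggregation.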
Related papers
- Training-Free Zero-Shot Anomaly Detection in 3D Brain MRI with 2D Foundation Models [0.0]
We introduce a fully training-free framework for ZSAD in 3D brain MRI. The framework constructs localized volumetric tokens by aggregating multi-axis slices processed by 2D foundation models. These 3D patch tokens restore cubic spatial context and integrate directly with distance-based, batch-level anomaly detection pipelines.
arXiv Detail & Related papers (2026-02-17T02:46:45Z) - Multimodal Visual Surrogate Compression for Alzheimer's Disease Classification [69.87877580725768]
Multimodal Visual Surrogate Compression (MVSC) learns to compress and adapt large 3D sMRI volumes into compact 2D features. MVSC has two key components: a Volume Context that captures global cross-slice context under textual guidance, and an Adaptive Slice Fusion module that aggregates slice-level information in a text-enhanced, patch-wise manner.
arXiv Detail & Related papers (2026-01-29T13:05:46Z) - Structured Spectral Graph Representation Learning for Multi-label Abnormality Analysis from 3D CT Scans [0.0]
Multi-label classification of 3D Chest CT scans remains a critical yet challenging problem. Existing methods based on 3D convolutional neural networks struggle to capture long-range dependencies. We propose a new graph-based framework that represents 3D CT volumes as structured graphs.
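The graph view of a volume described above can be illustrated with a minimal, hypothetical sketch: slices become nodes carrying feature vectors, adjacent slices share edges, and one mean-aggregation message-passing step mixes each node's features with its neighbors'. Names, shapes, and the chain topology are illustrative, not taken from the paper.

```python
import numpy as np

def message_pass(features, adjacency):
    """One GNN step: average each node's neighborhood (including itself)."""
    a = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    deg = a.sum(axis=1, keepdims=True)           # per-node neighborhood size
    return (a @ features) / deg                  # row-normalized mean aggregation

n_slices, dim = 8, 4
feats = np.random.rand(n_slices, dim)            # one feature vector per slice node
adj = np.zeros((n_slices, n_slices))
idx = np.arange(n_slices - 1)
adj[idx, idx + 1] = adj[idx + 1, idx] = 1        # chain graph: adjacent slices connected
out = message_pass(feats, adj)
print(out.shape)  # (8, 4)
```

Because aggregation is a convex combination of node features, the output stays within the range of the inputs; stacking several such steps (with learned weights) yields the long-range context that plain per-slice models miss.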
arXiv Detail & Related papers (2025-10-12T19:49:51Z) - Neural Image Unfolding: Flattening Sparse Anatomical Structures using Neural Fields [6.5082099033254135]
Tomographic imaging reveals internal structures of 3D objects and is crucial for medical diagnoses. Various organ-specific unfolding techniques exist to map their densely sampled 3D surfaces to a distortion-minimized 2D representation. We deploy a neural field to fit the transformation of the anatomy of interest to a 2D overview image.
arXiv Detail & Related papers (2024-11-27T14:58:49Z) - E3D-GPT: Enhanced 3D Visual Foundation for Medical Vision-Language Model [23.56751925900571]
The development of 3D medical vision-language models holds significant potential for disease diagnosis and patient treatment.
We utilize self-supervised learning to construct a 3D visual foundation model for extracting 3D visual features.
We apply 3D spatial convolutions to aggregate and project high-level image features, reducing computational complexity.
Our model demonstrates superior performance compared to existing methods in report generation, visual question answering, and disease diagnosis.
arXiv Detail & Related papers (2024-10-18T06:31:40Z) - Generative Enhancement for 3D Medical Images [74.17066529847546]
We propose GEM-3D, a novel generative approach to the synthesis of 3D medical images.
Our method begins with a 2D slice, noted as the informed slice to serve the patient prior, and propagates the generation process using a 3D segmentation mask.
By decomposing the 3D medical images into masks and patient prior information, GEM-3D offers a flexible yet effective solution for generating versatile 3D images.
arXiv Detail & Related papers (2024-03-19T15:57:04Z) - On the Localization of Ultrasound Image Slices within Point Distribution Models [84.27083443424408]
Thyroid disorders are most commonly diagnosed using high-resolution Ultrasound (US).
Longitudinal tracking is a pivotal diagnostic protocol for monitoring changes in pathological thyroid morphology.
We present a framework for automated US image slice localization within a 3D shape representation.
arXiv Detail & Related papers (2023-09-01T10:10:46Z) - Spatiotemporal Modeling Encounters 3D Medical Image Analysis: Slice-Shift UNet with Multi-View Fusion [0.0]
We propose a new 2D-based model dubbed Slice-Shift UNet, which encodes three-dimensional features at a 2D CNN's complexity.
More precisely, multi-view features are collaboratively learned by performing 2D convolutions along the three planes of a volume.
The effectiveness of our approach is validated on the Multi-Modality Abdominal Multi-Organ Segmentation (AMOS) and Multi-Atlas Labeling Beyond the Cranial Vault (BTCV) datasets.
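The per-plane 2D convolution idea can be sketched in NumPy/SciPy: a 2D kernel embedded flat along one axis of a 3D convolution is equivalent to applying that 2D convolution slice-by-slice along that axis. This is a hedged illustration of the multi-view principle, not the Slice-Shift UNet implementation.

```python
import numpy as np
from scipy.ndimage import convolve

def multi_view_conv(volume, kernel2d):
    """Apply one shared 2D kernel along the axial, coronal, and sagittal
    planes of a volume and average the three responses (multi-view fusion)."""
    k = np.asarray(kernel2d)
    views = [
        convolve(volume, k[None, :, :], mode="nearest"),  # axial slices: (y, x)
        convolve(volume, k[:, None, :], mode="nearest"),  # coronal slices: (z, x)
        convolve(volume, k[:, :, None], mode="nearest"),  # sagittal slices: (z, y)
    ]
    return np.mean(views, axis=0)

vol = np.random.rand(16, 16, 16)
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
fused = multi_view_conv(vol, laplacian)
print(fused.shape)  # (16, 16, 16)
```

Averaging is the simplest fusion choice; a learned fusion (e.g. 1x1x1 convolutions over the stacked views) would be the trainable analogue.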
arXiv Detail & Related papers (2023-07-24T14:53:23Z) - Interpretable 2D Vision Models for 3D Medical Images [47.75089895500738]
This study proposes a simple approach of adapting 2D networks with an intermediate feature representation for processing 3D images.
We show on all 3D MedMNIST datasets as benchmark and two real-world datasets consisting of several hundred high-resolution CT or MRI scans that our approach performs on par with existing methods.
arXiv Detail & Related papers (2023-07-13T08:27:09Z) - LVM-Med: Learning Large-Scale Self-Supervised Vision Models for Medical Imaging via Second-order Graph Matching [59.01894976615714]
We introduce LVM-Med, the first family of deep networks trained on large-scale medical datasets.
We have collected approximately 1.3 million medical images from 55 publicly available datasets.
LVM-Med empirically outperforms a number of state-of-the-art supervised, self-supervised, and foundation models.
arXiv Detail & Related papers (2023-06-20T22:21:34Z) - Weakly Supervised Volumetric Image Segmentation with Deformed Templates [80.04326168716493]
We propose an approach that is truly weakly supervised in the sense that we only need to provide a sparse set of 3D points on the surface of target objects.
We show that it outperforms a more traditional approach to weak supervision in 3D at a reduced supervision cost.
arXiv Detail & Related papers (2021-06-07T22:09:34Z) - VC-Net: Deep Volume-Composition Networks for Segmentation and Visualization of Highly Sparse and Noisy Image Data [13.805816310795256]
We present an end-to-end deep learning method, VC-Net, for robust extraction of 3D microvasculature.
The core novelty is to automatically leverage the maximum intensity projection (MIP) volume visualization technique to enhance 3D data exploration.
A multi-stream convolutional neural network is proposed to learn the 3D volume and 2D MIP features respectively and then explore their inter-dependencies in a joint volume-composition embedding space.
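Deriving the 2D MIP inputs for such a multi-stream setup is straightforward to sketch: project the volume's maximum intensity along each canonical axis, yielding one 2D image per axis for the 2D branch while the 3D branch consumes the raw volume. This is an illustrative sketch, not VC-Net's actual code.

```python
import numpy as np

def mip_views(volume):
    """Maximum intensity projections along the three canonical axes."""
    return [volume.max(axis=a) for a in range(3)]

vol = np.random.rand(8, 16, 24)                  # (z, y, x) volume
views = mip_views(vol)
print([v.shape for v in views])  # [(16, 24), (8, 24), (8, 16)]
```

MIP keeps the brightest structure along each ray, which is why it suits sparse, high-intensity targets such as contrast-enhanced vasculature.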
arXiv Detail & Related papers (2020-09-14T04:15:02Z) - Hierarchical Amortized Training for Memory-efficient High Resolution 3D GAN [52.851990439671475]
We propose a novel end-to-end GAN architecture that can generate high-resolution 3D images.
We achieve this goal by using different configurations between training and inference.
Experiments on 3D thorax CT and brain MRI demonstrate that our approach outperforms the state of the art in image generation.
arXiv Detail & Related papers (2020-08-05T02:33:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.