Multi-Scale Representation of Follicular Lymphoma Pathology Images in a Single Hyperbolic Space
- URL: http://arxiv.org/abs/2506.18523v1
- Date: Mon, 23 Jun 2025 11:25:55 GMT
- Title: Multi-Scale Representation of Follicular Lymphoma Pathology Images in a Single Hyperbolic Space
- Authors: Kei Taguchi, Kazumasa Ohara, Tatsuya Yokota, Hiroaki Miyoshi, Noriaki Hashimoto, Ichiro Takeuchi, Hidekata Hontani,
- Abstract summary: We propose a method for representing malignant lymphoma pathology images using self-supervised learning. To capture morphological changes that occur across scales during disease progression, our approach embeds tissue and corresponding nucleus images close to each other.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method for representing malignant lymphoma pathology images, from high-resolution cell nuclei to low-resolution tissue images, within a single hyperbolic space using self-supervised learning. To capture morphological changes that occur across scales during disease progression, our approach embeds tissue and corresponding nucleus images close to each other based on inclusion relationships. Using the Poincaré ball as the feature space enables effective encoding of this hierarchical structure. The learned representations capture both disease state and cell type variations.
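The abstract's key design choice is using the Poincaré ball as the embedding space: distances grow exponentially toward the boundary, so tree-like inclusion hierarchies (tissue regions containing nuclei) fit with low distortion. The paper's own model is not reproduced here; the following is a minimal sketch of the standard Poincaré-ball geodesic distance, with illustrative point placements chosen purely for this example.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance between two points inside the unit Poincare ball.

    d(u, v) = arccosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    """
    sq = np.sum((u - v) ** 2)
    nu = np.sum(u ** 2)
    nv = np.sum(v ** 2)
    arg = 1.0 + 2.0 * sq / max((1.0 - nu) * (1.0 - nv), eps)
    return np.arccosh(arg)

# Near the origin the metric is almost Euclidean; near the boundary
# points become exponentially far apart. A hierarchy can exploit this:
# coarse-scale items sit near the origin, fine-scale items near the rim.
tissue = np.array([0.1, 0.0])    # hypothetical coarse-scale embedding
nucleus = np.array([0.85, 0.0])  # hypothetical fine-scale embedding
print(poincare_distance(tissue, nucleus))
```

Note how the same Euclidean step of 0.4 costs far more hyperbolic distance near the boundary than near the origin; this asymmetry is what lets a single space host multiple scales.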
Related papers
- Plasticine: A Traceable Diffusion Model for Medical Image Translation [79.39689106440389]
We propose Plasticine, to the best of our knowledge, the first end-to-end image-to-image translation framework explicitly designed with traceability as a core objective. Our method combines intensity translation and spatial transformation within a denoising diffusion framework. This design enables the generation of synthetic images with interpretable intensity transitions and spatially coherent deformations, supporting pixel-wise traceability throughout the translation process.
arXiv Detail & Related papers (2025-12-20T18:01:57Z)
- MorphGen: Morphology-Guided Representation Learning for Robust Single-Domain Generalization in Histopathological Cancer Classification [7.220226391639059]
Domain generalization in computational histopathology is hindered by heterogeneity in whole slide images. We propose MorphGen, a method that integrates histopathology images, augmentations, and nuclear segmentation masks. We demonstrate the resilience of the learned representations to image corruptions (such as staining artifacts) and adversarial attacks.
arXiv Detail & Related papers (2025-08-30T01:59:19Z)
- Integrating Pathology Foundation Models and Spatial Transcriptomics for Cellular Decomposition from Histology Images [0.0]
We propose a lightweight and training-efficient approach to predict cellular composition directly from histology images. By training a lightweight multi-layer perceptron (MLP) regressor on cell-type abundances derived via cell2location, our method efficiently distills knowledge from pathology foundation models.
arXiv Detail & Related papers (2025-07-09T16:43:04Z)
- Causal Disentanglement for Robust Long-tail Medical Image Generation [80.15257897500578]
We propose a novel medical image generation framework, which generates independent pathological and structural features. We leverage a diffusion model guided by pathological findings to model pathological features, enabling the generation of diverse counterfactual images.
arXiv Detail & Related papers (2025-04-20T01:54:18Z)
- Progressive Retinal Image Registration via Global and Local Deformable Transformations [49.032894312826244]
We propose a hybrid registration framework called HybridRetina.
We use a keypoint detector and a deformation network called GAMorph to estimate the global transformation and local deformable transformation.
Experiments on two widely-used datasets, FIRE and FLoRI21, show that our proposed HybridRetina significantly outperforms some state-of-the-art methods.
arXiv Detail & Related papers (2024-09-02T08:43:50Z)
- Revisiting Adaptive Cellular Recognition Under Domain Shifts: A Contextual Correspondence View [49.03501451546763]
We identify the importance of implicit correspondences across biological contexts for exploiting domain-invariant pathological composition.
We propose self-adaptive dynamic distillation to secure instance-aware trade-offs across different model constituents.
arXiv Detail & Related papers (2024-07-14T04:41:16Z)
- Diffusion Models for Counterfactual Generation and Anomaly Detection in Brain Images [39.94162291765236]
We present a weakly supervised method to generate a healthy version of a diseased image and then use it to obtain a pixel-wise anomaly map.
We employ a diffusion model trained on healthy samples and combine the Denoising Diffusion Probabilistic Model (DDPM) and the Denoising Diffusion Implicit Model (DDIM) at each step of the sampling process.
arXiv Detail & Related papers (2023-08-03T21:56:50Z)
- VesselMorph: Domain-Generalized Retinal Vessel Segmentation via Shape-Aware Representation [12.194439938007672]
Domain shift is an inherent property of medical images and has become a major obstacle for large-scale deployment of learning-based algorithms.
We propose a method named VesselMorph which generalizes the 2D retinal vessel segmentation task by synthesizing a shape-aware representation.
VesselMorph achieves superior generalization performance compared with competing methods in different domain shift scenarios.
arXiv Detail & Related papers (2023-07-01T06:02:22Z)
- Structure Embedded Nucleus Classification for Histopathology Images [51.02953253067348]
Most neural network based methods are affected by the local receptive field of convolutions.
We propose a novel polygon-structure feature learning mechanism that transforms a nucleus contour into a sequence of points sampled in order.
Next, we convert a histopathology image into a graph structure with nuclei as nodes, and build a graph neural network to embed the spatial distribution of nuclei into their representations.
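The summary above describes converting a histopathology image into a graph with nuclei as nodes. The paper's exact construction is not given here; as a minimal sketch under the common assumption that nuclei are represented by their centroids and connected to their k nearest neighbours:

```python
import numpy as np

def knn_edges(centroids, k=3):
    """Directed k-nearest-neighbour edges over nucleus centroids.

    A hypothetical graph-construction step for illustration only;
    the paper's actual method may connect nuclei differently.
    """
    n = len(centroids)
    # Pairwise squared Euclidean distances between all centroids.
    d2 = np.sum((centroids[:, None, :] - centroids[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k closest nuclei per node
    return [(i, int(j)) for i in range(n) for j in nbrs[i]]

# Four hypothetical nucleus centroids; the isolated one at (5, 5)
# still links to its nearest neighbours, keeping the graph connected.
centroids = np.array([[0, 0], [1, 0], [0, 1], [5, 5]], dtype=float)
edges = knn_edges(centroids, k=2)
```

A graph neural network would then propagate features along these edges, letting each nucleus representation absorb the spatial distribution of its neighbourhood, which is the effect the summary describes.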
arXiv Detail & Related papers (2023-02-22T14:52:06Z)
- Stain based contrastive co-training for histopathological image analysis [61.87751502143719]
We propose a novel semi-supervised learning approach for classification of histopathological images.
We employ strong supervision with patch-level annotations combined with a novel co-training loss to create a semi-supervised learning framework.
We evaluate our approach in clear cell renal cell and prostate carcinomas, and demonstrate improvement over state-of-the-art semi-supervised learning methods.
arXiv Detail & Related papers (2022-06-24T22:25:31Z)
- A deep residual learning implementation of Metamorphosis [4.4203363069188475]
We propose a deep residual learning implementation of Metamorphosis that drastically reduces the computational time at inference.
We also show that the proposed framework can easily integrate prior knowledge of the localization of topological changes.
We test our method on the BraTS 2021 dataset, showing that it outperforms current state-of-the-art methods in the alignment of images with brain tumors.
arXiv Detail & Related papers (2022-02-01T15:39:34Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Dermoscopic Image Classification with Neural Style Transfer [5.314466196448187]
We propose an adaptation of the Neural Style Transfer (NST) as a novel image pre-processing step for skin lesion classification problems.
We represent each dermoscopic image as the style image and transfer the style of the lesion onto a homogeneous content image.
This transfers the main variability of each lesion onto the same localized region, which allows us to integrate the generated images together and extract latent, low-rank style features.
arXiv Detail & Related papers (2021-05-17T03:50:51Z)
- Learning a low dimensional manifold of real cancer tissue with PathologyGAN [6.147958017186105]
We present a deep generative model that learns to simulate high-fidelity cancer tissue images.
The model is trained by a previously developed generative adversarial network, PathologyGAN.
We study the latent space using 249K images from two breast cancer cohorts.
arXiv Detail & Related papers (2020-04-13T16:18:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.