Deep histological synthesis from mass spectrometry imaging for multimodal registration
- URL: http://arxiv.org/abs/2506.05441v1
- Date: Thu, 05 Jun 2025 12:45:52 GMT
- Title: Deep histological synthesis from mass spectrometry imaging for multimodal registration
- Authors: Kimberley M. Bird, Xujiong Ye, Alan M. Race, James M. Brown
- Abstract summary: This work proposes a solution that synthesises histological images from MSI, using a pix2pix model, to effectively enable unimodal registration. Preliminary results show promising synthetic histology images with limited artifacts, achieving increases in mutual information (MI) and structural similarity index measures (SSIM) of +0.924 and +0.419, respectively.
- Score: 0.5495251046328457
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Registration of histological and mass spectrometry imaging (MSI) allows for more precise identification of structural changes and chemical interactions in tissue. With histology and MSI having entirely different image formation processes and dimensionalities, registration of the two modalities remains an ongoing challenge. This work proposes a solution that synthesises histological images from MSI, using a pix2pix model, to effectively enable unimodal registration. Preliminary results show promising synthetic histology images with limited artifacts, achieving increases in mutual information (MI) and structural similarity index measures (SSIM) of +0.924 and +0.419, respectively, compared to a baseline U-Net model. Our source code is available on GitHub: https://github.com/kimberley/MIUA2025.
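The two reported measures can be reproduced independently of the authors' pipeline. Below is a minimal sketch (not the authors' evaluation code) that scores a synthesised histology image against the corresponding real section using a histogram-based mutual information estimate and scikit-image's SSIM; the file names and bin count are illustrative assumptions.

```python
# Minimal sketch of the two evaluation measures named in the abstract (MI and SSIM).
# This is NOT the authors' code; file names and the bin count are illustrative assumptions.
import numpy as np
from skimage.io import imread
from skimage.metrics import structural_similarity as ssim

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 64) -> float:
    """Mutual information (in nats) of two grayscale images from their joint histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over rows
    py = pxy.sum(axis=0, keepdims=True)   # marginal over columns
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Placeholder paths: a real H&E section and the pix2pix output synthesised from MSI.
real = imread("real_histology.png", as_gray=True)
synth = imread("synthetic_histology.png", as_gray=True)

print("MI  :", mutual_information(real, synth))
print("SSIM:", ssim(real, synth, data_range=1.0))  # as_gray images are scaled to [0, 1]
```

The paper reports these as increases over a baseline U-Net model, so in practice the same two numbers would be computed for both models' outputs and compared.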
Related papers
- Pyramid Hierarchical Masked Diffusion Model for Imaging Synthesis [6.475175425060296]
The paper presents a novel image synthesis network, the Pyramid Hierarchical Masked Diffusion Model (PHMDiff). Experiments on two challenging datasets demonstrate that PHMDiff achieves superior performance in both the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM). The PHMDiff model, a multi-scale image synthesis framework across and within medical imaging modalities, shows significant advantages over other methods.
arXiv Detail & Related papers (2025-07-22T13:30:54Z)
- CSG: A Context-Semantic Guided Diffusion Approach in De Novo Musculoskeletal Ultrasound Image Generation [1.727597257312416]
We introduce a scalable semantic and context-conditioned generative model, coined CSG (Context-Semantic Guidance). We demonstrate the ability of CSG to generate findings (pathological anomalies) in musculoskeletal (MSK) ultrasound images. Results show that the synthetic images generated by CSG improve the performance of semantic segmentation models, exhibit enhanced similarity to real images, and are indistinguishable from real images according to a Turing test.
arXiv Detail & Related papers (2024-12-08T06:48:09Z)
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
We propose a unified MRI reconstruction model robust to various measurement undersampling patterns and image resolutions. Our model improves SSIM by 11% and PSNR by 4 dB over a state-of-the-art CNN (End-to-End VarNet), with 600× faster inference than diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z)
- VM-DDPM: Vision Mamba Diffusion for Medical Image Synthesis [0.8111815974227898]
We propose the Vision Mamba DDPM (VM-DDPM) based on the State Space Model (SSM).
To our best knowledge, this is the first medical image synthesis model based on the SSM-CNN hybrid architecture.
Our experimental evaluation on three datasets of different scales, i.e., ACDC, BraTS2018, and ChestXRay, demonstrates that VM-DDPM achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-05-09T10:41:18Z)
- MS-MANO: Enabling Hand Pose Tracking with Biomechanical Constraints [50.61346764110482]
We integrate a musculoskeletal system with a learnable parametric hand model, MANO, to create MS-MANO.
This model emulates the dynamics of muscles and tendons to drive the skeletal system, imposing physiologically realistic constraints on the resulting torque trajectories.
We also propose a simulation-in-the-loop pose refinement framework, BioPR, that refines the initial estimated pose through a multi-layer perceptron network.
arXiv Detail & Related papers (2024-04-16T02:18:18Z)
- An Attentive-based Generative Model for Medical Image Synthesis [18.94900480135376]
We propose an attention-based dual contrast generative model, called ADC-cycleGAN, which can synthesize medical images from unpaired data with multiple slices.
The model integrates a dual contrast loss term with the CycleGAN loss to ensure that the synthesized images are distinguishable from the source domain.
Experimental results demonstrate that the proposed ADC-cycleGAN model produces comparable samples to other state-of-the-art generative models.
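As a rough illustration of how an additional loss term can be combined with the standard CycleGAN objective, the sketch below shows a generator-side training step in PyTorch. The generators, discriminator, and the placeholder "extra" term are hypothetical; ADC-cycleGAN's actual dual contrast loss is not reproduced here.

```python
# Generic sketch of combining an extra loss term with the CycleGAN objective.
# Hypothetical generators/discriminator; the actual ADC-cycleGAN "dual contrast"
# formulation is more involved and is not reproduced here.
import torch
import torch.nn.functional as F

def cyclegan_step(G_ab, G_ba, D_b, x_a, x_b, lam_cycle=10.0, lam_extra=1.0):
    fake_b = G_ab(x_a)        # translate domain A -> B
    rec_a = G_ba(fake_b)      # cycle back B -> A

    # Least-squares adversarial loss for the A->B generator.
    pred_fake = D_b(fake_b)
    adv = F.mse_loss(pred_fake, torch.ones_like(pred_fake))

    # Cycle-consistency loss (L1), the core CycleGAN term.
    cyc = F.l1_loss(rec_a, x_a)

    # Placeholder for an additional term (e.g. a contrast-style loss) that pulls
    # the synthesized images' global statistics toward the target domain.
    extra = F.l1_loss(fake_b.mean(dim=[2, 3]), x_b.mean(dim=[2, 3]))

    return adv + lam_cycle * cyc + lam_extra * extra
```

A full implementation would also train the reverse direction, an identity term, and the discriminator updates; only the A-to-B generator terms are shown.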
arXiv Detail & Related papers (2023-06-02T14:17:37Z)
- MRIS: A Multi-modal Retrieval Approach for Image Synthesis on Diverse Modalities [19.31577453889188]
We develop an approach based on multi-modal metric learning to synthesize images of diverse modalities.
We test our approach by synthesizing cartilage thickness maps obtained from 3D magnetic resonance (MR) images using 2D radiographs.
arXiv Detail & Related papers (2023-03-17T20:58:55Z)
- AMIGO: Sparse Multi-Modal Graph Transformer with Shared-Context Processing for Representation Learning of Giga-pixel Images [53.29794593104923]
We present a novel concept of shared-context processing for whole slide histopathology images.
AMIGO uses the cellular graph within the tissue to provide a single representation for a patient.
We show that our model is strongly robust to missing information to an extent that it can achieve the same performance with as low as 20% of the data.
arXiv Detail & Related papers (2023-03-01T23:37:45Z)
- A Self-attention Guided Multi-scale Gradient GAN for Diversified X-ray Image Synthesis [0.6308539010172307]
Generative Adversarial Networks (GANs) are utilized to address the data limitation problem via the generation of synthetic images.
Training challenges such as mode collapse, non-convergence, and instability degrade a GAN's performance in synthesizing diversified and high-quality images.
This work proposes an attention-guided multi-scale gradient GAN architecture to model long-range dependencies among biomedical image features.
arXiv Detail & Related papers (2022-10-09T13:17:17Z)
- SIAN: Style-Guided Instance-Adaptive Normalization for Multi-Organ Histopathology Image Synthesis [63.845552349914186]
We propose a style-guided instance-adaptive normalization (SIAN) to synthesize realistic color distributions and textures for different organs.
SIAN's four phases work together and are integrated into a generative network to embed image semantics, style, and instance-level boundaries.
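For intuition, instance-adaptive normalization layers broadly follow the AdaIN pattern: per-instance feature statistics are normalized away and replaced by a scale and shift predicted from a style code. The sketch below is a generic layer of this kind, not the paper's SIAN module, and it omits the instance-boundary input.

```python
# Generic AdaIN-style layer: per-sample feature statistics are replaced by
# scale/shift parameters predicted from a style code. SIAN additionally uses
# instance-level boundaries; that part is not modelled here.
import torch
import torch.nn as nn

class StyleAdaptiveNorm(nn.Module):
    def __init__(self, num_features: int, style_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        self.to_gamma = nn.Linear(style_dim, num_features)
        self.to_beta = nn.Linear(style_dim, num_features)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map; style: (N, style_dim) style code
        gamma = self.to_gamma(style).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(style).unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(x) + beta
```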
arXiv Detail & Related papers (2022-09-02T16:45:46Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, in which IACycleGAN iteratively synthesizes better images.
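Perceptual losses of this kind usually compare deep features of the generated and target images. The sketch below uses torchvision's VGG16 as a generic stand-in; IACycleGAN's exact identity-aware loss and its emphasis on key facial regions are not reproduced.

```python
# Generic VGG-feature perceptual loss, as a stand-in for the identity-aware
# perceptual term described in the abstract; IACycleGAN's exact formulation
# (including its weighting of eyes and nose regions) is not reproduced here.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor (first convolutional blocks of VGG16).
vgg_features = vgg16(weights=VGG16_Weights.DEFAULT).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_loss(generated: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Inputs: (N, 3, H, W) images preprocessed as the VGG weights expect.
    return F.l1_loss(vgg_features(generated), vgg_features(target))
```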
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
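Several of the components listed in the last entry are common conditional-GAN building blocks. As one concrete example, the sketch below shows a generic multi-scale (pix2pixHD-style) discriminator that applies the same PatchGAN head to progressively downsampled inputs; it is an illustrative stand-in rather than the paper's label-wise discriminators.

```python
# Generic multi-scale discriminator sketch: the same PatchGAN head is applied
# to progressively downsampled inputs. Illustrative only; the paper's
# multi-scale label-wise discriminators are not reproduced here.
import torch
import torch.nn as nn

def patch_discriminator(in_ch: int = 1) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-wise real/fake logits
    )

class MultiScaleDiscriminator(nn.Module):
    def __init__(self, num_scales: int = 3, in_ch: int = 1):
        super().__init__()
        self.discriminators = nn.ModuleList(
            patch_discriminator(in_ch) for _ in range(num_scales)
        )
        self.down = nn.AvgPool2d(3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        outputs = []
        for d in self.discriminators:
            outputs.append(d(x))
            x = self.down(x)  # halve resolution for the next scale
        return outputs
```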