Deep Learning-Assisted Co-registration of Full-Spectral Autofluorescence
Lifetime Microscopic Images with H&E-Stained Histology Images
- URL: http://arxiv.org/abs/2202.07755v1
- Date: Tue, 15 Feb 2022 22:09:06 GMT
- Title: Deep Learning-Assisted Co-registration of Full-Spectral Autofluorescence
Lifetime Microscopic Images with H&E-Stained Histology Images
- Authors: Qiang Wang, Susan Fernandes, Gareth O. S. Williams, Neil Finlayson,
Ahsan R. Akram, Kevin Dhaliwal, James R. Hopgood, Marta Vallejo
- Abstract summary: We show an unsupervised image-to-image translation network
that significantly improves the success of co-registration with H&E-stained
histology images.
The approach could be readily extended to lifetime images beyond the studied
emission-wavelength range and to other staining technologies.
- Score: 5.9861067768807
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autofluorescence lifetime images reveal unique characteristics of endogenous
fluorescence in biological samples. Comprehensive understanding and clinical
diagnosis rely on co-registration with the gold standard, histology images,
which is extremely challenging due to the differences between the two imaging
modalities. Here, we show that an unsupervised image-to-image translation
network significantly improves the success of co-registration performed with a
conventional optimisation-based regression network, and that the approach is
applicable to autofluorescence lifetime images at different emission
wavelengths. A preliminary blind comparison by experienced researchers shows
the superiority of our method for co-registration. The results also indicate
that the approach generalises to other image formats, such as fluorescence
intensity images. With the registration in place, stitched images illustrate
distinct differences in spectral lifetime across unstained tissue, enabling
macro-level rapid visual identification of lung cancer and cellular-level
characterisation of cell variants and common cell types. The approach could be
readily extended to lifetime images beyond this emission range and to other
staining technologies.
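As a concrete illustration of the two-stage idea in the abstract, the following
is a minimal sketch, not the authors' implementation: a CycleGAN-style
generator (hypothetical checkpoint name) renders the lifetime image in an
H&E-like style, and SimpleITK's standard mutual-information registration
stands in as the conventional optimisation-based step.

```python
# Minimal sketch of the two-stage idea (not the authors' code):
# (1) translate a lifetime image into an H&E-like appearance with a
#     pretrained CycleGAN-style generator (hypothetical checkpoint),
# (2) register the translated image to the real H&E image with a
#     conventional intensity-based optimiser (SimpleITK stand-in).
import torch
import SimpleITK as sitk

# (1) Unsupervised image-to-image translation.
generator = torch.jit.load("flim_to_he_generator.pt")  # hypothetical file
generator.eval()

flim = torch.rand(1, 1, 512, 512)      # placeholder lifetime image tensor
with torch.no_grad():
    he_like = generator(flim)          # lifetime image in H&E-like style

# (2) Conventional optimisation-based registration against the real H&E
# image (assumed here to be a grayscale export of the stained slide).
fixed = sitk.ReadImage("he_histology_gray.png", sitk.sitkFloat32)
moving = sitk.GetImageFromArray(he_like.squeeze().numpy())

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsRegularStepGradientDescent(
    learningRate=1.0, minStep=1e-4, numberOfIterations=200)
reg.SetInitialTransform(
    sitk.CenteredTransformInitializer(fixed, moving, sitk.AffineTransform(2)))
reg.SetInterpolator(sitk.sitkLinear)

transform = reg.Execute(fixed, moving)  # affine mapping lifetime -> histology
print(transform)
```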
Related papers
- A Time-Intensity Aware Pipeline for Generating Late-Stage Breast DCE-MRI using Generative Adversarial Models [0.3499870393443268]
A novel loss function that leverages the biological behavior of contrast agent (CA) in tissue is proposed to optimize a pixel-attention-based generative model.
Unlike traditional normalization and standardization methods, we developed a new normalization strategy that maintains the contrast enhancement pattern across the image sequences at multiple timestamps.
arXiv Detail & Related papers (2024-09-03T04:31:49Z)
- k-SALSA: k-anonymous synthetic averaging of retinal images via local style alignment [6.36950432352094]
We introduce k-SALSA, a generative adversarial network (GAN)-based framework for synthesizing retinal fundus images.
k-SALSA brings together state-of-the-art techniques for training and inverting GANs to achieve practical performance on retinal images.
Our work represents a step toward broader sharing of retinal images for scientific collaboration.
arXiv Detail & Related papers (2023-03-20T01:47:04Z)
- Stain-invariant self supervised learning for histopathology image analysis [74.98663573628743]
We present a self-supervised algorithm for several classification tasks within hematoxylin and eosin stained images of breast cancer.
Our method achieves the state-of-the-art performance on several publicly available breast cancer datasets.
arXiv Detail & Related papers (2022-11-14T18:16:36Z)
- Multi-modal Retinal Image Registration Using a Keypoint-Based Vessel Structure Aligning Network [9.988115865060589]
We propose an end-to-end trainable deep learning method for multi-modal retinal image registration.
Our method extracts convolutional features from the vessel structure for keypoint detection and description.
The keypoint detection and description network and graph neural network are jointly trained in a self-supervised manner.
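For intuition, here is a classical OpenCV stand-in for this learned pipeline:
ORB keypoints and brute-force matching replace the trained detector/descriptor
and graph-neural-network matcher, with RANSAC estimating the transform. File
names are placeholders.

```python
# Classical stand-in for the learned keypoint pipeline: detect and describe
# keypoints on the two modalities, match descriptors, and estimate a
# homography with RANSAC. OpenCV replaces the paper's trained networks.
import cv2
import numpy as np

fixed = cv2.imread("fundus_modality_a.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("fundus_modality_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(fixed, None)   # keypoints + descriptors
kp2, des2 = orb.detectAndCompute(moving, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp the moving modality into the fixed modality's frame.
registered = cv2.warpPerspective(moving, H, fixed.shape[::-1])
```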
arXiv Detail & Related papers (2022-07-21T14:36:51Z)
- OADAT: Experimental and Synthetic Clinical Optoacoustic Data for Standardized Image Processing [62.993663757843464]
Optoacoustic (OA) imaging is based on excitation of biological tissues with nanosecond-duration laser pulses followed by detection of ultrasound waves generated via light-absorption-mediated thermoelastic expansion.
OA imaging features a powerful combination of rich optical contrast and high resolution in deep tissues.
No standardized datasets generated with different types of experimental set-ups and associated processing methods are available to facilitate advances in broader applications of OA in clinical settings.
arXiv Detail & Related papers (2022-06-17T08:11:26Z)
- Deepfake histological images for enhancing digital pathology [0.40631409309544836]
We develop a generative adversarial network model that synthesizes pathology images constrained by class labels.
We investigate the ability of this framework in synthesizing realistic prostate and colon tissue images.
We extend the approach to significantly more complex images from colon biopsies and show that the complex microenvironment in such tissues can also be reproduced.
arXiv Detail & Related papers (2022-06-16T17:11:08Z)
- Texture Characterization of Histopathologic Images Using Ecological Diversity Measures and Discrete Wavelet Transform [82.53597363161228]
This paper proposes a method for characterizing texture across histopathologic images with a considerable success rate.
It is possible to quantify the intrinsic properties of such images with promising accuracy on two histopathologic image (HI) datasets.
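A hedged sketch of features in this spirit, assuming Shannon entropy over
wavelet-coefficient histograms as a simple proxy for the paper's ecological
diversity indices (PyWavelets and SciPy here, not the authors' code):

```python
# Sketch: per-subband texture features via a discrete wavelet transform,
# summarised with Shannon entropy as a stand-in diversity measure.
import numpy as np
import pywt
from scipy.stats import entropy

def wavelet_diversity_features(gray_image: np.ndarray,
                               wavelet: str = "haar") -> np.ndarray:
    # One-level 2-D DWT: approximation + horizontal/vertical/diagonal detail.
    cA, (cH, cV, cD) = pywt.dwt2(gray_image.astype(float), wavelet)
    feats = []
    for band in (cA, cH, cV, cD):
        hist, _ = np.histogram(band, bins=64, density=True)
        hist = hist[hist > 0]        # drop empty bins before the log
        feats.append(entropy(hist))  # Shannon diversity of the subband
    return np.array(feats)           # 4 features per image
```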
arXiv Detail & Related papers (2022-02-27T02:19:09Z)
- Semantic segmentation of multispectral photoacoustic images using deep learning [53.65837038435433]
Photoacoustic imaging has the potential to revolutionise healthcare.
Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information.
We present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images.
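To make the setup concrete, here is a minimal, illustrative encoder-decoder,
not the paper's architecture, that treats the wavelength dimension of
multispectral photoacoustic data as input channels; all sizes are assumptions.

```python
# Illustrative per-pixel segmentation of multispectral input: wavelengths
# become input channels of a tiny encoder-decoder producing class logits.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_wavelengths: int = 16, n_classes: int = 6):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(n_wavelengths, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(64, n_classes, 1))   # 1x1 conv -> per-pixel logits

    def forward(self, x):                   # x: (B, wavelengths, H, W)
        return self.decode(self.encode(x))  # -> (B, n_classes, H, W)

logits = TinySegNet()(torch.rand(1, 16, 128, 128))  # -> (1, 6, 128, 128)
```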
arXiv Detail & Related papers (2021-05-20T09:33:55Z)
- Cross-Spectral Periocular Recognition with Conditional Adversarial Networks [59.17685450892182]
We propose Conditional Generative Adversarial Networks, trained to convert periocular images between visible and near-infrared spectra.
We obtain a cross-spectral periocular performance of EER=1%, and GAR>99% @ FAR=1%, which is comparable to the state-of-the-art with the PolyU database.
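The reported EER and GAR@FAR figures are derived from the genuine/impostor
verification score distributions; here is a minimal sketch of computing EER
with scikit-learn on synthetic placeholder scores (not the paper's data).

```python
# EER = operating point where the false accept rate equals the false
# reject rate, located here on a standard ROC curve.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 1000)    # same-subject cross-spectral scores
impostor = rng.normal(0.3, 0.1, 1000)   # different-subject scores

scores = np.concatenate([genuine, impostor])
labels = np.concatenate([np.ones_like(genuine), np.zeros_like(impostor)])

fpr, tpr, _ = roc_curve(labels, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]  # where FAR ~= FRR
print(f"EER ~ {eer:.3%}")
```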
arXiv Detail & Related papers (2020-08-26T15:02:04Z)
- Modeling and Enhancing Low-quality Retinal Fundus Images [167.02325845822276]
Low-quality fundus images increase uncertainty in clinical observation and lead to the risk of misdiagnosis.
We propose a clinically oriented fundus enhancement network (cofe-Net) to suppress global degradation factors.
Experiments on both synthetic and real images demonstrate that our algorithm effectively corrects low-quality fundus images without losing retinal details.
arXiv Detail & Related papers (2020-05-12T08:01:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.