Coupling a Recurrent Neural Network to SPAD TCSPC Systems for Real-time
Fluorescence Lifetime Imaging
- URL: http://arxiv.org/abs/2306.15599v2
- Date: Mon, 24 Jul 2023 14:41:40 GMT
- Authors: Yang Lin, Paul Mos, Andrei Ardelean, Claudio Bruschini, Edoardo
Charbon
- Abstract summary: Fluorescence lifetime imaging (FLI) has been receiving increased attention in recent years as a powerful diagnostic technique in biological and medical research.
Existing FLI systems often suffer from a tradeoff between processing speed, accuracy, and robustness.
We propose a robust approach that enables fast FLI with no degradation of accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fluorescence lifetime imaging (FLI) has been receiving increased attention in
recent years as a powerful diagnostic technique in biological and medical
research. However, existing FLI systems often suffer from a tradeoff between
processing speed, accuracy, and robustness. In this paper, we propose a robust
approach that enables fast FLI with no degradation of accuracy. The approach is
based on a SPAD TCSPC system coupled to a recurrent neural network (RNN) that
accurately estimates the fluorescence lifetime directly from raw timestamps
without building histograms, thereby drastically reducing the data transfer
volume and hardware resource utilization, and enabling FLI acquisition at video rate.
We train two variants of the RNN on a synthetic dataset and compare the results
to those obtained with the center-of-mass method (CMM) and least-squares (LS)
fitting. The results demonstrate that the two RNN variants, the gated recurrent
unit (GRU) and the long short-term memory (LSTM), are comparable to CMM and LS
fitting in accuracy, while outperforming them by a large margin in the presence
of background noise. To explore the ultimate limits of the approach, we derive
the Cramér-Rao lower bound of the measurement, showing that the RNN yields
lifetime estimates with near-optimal precision. Moreover, our FLI model, which
is trained purely on synthetic datasets, generalizes well to previously unseen,
real-world data. To demonstrate real-time operation, we have built an FLI
microscope based on Piccolo, a 32x32 SPAD sensor developed in our lab. Four
quantized GRU cores, capable of processing up to 4 million photons per second,
are deployed on a Xilinx Kintex-7 FPGA. Powered by the GRU, the FLI setup can
retrieve real-time fluorescence lifetime images at up to 10 frames per second.
The proposed FLI system is promising and ideally suited for biomedical
applications.
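The histogram-free estimation principle and the Cramér-Rao bound mentioned in the abstract can be illustrated with a small simulation. This is a minimal sketch: the lifetime value, photon count, and ideal no-background assumption below are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mono-exponential fluorescence decay: photon arrival times
# (in ns) are exponentially distributed with the true lifetime tau_true.
tau_true = 2.5        # ns, illustrative value only
n_photons = 10_000
timestamps = rng.exponential(tau_true, n_photons)

# Center-of-mass method (CMM): for an ideal mono-exponential decay with no
# background and an unbounded window, the mean arrival time estimates tau.
tau_cmm = timestamps.mean()

# Cramer-Rao lower bound for this ideal case: var(tau_hat) >= tau^2 / N,
# so the best achievable standard deviation is tau / sqrt(N).
crlb_std = tau_true / np.sqrt(n_photons)

print(f"CMM estimate: {tau_cmm:.3f} ns (true {tau_true} ns)")
print(f"CRLB std:     {crlb_std:.4f} ns")
```

In practice, SPAD TCSPC timestamps include background counts and a finite measurement window, which bias CMM; that is the regime in which the abstract reports the RNN's advantage.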
Related papers
- Unlocking Real-Time Fluorescence Lifetime Imaging: Multi-Pixel Parallelism for FPGA-Accelerated Processing [2.369919866595525]
We propose a method to achieve real-time FLI using an FPGA-based hardware accelerator.
We implement a GRU-based sequence-to-sequence (Seq2Seq) model on an FPGA board compatible with time-resolved cameras.
By integrating a GRU-based Seq2Seq model and its compressed version, called Seq2SeqLite, we were able to process multiple pixels in parallel, reducing latency compared to sequential processing.
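The per-photon GRU processing these papers describe can be sketched structurally as follows. The weights here are random, untrained placeholders and the hidden size and scalar timestamp encoding are arbitrary assumptions, not the deployed design; the point is only that the state stays constant-size per photon, so no histogram is ever built.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = 8  # small hidden state for illustration; not the papers' core size

# Random (untrained) GRU parameters -- a structural sketch only.
Wz, Wr, Wh = (rng.standard_normal((hidden, 1)) * 0.1 for _ in range(3))
Uz, Ur, Uh = (rng.standard_normal((hidden, hidden)) * 0.1 for _ in range(3))
bz = br = bh = np.zeros((hidden, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, t):
    """One GRU update per photon timestamp t (scalar input)."""
    x = np.array([[t]])
    z = sigmoid(Wz @ x + Uz @ h + bz)          # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)          # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)
    return (1 - z) * h + z * h_tilde

# Stream photons one at a time: memory stays O(hidden), no histogram built.
h = np.zeros((hidden, 1))
for t in rng.exponential(2.5, 1000):   # simulated arrival times (ns)
    h = gru_step(h, t)

print(h.shape)  # (8, 1)
```

A trained model would map the final state `h` through a small readout layer to a lifetime estimate; on an FPGA, the same recurrence is implemented with quantized arithmetic.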
arXiv Detail & Related papers (2024-10-09T18:24:23Z)
- Compressing Recurrent Neural Networks for FPGA-accelerated Implementation in Fluorescence Lifetime Imaging [3.502427552446068]
Deep learning models enable real-time inference, but can be computationally demanding due to complex architectures and large matrix operations.
This makes DL models ill-suited for direct implementation on field-programmable gate array (FPGA)-based camera hardware.
In this work, we focus on compressing recurrent neural networks (RNNs), which are well-suited for FLI time-series data processing, to enable deployment on resource-constrained FPGA boards.
arXiv Detail & Related papers (2024-10-01T17:23:26Z)
- rule4ml: An Open-Source Tool for Resource Utilization and Latency Estimation for ML Models on FPGA [0.0]
This paper introduces a novel method to predict the resource utilization and inference latency of Neural Networks (NNs) before their synthesis and implementation on FPGA.
We leverage HLS4ML, a tool-flow that helps translate NNs into high-level synthesis (HLS) code.
Our method uses trained regression models for immediate pre-synthesis predictions.
arXiv Detail & Related papers (2024-08-09T19:35:10Z)
- Empowering Snapshot Compressive Imaging: Spatial-Spectral State Space Model with Across-Scanning and Local Enhancement [51.557804095896174]
We introduce a State Space Model with Across-Scanning and Local Enhancement, named ASLE-SSM, that employs a Spatial-Spectral SSM for globally and locally balanced context encoding and for promoting cross-channel interaction.
Experimental results illustrate ASLE-SSM's superiority over existing state-of-the-art methods, with an inference speed 2.4 times faster than the Transformer-based MST while using 0.12M fewer parameters.
arXiv Detail & Related papers (2024-08-01T15:14:10Z)
- Theoretical framework for real time sub-micron depth monitoring using quantum inline coherent imaging [55.2480439325792]
Inline Coherent Imaging (ICI) is a reliable method for real-time monitoring of various laser processes, including keyhole welding, additive manufacturing, and micromachining.
The axial resolution is limited to greater than 2 µm, making ICI unsuitable for monitoring sub-micron processes.
Advancements in Quantum Optical Coherence Tomography (Q-OCT) have the potential to address this issue by achieving better than 1 µm depth resolution.
arXiv Detail & Related papers (2023-09-17T17:05:21Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs), represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers, are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- Optical-Flow-Reuse-Based Bidirectional Recurrent Network for Space-Time Video Super-Resolution [52.899234731501075]
Space-time video super-resolution (ST-VSR) simultaneously increases the spatial resolution and frame rate for a given video.
Existing methods typically suffer from difficulties in how to efficiently leverage information from a large range of neighboring frames.
We propose a coarse-to-fine bidirectional recurrent neural network instead of using ConvLSTM to leverage knowledge between adjacent frames.
arXiv Detail & Related papers (2021-10-13T15:21:30Z)
- Accelerating Recurrent Neural Networks for Gravitational Wave Experiments [1.9263019320519579]
We have developed a new architecture capable of accelerating RNN inference for analyzing time-series data from LIGO detectors.
A customizable template for this architecture has been designed, which enables the generation of low-latency FPGA designs.
arXiv Detail & Related papers (2021-06-26T20:44:02Z)
- Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory layers (LSTM) to predict the RUL.
We will discuss how the proposed end-to-end model is able to achieve such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
- Automatic Remaining Useful Life Estimation Framework with Embedded Convolutional LSTM as the Backbone [5.927250637620123]
We propose a new LSTM variant called embedded convolutional LSTM (ETM).
In ETM, a group of different 1D convolutions is embedded into the LSTM structure, preserving temporal information both between and within windows.
We show the superiority of our proposed ETM approach over the state-of-the-art approaches on several widely used benchmark data sets for RUL Estimation.
arXiv Detail & Related papers (2020-08-10T08:34:20Z)
- Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous and realistic-looking, and can be generated at least two times faster than the real speed of acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.