Fast fluorescence lifetime imaging analysis via extreme learning machine
- URL: http://arxiv.org/abs/2203.13754v1
- Date: Fri, 25 Mar 2022 16:34:51 GMT
- Title: Fast fluorescence lifetime imaging analysis via extreme learning machine
- Authors: Zhenya Zang, Dong Xiao, Quan Wang, Zinuo Li, Wujun Xie, Yu Chen, David
Day Uei Li
- Abstract summary: We present a fast and accurate analytical method for fluorescence lifetime imaging microscopy (FLIM) using the extreme learning machine (ELM).
Results indicate that ELM can obtain higher fidelity, even in low-photon conditions.
- Score: 7.7721777809498676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a fast and accurate analytical method for fluorescence lifetime
imaging microscopy (FLIM) using the extreme learning machine (ELM). We used
extensive metrics to evaluate ELM and existing algorithms. First, we compared
these algorithms using synthetic datasets. Results indicate that ELM can obtain
higher fidelity, even in low-photon conditions. Afterwards, we used ELM to
retrieve lifetime components from human prostate cancer cells loaded with gold
nanosensors, showing that ELM also outperforms the iterative fitting and
non-fitting algorithms. Compared with a computationally efficient neural
network, ELM achieves comparable accuracy with less training and inference
time. As there is no back-propagation process for ELM during the training
phase, the training speed is much higher than existing neural network
approaches. The proposed strategy is promising for edge computing with online
training.
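To make the training mechanism concrete, here is a minimal single-hidden-layer ELM sketch in NumPy, with synthetic mono-exponential decays standing in for FLIM histograms. The layer width, activation, regularization, and data generation are illustrative assumptions, not the paper's configuration; the point is that the input weights stay random and fixed, so training reduces to one regularized least-squares solve.

```python
import numpy as np

# Minimal single-hidden-layer ELM sketch (illustrative, not the authors' model).
# X: one decay histogram per pixel (n_samples x n_time_bins);
# Y: target lifetimes for supervised training (n_samples x n_outputs).
rng = np.random.default_rng(0)

def elm_train(X, Y, n_hidden=256, reg=1e-3):
    """Closed-form fit of the output weights; no back-propagation."""
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, fixed input weights
    b = rng.standard_normal(n_hidden)                # random, fixed biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    # Regularized least squares: beta = (H^T H + reg*I)^-1 H^T Y
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: synthetic mono-exponential decays with Poisson photon noise.
t = np.linspace(0, 10, 64)                       # time bins (ns, assumed)
tau = rng.uniform(0.5, 4.0, size=(2000, 1))      # ground-truth lifetimes
X = rng.poisson(50 * np.exp(-t[None, :] / tau)) / 50.0
W, b, beta = elm_train(X, tau)
print(elm_predict(X[:3], W, b, beta).ravel(), "vs", tau[:3].ravel())
```

Because training is a single linear solve, refitting on newly acquired data is cheap; that property is what makes the online-training, edge-computing scenario in the abstract plausible.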
Related papers
- Extreme Learning Machines for Fast Training of Click-Through Rate Prediction Models [0.0]
Extreme Learning Machines (ELM) provide a fast alternative to traditional gradient-based learning in neural networks.
We explore the application of ELMs for the task of Click-Through Rate (CTR) prediction.
We introduce an ELM-based model enhanced with embedding layers to improve performance on CTR tasks; a minimal sketch of this combination follows the entry.
arXiv Detail & Related papers (2024-06-25T13:50:00Z)
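As a rough illustration of the embedding-plus-ELM combination described above, the sketch below maps categorical fields through a fixed random embedding table and fits only the ELM output weights in closed form. The field count, vocabulary size, dimensions, and the final sigmoid squashing are assumptions for illustration; the paper's actual embedding and model details may differ.

```python
import numpy as np

# Hypothetical embedding + ELM head for CTR prediction (sizes are made up).
rng = np.random.default_rng(1)
n_fields, vocab, emb_dim, n_hidden = 4, 1000, 8, 128
emb = 0.1 * rng.standard_normal((vocab, emb_dim))   # fixed random embedding table

ids = rng.integers(0, vocab, size=(5000, n_fields)) # categorical feature ids
clicks = rng.integers(0, 2, size=(5000, 1)).astype(float)

X = emb[ids].reshape(len(ids), -1)                  # concatenate field embeddings
W = rng.standard_normal((X.shape[1], n_hidden))     # random, fixed ELM weights
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)
beta = np.linalg.solve(H.T @ H + 1e-2 * np.eye(n_hidden), H.T @ clicks)
p_click = 1.0 / (1.0 + np.exp(-(H @ beta)))         # squash scores into (0, 1)
```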
- Fast Cerebral Blood Flow Analysis via Extreme Learning Machine [4.373558495838564]
We introduce a rapid and precise analytical approach for analyzing cerebral blood flow (CBF) using diffuse correlation spectroscopy (DCS).
We assess existing algorithms using synthetic datasets for both semi-infinite and multi-layer models.
Results demonstrate that ELM consistently achieves higher fidelity across various noise levels and optical parameters, showcasing robust generalization ability and outperforming iterative fitting algorithms.
arXiv Detail & Related papers (2024-01-10T23:01:35Z)
- Gradual Optimization Learning for Conformational Energy Minimization [69.36925478047682]
The Gradual Optimization Learning Framework (GOLF) for energy minimization with neural networks significantly reduces the amount of additional data required.
Our results demonstrate that the neural network trained with GOLF performs on par with the oracle on a benchmark of diverse drug-like molecules.
arXiv Detail & Related papers (2023-11-05T11:48:08Z)
- Fast and Accurate Reduced-Order Modeling of a MOOSE-based Additive Manufacturing Model with Operator Learning [1.4528756508275622]
The present work constructs a fast and accurate reduced-order model (ROM) for an additive manufacturing (AM) model.
We benchmark the performance of several operator learning (OL) methods against a conventional deep neural network (DNN)-based ROM.
arXiv Detail & Related papers (2023-08-04T17:00:34Z)
- HOAX: A Hyperparameter Optimization Algorithm Explorer for Neural Networks [0.0]
The bottleneck for trajectory-based methods to study photoinduced processes is still the huge number of electronic structure calculations.
We present an innovative solution in which the number of electronic structure calculations is drastically reduced by employing machine learning algorithms and methods borrowed from artificial intelligence.
arXiv Detail & Related papers (2023-02-01T11:12:35Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch; a minimal usage sketch follows this entry.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
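For readers unfamiliar with snnTorch, below is a minimal leaky integrate-and-fire layer unrolled over time with a surrogate spike gradient, following standard snnTorch usage; this is not the IPU-optimized release from the entry, and the layer sizes are arbitrary.

```python
import torch
import snntorch as snn
from snntorch import surrogate

# Minimal LIF layer in snnTorch; the surrogate gradient makes the
# non-differentiable spike usable with back-propagation through time.
fc = torch.nn.Linear(784, 10)
lif = snn.Leaky(beta=0.9, spike_grad=surrogate.fast_sigmoid())

x = torch.rand(100, 32, 784)        # (time steps, batch, input features)
mem = lif.init_leaky()              # initial membrane potential
spikes = []
for step in range(x.shape[0]):      # unroll the network over time
    spk, mem = lif(fc(x[step]), mem)
    spikes.append(spk)
out = torch.stack(spikes).sum(0)    # spike counts serve as class scores
```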
- Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions is still computationally and energy expensive.
We propose a new benchmark for computing tactile pattern recognition at the edge through letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients, then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper; a generic per-layer bit-width sketch follows this entry.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
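The summary above does not spell out the quantizer, so here is the generic building block that mixed precision schemes allocate: uniform symmetric quantization with a per-layer bit-width. The layer names and bit assignments below are made up purely to illustrate sensitivity-aware allocation.

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric ("fake") quantization of weights to a given bit-width."""
    qmax = 2 ** (bits - 1) - 1              # e.g. 7 for signed 4-bit
    scale = np.abs(w).max() / qmax          # map the max magnitude to qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

rng = np.random.default_rng(2)
layers = {name: rng.standard_normal(100)
          for name in ("embedding", "lstm_gate", "output_proj")}
bit_plan = {"embedding": 8, "lstm_gate": 4, "output_proj": 2}  # hypothetical

for name, w in layers.items():
    mse = np.mean((w - quantize(w, bit_plan[name])) ** 2)
    print(f"{name}: {bit_plan[name]}-bit, reconstruction MSE {mse:.4f}")
```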
- Can we learn gradients by Hamiltonian Neural Networks? [68.8204255655161]
We propose a meta-learner based on ODE neural networks that learns gradients.
We demonstrate that our method outperforms a meta-learner based on LSTM for an artificial task and the MNIST dataset with ReLU activations in the optimizee.
arXiv Detail & Related papers (2021-10-31T18:35:10Z)
- Learning Neural Network Quantum States with the Linear Method [0.0]
We show that the linear method can be used successfully for the optimization of complex valued neural network quantum states.
We compare the LM to the state-of-the-art stochastic reconfiguration (SR) algorithm and find that the LM requires up to an order of magnitude fewer iterations for convergence; the textbook form of the linear method is recalled after this entry.
arXiv Detail & Related papers (2021-04-22T12:18:33Z)
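For context, here is the textbook formulation of the linear method in variational Monte Carlo, stated for orientation only; the paper may use a different variant.

```latex
% Expand the ansatz to first order in the parameter shift \delta p:
\[
  |\psi(p+\delta p)\rangle \;\approx\; |\psi_0\rangle
    + \sum_i \delta p_i \, |\partial_i \psi_0\rangle .
\]
% In the basis \{|\psi_0\rangle, |\partial_i \psi_0\rangle\}, estimate
\[
  S_{kl} = \langle \psi_k | \psi_l \rangle , \qquad
  H_{kl} = \langle \psi_k | \hat{H} | \psi_l \rangle ,
\]
% and solve the generalized eigenvalue problem for the lowest state:
\[
  H \, c = E \, S \, c , \qquad \delta p_i = c_i / c_0 .
\]
```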
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network.
Our algorithm requires a much smaller number of communication rounds in theory.
Our experiments on several datasets show the effectiveness of our algorithm and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.