Optimizations of Autoencoders for Analysis and Classification of
Microscopic In Situ Hybridization Images
- URL: http://arxiv.org/abs/2304.09656v1
- Date: Wed, 19 Apr 2023 13:45:28 GMT
- Title: Optimizations of Autoencoders for Analysis and Classification of
Microscopic In Situ Hybridization Images
- Authors: Aleksandar A. Yanev, Galina D. Momcheva, Stoyan P. Pavlov
- Abstract summary: We propose a deep-learning framework to detect and classify areas of microscopic images with similar levels of gene expression.
The data we analyze requires an unsupervised learning model for which we employ a type of Artificial Neural Network - Deep Learning Autoencoders.
- Score: 68.8204255655161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Currently, analysis of microscopic In Situ Hybridization images is done
manually by experts. Precise evaluation and classification of such microscopic
images can ease the experts' work and reveal further insights about the data. In
this work, we propose a deep-learning framework to detect and classify areas of
microscopic images with similar levels of gene expression. The data we analyze
requires an unsupervised learning model, for which we employ a type of
Artificial Neural Network: Deep Learning Autoencoders. The model's performance
is optimized by balancing the length and complexity of the latent layers and by
fine-tuning hyperparameters. The results are validated with an adapted
mean-squared error (MSE) metric and by comparison with experts' evaluations.
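For illustration only: the core idea of an autoencoder whose latent-layer length is tuned and whose reconstructions are scored with the MSE metric could be sketched in PyTorch roughly as follows. The architecture, layer sizes, latent dimension, and training settings below are assumptions, not the authors' published configuration.

# Minimal autoencoder sketch (illustrative only; the paper's exact
# architecture, latent size, and hyperparameters are not given here).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=64 * 64, latent_dim=32):
        super().__init__()
        # Encoder compresses a flattened image patch into a latent vector
        # whose length (latent_dim) is the main capacity knob being balanced.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder reconstructs the patch from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()  # reconstruction error, usable for training and validation

def train_step(batch):
    # batch: tensor of flattened grayscale patches, shape (N, 64*64), values in [0, 1]
    optimizer.zero_grad()
    reconstruction, _ = model(batch)
    loss = criterion(reconstruction, batch)
    loss.backward()
    optimizer.step()
    return loss.item()

In such a setup, latent_dim would be the knob for balancing the latent layer's length against reconstruction quality, and the MSE on held-out patches could serve as the kind of validation signal described in the abstract, with the latent codes available for grouping image regions by expression level.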
Related papers
- CoTCoNet: An Optimized Coupled Transformer-Convolutional Network with an Adaptive Graph Reconstruction for Leukemia Detection [0.3573481101204926]
We propose an optimized Coupled Transformer Convolutional Network (CoTCoNet) framework for the classification of leukemia.
Our framework captures comprehensive global features and scalable spatial patterns, enabling the identification of complex and large-scale hematological features.
It achieves accuracy and F1-score values of 0.9894 and 0.9893, respectively.
arXiv Detail & Related papers (2024-10-11T13:31:28Z)
- TSynD: Targeted Synthetic Data Generation for Enhanced Medical Image Classification [0.011037620731410175]
This work aims to guide the generative model to synthesize data with high uncertainty.
We alter the feature space of the autoencoder through an optimization process.
We improve robustness against test-time data augmentations and adversarial attacks on several classification tasks.
arXiv Detail & Related papers (2024-06-25T11:38:46Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- Magnification Invariant Medical Image Analysis: A Comparison of Convolutional Networks, Vision Transformers, and Token Mixers [2.3859625728972484]
Convolutional Neural Networks (CNNs) are widely used in medical image analysis.
Their performance degrades when the magnification of the test images differs from that of the training images.
This study aims to evaluate the robustness of various deep learning architectures.
arXiv Detail & Related papers (2023-02-22T16:44:41Z)
- A parameter refinement method for Ptychography based on Deep Learning concepts [55.41644538483948]
Coarse parametrisation of the propagation distance, position errors, and partial coherence frequently threatens the viability of the experiment.
A modern Deep Learning framework is used to autonomously correct these setup incoherences, thus improving the quality of the ptychography reconstruction.
We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
arXiv Detail & Related papers (2021-05-18T10:15:17Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Deep Low-Shot Learning for Biological Image Classification and Visualization from Limited Training Samples [52.549928980694695]
In situ hybridization (ISH) gene expression pattern images from the same developmental stage are compared.
Labeling training data with precise stages is very time-consuming, even for biologists.
We propose a deep two-step low-shot learning framework to accurately classify ISH images using limited training images.
arXiv Detail & Related papers (2020-10-20T06:06:06Z)
- Improved Slice-wise Tumour Detection in Brain MRIs by Computing Dissimilarities between Latent Representations [68.8204255655161]
Anomaly detection for Magnetic Resonance Images (MRIs) can be solved with unsupervised methods.
We have proposed a slice-wise semi-supervised method for tumour detection based on the computation of a dissimilarity function in the latent space of a Variational AutoEncoder.
We show that by training the models on higher resolution images and by improving the quality of the reconstructions, we obtain results which are comparable with different baselines.
arXiv Detail & Related papers (2020-07-24T14:02:09Z)
- Multi-element microscope optimization by a learned sensing network with composite physical layers [3.2435888122704037]
Digital microscopes are used to capture images for automated interpretation by computer algorithms.
In this work, we investigate an approach to jointly optimize multiple microscope settings, together with a classification network.
We show that the resulting low-resolution microscope images (comparable to 20X magnification) provide the classification network with sufficient contrast to match the performance obtained with corresponding high-resolution imagery.
arXiv Detail & Related papers (2020-06-27T16:49:37Z)
- A Spatially Constrained Deep Convolutional Neural Network for Nerve Fiber Segmentation in Corneal Confocal Microscopic Images using Inaccurate Annotations [10.761046991755311]
We propose a spatially constrained deep convolutional neural network (DCNN) to achieve smooth and robust image segmentation.
The proposed method has been evaluated on corneal confocal microscopy (CCM) images for nerve fiber segmentation.
arXiv Detail & Related papers (2020-04-20T16:56:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.