Ultrafast Focus Detection for Automated Microscopy
- URL: http://arxiv.org/abs/2108.12050v1
- Date: Thu, 26 Aug 2021 22:24:41 GMT
- Title: Ultrafast Focus Detection for Automated Microscopy
- Authors: Maksim Levental, Ryan Chard, Gregg A. Wildenberg
- Abstract summary: We present a fast out-of-focus detection algorithm for electron microscopy images collected serially.
Our technique, Multi-scale Histologic Feature Detection, adapts classical computer vision techniques and is based on detecting various fine-grained histologic features.
Tests are performed that demonstrate near-real-time detection of out-of-focus conditions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in scientific instruments have resulted in a dramatic increase in the volumes and velocities of data being generated in everyday laboratories. Scanning electron microscopy is one such example, where technological advancements are now overwhelming scientists with critical data for montaging, alignment, and image segmentation -- key practices for many scientific domains, including, for example, neuroscience, where they are used to derive the anatomical relationships of the brain. These instruments now necessitate equally advanced computing resources and techniques to realize their full potential. Here we present a fast out-of-focus detection algorithm for electron microscopy images collected serially and demonstrate that it can be used to provide near-real-time quality control for neurology research. Our technique, Multi-scale Histologic Feature Detection, adapts classical computer vision techniques and is based on detecting various fine-grained histologic features. We further exploit the inherent parallelism in the technique by employing GPGPU primitives to accelerate characterization. Tests are performed that demonstrate near-real-time detection of out-of-focus conditions. We deploy these capabilities as a funcX function and show that it can be applied as data are collected using an automated pipeline. We discuss extensions that enable scaling out to support multi-beam microscopes and integration with existing focus systems for the purpose of implementing auto-focus.
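The abstract describes a workflow (a classical computer vision focus statistic computed per image, accelerated on a GPU, and deployed as a funcX function for on-the-fly quality control) without code. The sketch below is illustrative only: it substitutes the standard variance-of-Laplacian focus measure for the paper's Multi-scale Histologic Feature Detection, assumes the funcX 1.x SDK client API, and the endpoint UUID, tile filename, and threshold are hypothetical placeholders.

```python
# Illustrative sketch only: a generic out-of-focus check exposed through
# funcX. The variance-of-Laplacian statistic below is a standard classical
# CV focus measure, NOT the paper's Multi-scale Histologic Feature
# Detection; the endpoint UUID and threshold are hypothetical placeholders.

def is_out_of_focus(image_path: str, threshold: float = 100.0) -> bool:
    """Flag a tile as out-of-focus when its Laplacian variance is low."""
    # Imports live inside the function so funcX can serialize and ship it
    # to a remote endpoint without extra packaging.
    import cv2

    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError(f"could not read {image_path}")
    # Blurred images have few sharp edges, so the Laplacian response has
    # low variance; a sharp image scores well above the threshold.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < threshold


if __name__ == "__main__":
    # Register the check with funcX and run it against a freshly collected
    # tile on a (hypothetical) endpoint attached to the acquisition pipeline.
    from funcx.sdk.client import FuncXClient

    fxc = FuncXClient()
    func_id = fxc.register_function(is_out_of_focus)
    task_id = fxc.run("tile_0001.tif",
                      endpoint_id="<microscope-endpoint-uuid>",
                      function_id=func_id)
    print("out of focus:", fxc.get_result(task_id))
```

The abstract also mentions exploiting the technique's inherent parallelism with GPGPU primitives. A rough analogue, again under the same variance-of-Laplacian substitution rather than the paper's actual kernels, is to characterize a batch of tiles on the GPU with CuPy:

```python
# Hedged GPU analogue using CuPy: evaluates the same focus statistic for a
# stack of tiles on the GPU. This mirrors the abstract's GPGPU acceleration
# only in spirit; the paper's actual kernels are not reproduced here.
import cupy as cp
from cupyx.scipy import ndimage as gpu_ndimage


def gpu_sharpness(tiles: cp.ndarray) -> cp.ndarray:
    """Variance of the Laplacian for each (H, W) tile in an (N, H, W) stack."""
    tiles = tiles.astype(cp.float32)
    lap = cp.stack([gpu_ndimage.laplace(t) for t in tiles])
    return lap.var(axis=(1, 2))
```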
Related papers
- Gravitational cell detection and tracking in fluorescence microscopy data (2023-12-06)
  We present a novel approach based on gravitational force fields that can compete with, and potentially outperform, modern machine learning models.
  This method includes detection, segmentation, and tracking elements, with results demonstrated on a Cell Tracking Challenge dataset.
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing (2023-01-17)
  We propose a framework for autonomous robotic navigation for subretinal injection.
  Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
  Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
- Microscopy is All You Need (2022-10-12)
  We argue that a promising pathway for the development of machine learning methods is via the route of domain-specific deployable algorithms.
  This will benefit both fundamental physical studies and serve as a test bed for more complex autonomous systems such as robotics and manufacturing.
- Bayesian Active Learning for Scanning Probe Microscopy: from Gaussian Processes to Hypothesis Learning (2022-05-30)
  We discuss the basic principles of Bayesian active learning and illustrate its applications for scanning probe microscopes (SPMs).
  These frameworks allow for the use of prior data, the discovery of specific functionalities as encoded in spectral data, and exploration of physical laws manifesting during the experiment.
- Learning multi-scale functional representations of proteins from single-cell microscopy data (2022-05-24)
  We show that simple convolutional networks trained on localization classification can learn protein representations that encapsulate diverse functional information.
  We also propose a robust evaluation strategy to assess the quality of protein representations across different scales of biological function.
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? (2021-11-09)
  Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
  Existing literature has focused primarily on the fully automated case, but the resulting techniques cannot reliably detect disinformation across the varied topics, sources, and time scales required for military applications.
  By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods for a partially automated disinformation detection system.
- An Automated Scanning Transmission Electron Microscope Guided by Sparse Data Analytics (2021-09-30)
  We discuss the design of a closed-loop instrument control platform guided by emerging sparse data analytics.
  We demonstrate how a centralized controller, informed by machine learning combining limited a priori knowledge and task-based discrimination, can drive on-the-fly experimental decision-making.
- A parameter refinement method for Ptychography based on Deep Learning concepts (2021-05-18)
  Coarse parametrisation in propagation distance, position errors, and partial coherence frequently threatens experiment viability.
  A modern Deep Learning framework is used to autonomously correct these setup incoherences, thus improving the quality of the ptychography reconstruction.
  We tested our system on both synthetic datasets and real data acquired at the TwinMic beamline of the Elettra synchrotron facility.
- Relational Graph Learning on Visual and Kinematics Embeddings for Accurate Gesture Recognition in Robotic Surgery (2020-11-03)
  We propose a novel online approach of multi-modal graph network (i.e., MRG-Net) to dynamically integrate visual and kinematics information.
  The effectiveness of our method is demonstrated with state-of-the-art results on the public JIGSAWS dataset.
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation (2020-10-20)
  We propose a novel Machine Learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
  Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation from which a target variable, such as the cell count, can be reliably estimated.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.