Design of a Graphical User Interface for Few-Shot Machine Learning
Classification of Electron Microscopy Data
- URL: http://arxiv.org/abs/2107.10387v1
- Date: Wed, 21 Jul 2021 23:02:33 GMT
- Title: Design of a Graphical User Interface for Few-Shot Machine Learning
Classification of Electron Microscopy Data
- Authors: Christina Doty, Shaun Gallagher, Wenqi Cui, Wenya Chen, Shweta
Bhushan, Marjolein Oostrom, Sarah Akers, Steven R. Spurgeon
- Abstract summary: We develop a Python-based graphical user interface that enables end users to easily conduct and visualize the output of few-shot learning models.
This interface is lightweight and can be hosted locally or on the web, providing the opportunity to reproducibly conduct, share, and crowd-source few-shot analyses.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent growth in data volumes produced by modern electron microscopes
requires rapid, scalable, and flexible approaches to image segmentation and
analysis. Few-shot machine learning, which can richly classify images from a
handful of user-provided examples, is a promising route to high-throughput
analysis. However, current command-line implementations of such approaches can
be slow and unintuitive to use, lacking the real-time feedback necessary to
perform effective classification. Here we report on the development of a
Python-based graphical user interface that enables end users to easily conduct
and visualize the output of few-shot learning models. This interface is
lightweight and can be hosted locally or on the web, providing the opportunity
to reproducibly conduct, share, and crowd-source few-shot analyses.
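The abstract describes few-shot classification from a handful of user-provided examples. A common way to realize this is nearest-centroid (prototypical-style) classification in an embedding space; the sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual model or interface, and the feature arrays stand in for embeddings a real feature extractor would produce.

```python
# Minimal sketch of few-shot nearest-centroid classification, in the spirit
# of the approach described in the abstract. The function name, inputs, and
# the use of plain Euclidean distance are illustrative assumptions.
import numpy as np

def few_shot_classify(support_features, support_labels, query_features):
    """Assign each query to the class whose support centroid is nearest.

    support_features : (n_support, d) embeddings of user-provided examples
    support_labels   : (n_support,) integer class labels
    query_features   : (n_query, d) embeddings of image patches to classify
    """
    classes = np.unique(support_labels)
    # One centroid ("prototype") per class, averaged over its support examples.
    prototypes = np.stack([
        support_features[support_labels == c].mean(axis=0) for c in classes
    ])
    # Euclidean distance from every query to every prototype, via broadcasting.
    dists = np.linalg.norm(
        query_features[:, None, :] - prototypes[None, :, :], axis=-1
    )
    return classes[np.argmin(dists, axis=1)]
```

Because only class centroids need recomputing when the user adds or removes an example, this style of classifier is cheap enough to support the real-time feedback the paper argues is missing from command-line workflows.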
Related papers
- Evaluating how interactive visualizations can assist in finding samples where and how computer vision models make mistakes [1.76602679361245]
We present two interactive visualizations in the context of Sprite, a system for creating Computer Vision (CV) models.
We study how these visualizations help Sprite's users identify (evaluate) and select (plan) images where a model is struggling and can lead to improved performance.
arXiv Detail & Related papers (2023-05-19T14:43:00Z)
- Slideflow: Deep Learning for Digital Histopathology with Real-Time Whole-Slide Visualization [49.62449457005743]
We develop a flexible deep learning library for histopathology called Slideflow.
It supports a broad array of deep learning methods for digital pathology.
It includes a fast whole-slide interface for deploying trained models.
arXiv Detail & Related papers (2023-04-09T02:49:36Z)
- Interactive Visual Feature Search [8.255656003475268]
We introduce Visual Feature Search, a novel interactive visualization that is adaptable to any CNN.
Our tool allows a user to highlight an image region and search for images from a given dataset with the most similar model features.
We demonstrate how our tool elucidates different aspects of model behavior by performing experiments on a range of applications, such as in medical imaging and wildlife classification.
arXiv Detail & Related papers (2022-11-28T04:39:03Z)
- Meta Internal Learning [88.68276505511922]
Internal learning for single-image generation is a framework in which a generator is trained to produce novel images based on a single image.
We propose a meta-learning approach that enables training over a collection of images, in order to model the internal statistics of the sample image more effectively.
Our results show that the models obtained are as suitable as single-image GANs for many common image applications.
arXiv Detail & Related papers (2021-10-06T16:27:38Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture that allows us to infuse a deep neural network with human-provided abstraction at the level of the data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- From ImageNet to Image Classification: Contextualizing Progress on Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z)
- HistomicsML2.0: Fast interactive machine learning for whole slide imaging data [6.396738205632676]
HistomicsML2.0 enables rapid learn-by-example training of machine learning classifiers for detection of histologic patterns in whole-slide imaging datasets.
HistomicsML2.0 uses convolutional networks to be readily adaptable to a variety of applications, provides a web-based user interface, and is available as a software container to simplify deployment.
arXiv Detail & Related papers (2020-01-30T20:10:26Z)
- A System for Real-Time Interactive Analysis of Deep Learning Training [66.06880335222529]
Currently available systems are limited to monitoring only the logged data that must be specified before the training process starts.
We present a new system that enables users to perform interactive queries on live processes generating real-time information.
arXiv Detail & Related papers (2020-01-05T11:33:31Z)