Global Voxel Transformer Networks for Augmented Microscopy
- URL: http://arxiv.org/abs/2008.02340v2
- Date: Mon, 23 Nov 2020 16:45:20 GMT
- Title: Global Voxel Transformer Networks for Augmented Microscopy
- Authors: Zhengyang Wang, Yaochen Xie, Shuiwang Ji
- Abstract summary: We introduce global voxel transformer networks (GVTNets), an advanced deep learning tool for augmented microscopy.
GVTNets are built on global voxel transformer operators (GVTOs), which are able to aggregate global information.
We apply the proposed methods on existing datasets for three different augmented microscopy tasks under various settings.
- Score: 54.730707387866076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in deep learning have led to remarkable success in augmented
microscopy, enabling us to obtain high-quality microscope images without using
expensive microscopy hardware and sample preparation techniques. However,
current deep learning models for augmented microscopy are mostly U-Net based
neural networks, thus sharing certain drawbacks that limit the performance. In
this work, we introduce global voxel transformer networks (GVTNets), an
advanced deep learning tool for augmented microscopy that overcomes intrinsic
limitations of the current U-Net based models and achieves improved
performance. GVTNets are built on global voxel transformer operators (GVTOs),
which are able to aggregate global information, as opposed to local operators
like convolutions. We apply the proposed methods on existing datasets for three
different augmented microscopy tasks under various settings. The performance is
significantly and consistently better than previous U-Net based approaches.
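The abstract does not spell out the exact GVTO formulation. As a hedged illustration only, one way a global operator can aggregate information from every voxel at once, in contrast to a convolution whose receptive field is bounded by its kernel, is plain self-attention over the flattened volume (all names, shapes, and projections below are illustrative, not the paper's):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_voxel_attention(volume, wq, wk, wv):
    # Every voxel attends to every other voxel, so each output position
    # aggregates information from the entire volume, unlike a local
    # convolution whose receptive field is fixed by the kernel size.
    d, h, w, c = volume.shape
    x = volume.reshape(-1, c)                      # (N, C), N = d*h*w voxels
    q, k, v = x @ wq, x @ wk, x @ wv               # linear projections
    attn = softmax(q @ k.T / np.sqrt(k.shape[1]))  # (N, N) global weights
    return (attn @ v).reshape(d, h, w, -1)

rng = np.random.default_rng(0)
vol = rng.standard_normal((4, 4, 4, 8))
wq, wk, wv = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = global_voxel_attention(vol, wq, wk, wv)
print(out.shape)  # (4, 4, 4, 8)
```

Note the (N, N) attention matrix makes this quadratic in the number of voxels; the published GVTOs are designed to be practical on real volumes, which this toy sketch does not address.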
Related papers
- Learning Universal Predictors [23.18743879588599]
We explore the potential of amortizing the most powerful universal predictor, namely Solomonoff Induction (SI), into neural networks by leveraging meta-learning to its limits.
We use Universal Turing Machines (UTMs) to generate training data used to expose networks to a broad range of patterns.
Our results suggest that UTM data is a valuable resource for meta-learning, and that it can be used to train neural networks capable of learning universal prediction strategies.
arXiv Detail & Related papers (2024-01-26T15:37:16Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Leveraging generative adversarial networks to create realistic scanning transmission electron microscopy images [2.5954872177280346]
Machine learning could revolutionize materials research through autonomous data collection and processing.
We employ a cycle generative adversarial network (CycleGAN) with a reciprocal space discriminator to augment simulated data with realistic spatial frequency information.
We showcase our approach by training a fully convolutional network (FCN) to identify single atom defects in a 4.5 million atom data set.
arXiv Detail & Related papers (2023-01-18T19:19:27Z)
- Towards Augmented Microscopy with Reinforcement Learning-Enhanced Workflows [0.0]
We develop a virtual environment to test and develop a network to autonomously align the electron beam without prior knowledge.
We deploy a successful model on the microscope to validate the approach and demonstrate the value of designing appropriate virtual environments.
Overall, the results highlight that by taking advantage of RL, microscope operations can be automated without the need for extensive algorithm design.
arXiv Detail & Related papers (2022-08-04T20:02:21Z)
- UniNet: Unified Architecture Search with Convolution, Transformer, and MLP [62.401161377258234]
In this paper, we propose to jointly search the optimal combination of convolution, transformer, and MLP for building a series of all-operator network architectures.
We identify that the widely-used strided convolution or pooling based down-sampling modules become the performance bottlenecks when operators are combined to form a network.
To better tackle the global context captured by the transformer and MLP operators, we propose two novel context-aware down-sampling modules.
arXiv Detail & Related papers (2021-10-08T11:09:40Z)
- Global Filter Networks for Image Classification [90.81352483076323]
We present a conceptually simple yet computationally efficient architecture that learns long-term spatial dependencies in the frequency domain with log-linear complexity.
Our results demonstrate that GFNet can be a very competitive alternative to transformer-style models and CNNs in efficiency, generalization ability and robustness.
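The core operation described, a learnable filter applied element-wise in the frequency domain, can be sketched minimally as follows (here with a fixed rather than learned filter; this is an illustration of the idea, not GFNet's implementation):

```python
import numpy as np

def global_filter_layer(x, filt):
    # Element-wise multiplication in the frequency domain is a circular
    # convolution with a kernel as large as the input, so every output
    # pixel depends on every input pixel; the FFT keeps the cost at
    # O(N log N) rather than the O(N^2) of dense global mixing.
    return np.fft.ifft2(np.fft.fft2(x) * filt).real

x = np.random.default_rng(1).standard_normal((16, 16))
identity = np.ones((16, 16))  # all-ones filter leaves the input unchanged
y = global_filter_layer(x, identity)
print(np.allclose(y, x))  # True
```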
arXiv Detail & Related papers (2021-07-01T17:58:16Z)
- Programmable 3D snapshot microscopy with Fourier convolutional networks [3.2156268397508314]
3D snapshot microscopy enables volumetric imaging as fast as a camera allows by capturing a 3D volume in a single 2D camera image.
We introduce a class of global kernel Fourier convolutional neural networks which can efficiently integrate the globally mixed information encoded in a 3D snapshot image.
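A global-kernel Fourier convolution can be sketched via the convolution theorem: multiplying two spectra implements a circular convolution whose kernel spans the whole image, so globally mixed information is integrated in a single layer. The toy check below (not the paper's architecture) verifies the FFT route against a direct reference:

```python
import numpy as np

def fft_circular_conv2d(image, kernel):
    # Convolution theorem: pointwise product of spectra equals circular
    # convolution in the spatial domain, at O(N log N) cost.
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

def direct_circular_conv2d(image, kernel):
    # Reference O(N^2) circular convolution for verification.
    h, w = image.shape
    out = np.zeros_like(image)
    for i in range(h):
        for j in range(w):
            for a in range(h):
                for b in range(w):
                    out[i, j] += image[a, b] * kernel[(i - a) % h, (j - b) % w]
    return out

rng = np.random.default_rng(2)
img = rng.standard_normal((8, 8))
ker = rng.standard_normal((8, 8))
print(np.allclose(fft_circular_conv2d(img, ker),
                  direct_circular_conv2d(img, ker)))  # True
```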
arXiv Detail & Related papers (2021-04-21T16:09:56Z)
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation [63.94623495501023]
We propose a novel machine learning architecture, which allows us to infuse a deep neural network with human-powered abstraction on the level of data.
Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation, from which a target variable, such as the cell count, can be reliably estimated.
arXiv Detail & Related papers (2020-10-20T08:36:51Z)
- Multi-element microscope optimization by a learned sensing network with composite physical layers [3.2435888122704037]
Digital microscopes are used to capture images for automated interpretation by computer algorithms.
In this work, we investigate an approach to jointly optimize multiple microscope settings, together with a classification network.
We show that the network's resulting low-resolution microscope images (20X-comparable) offer a machine learning network sufficient contrast to match the classification performance of corresponding high-resolution imagery.
arXiv Detail & Related papers (2020-06-27T16:49:37Z)
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.