TEMImageNet Training Library and AtomSegNet Deep-Learning Models for
High-Precision Atom Segmentation, Localization, Denoising, and
Super-Resolution Processing of Atomic-Resolution Images
- URL: http://arxiv.org/abs/2012.09093v2
- Date: Sat, 20 Feb 2021 06:14:18 GMT
- Title: TEMImageNet Training Library and AtomSegNet Deep-Learning Models for
High-Precision Atom Segmentation, Localization, Denoising, and
Super-Resolution Processing of Atomic-Resolution Images
- Authors: Ruoqian Lin, Rui Zhang, Chunyang Wang, Xiao-Qing Yang, Huolin L. Xin
- Abstract summary: Atom segmentation and localization, noise reduction, and deblurring of atomic-resolution scanning transmission electron microscopy (STEM) images are challenging tasks.
Here, we report the development of a training library and a deep learning method that can perform robust and precise atom segmentation, localization, denoising, and super-resolution processing.
We have deployed our deep-learning models to a desktop app with a graphical user interface and the app is free and open-source.
- Score: 9.275446527724144
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Atom segmentation and localization, noise reduction, and deblurring of
atomic-resolution scanning transmission electron microscopy (STEM) images with
high precision and robustness are challenging tasks. Although several
conventional algorithms, such as thresholding, edge detection, and clustering,
can achieve reasonable performance in some predefined scenarios, they tend to
fail when interference from the background is strong and unpredictable. In
particular, for atomic-resolution STEM images, there is so far no
well-established algorithm robust enough to segment or detect all atomic
columns when there is large thickness variation in a recorded image.
Herein, we report the development of a training library and a deep learning
method that can perform robust and precise atom segmentation, localization,
denoising, and super-resolution processing of experimental images. Despite
being trained on simulated images, the deep-learning model self-adapts to
experimental STEM images and shows outstanding performance in atom detection
and localization under challenging contrast conditions, with precision that
consistently outperforms the state-of-the-art two-dimensional Gaussian fit
method. Going a step further, we have deployed our deep-learning models in a
free, open-source desktop app with a graphical user interface. We have also
built a TEMImageNet project website for easy browsing and downloading of the
training data.
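
For illustration only, the minimal sketch below shows the kind of post-processing that such a segmentation output enables: sub-pixel atom-column coordinates taken as intensity-weighted centroids of a predicted probability map, alongside a conventional two-dimensional Gaussian fit of the sort the paper uses as its precision baseline. The probability map `prob_map` is assumed to come from a trained segmentation model; none of the function names below belong to the AtomSegNet app or its published code.

import numpy as np
from scipy import ndimage, optimize

def columns_from_mask(prob_map, threshold=0.5):
    """Sub-pixel column positions as intensity-weighted centroids of a
    predicted atom-probability map (hypothetical model output)."""
    mask = prob_map > threshold
    labels, n = ndimage.label(mask)
    return np.array(ndimage.center_of_mass(prob_map, labels, range(1, n + 1)))

def gaussian_2d(coords, amp, y0, x0, sigma, offset):
    # Isotropic 2D Gaussian, flattened for curve_fit.
    y, x = coords
    return (amp * np.exp(-((y - y0) ** 2 + (x - x0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

def refine_by_gaussian_fit(image, peak, window=7):
    """Conventional baseline: refine one column position by fitting a 2D
    Gaussian in a small window around an initial (y, x) estimate."""
    y0, x0 = np.round(peak).astype(int)
    half = window // 2
    patch = image[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    yy, xx = np.mgrid[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    p0 = (patch.max() - patch.min(), y0, x0, 1.5, patch.min())
    popt, _ = optimize.curve_fit(gaussian_2d, (yy, xx), patch.ravel(), p0=p0)
    return popt[1], popt[2]  # refined (y, x)

In practice the centroid step alone is often sufficient once the segmentation map is clean; the Gaussian-fit routine is included only to make the precision comparison stated in the abstract concrete.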
Related papers
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z) - An Ensemble Model for Distorted Images in Real Scenarios [0.0]
In this paper, we apply the object detector YOLOv7 to detect distorted images from the CDCOCO dataset.
Through carefully designed optimizations, our model achieves excellent performance on the CDCOCO test set.
Our denoising detection model can denoise and repair distorted images, making the model useful in a variety of real-world scenarios and environments.
arXiv Detail & Related papers (2023-09-26T15:12:55Z) - GDIP: Gated Differentiable Image Processing for Object-Detection in
Adverse Conditions [15.327704761260131]
We present a Gated Differentiable Image Processing (GDIP) block, a domain-agnostic network architecture.
Our proposed GDIP block learns to enhance images directly through the downstream object detection loss.
We demonstrate significant improvement in detection performance over several state-of-the-art methods.
arXiv Detail & Related papers (2022-09-29T16:43:13Z) - Multitask AET with Orthogonal Tangent Regularity for Dark Object
Detection [84.52197307286681]
We propose a novel multitask auto-encoding transformation (MAET) model to enhance object detection in dark environments.
In a self-supervised manner, the MAET learns the intrinsic visual structure by encoding and decoding the realistic illumination-degrading transformation.
We have achieved the state-of-the-art performance using synthetic and real-world datasets.
arXiv Detail & Related papers (2022-05-06T16:27:14Z) - Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z) - Image-specific Convolutional Kernel Modulation for Single Image
Super-resolution [85.09413241502209]
We propose a novel image-specific convolutional kernel modulation (IKM) method.
We exploit the global contextual information of an image or feature to generate an attention weight for adaptively modulating the convolutional kernels.
Experiments on single image super-resolution show that the proposed method achieves superior performance over state-of-the-art methods.
arXiv Detail & Related papers (2021-11-16T11:05:10Z) - Sci-Net: a Scale Invariant Model for Building Detection from Aerial
Images [0.0]
We propose a Scale-invariant neural network (Sci-Net) that is able to segment buildings present in aerial images at different spatial resolutions.
Specifically, we modified the U-Net architecture and fused it with dense Atrous Spatial Pyramid Pooling (ASPP) to extract fine-grained multi-scale representations.
arXiv Detail & Related papers (2021-11-12T16:45:20Z) - On Deep Learning Techniques to Boost Monocular Depth Estimation for
Autonomous Navigation [1.9007546108571112]
Inferring the depth of images is a fundamental inverse problem within the field of Computer Vision.
We propose a new lightweight and fast supervised CNN architecture combined with novel feature extraction models.
We also introduce an efficient surface normals module, jointly with a simple geometric 2.5D loss function, to solve SIDE problems.
arXiv Detail & Related papers (2020-10-13T18:37:38Z) - Deep Attentive Generative Adversarial Network for Photo-Realistic Image
De-Quantization [25.805568996596783]
De-quantization can improve the visual quality of low bit-depth images for display on high bit-depth screens.
This paper proposes the DAGAN algorithm to perform super-resolution on image intensity resolution.
The DenseResAtt module consists of dense residual blocks equipped with a self-attention mechanism.
arXiv Detail & Related papers (2020-04-07T06:45:01Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)