DNA Origami Nanostructures Observed in Transmission Electron Microscopy Images can be Characterized through Convolutional Neural Networks
- URL: http://arxiv.org/abs/2503.10950v1
- Date: Thu, 13 Mar 2025 23:31:10 GMT
- Title: DNA Origami Nanostructures Observed in Transmission Electron Microscopy Images can be Characterized through Convolutional Neural Networks
- Authors: Xingfei Wei, Qiankun Mo, Chi Chen, Mark Bathe, Rigoberto Hernandez
- Abstract summary: Convolutional neural network (CNN) models can characterize DNA origami nanostructures employed in programmable self-assembly. We benchmark the performance of 9 CNN models to characterize the ligation number of DNA origami nanostructures in transmission electron microscopy (TEM) images.
- Score: 2.974235808849024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) models remain an emerging strategy for accelerating materials design and development. We demonstrate that convolutional neural network (CNN) models can characterize DNA origami nanostructures employed in programmable self-assembly, which is important in many applications, such as biomedicine. Specifically, we benchmark the performance of 9 CNN models -- viz. AlexNet, GoogLeNet, VGG16, VGG19, ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152 -- at characterizing the ligation number of DNA origami nanostructures in transmission electron microscopy (TEM) images. We first pre-train the CNN models using a large dataset of 720 images from our coarse-grained (CG) molecular dynamics (MD) simulations. Then, we fine-tune the pre-trained CNN models using a small experimental dataset of 146 TEM images. All CNN models were found to have similar computational time requirements, while their model sizes and performances differ. Using 20 test MD images, we demonstrate that, among the pre-trained CNN models, ResNet50 and VGG16 have the highest and second-highest accuracies, respectively. Among the fine-tuned models, VGG16 was found to have the highest agreement on the test TEM images. Thus, we conclude that fine-tuned VGG16 models can quickly characterize the ligation number of nanostructures in large TEM images.
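The abstract above describes a two-stage transfer-learning workflow: first train the CNNs on simulated CG MD renders, then fine-tune on the small experimental TEM set. The PyTorch sketch below illustrates that workflow only; it is not the authors' code, and the class count, directory layout, initial weights, and hyperparameters are placeholders.

```python
# Minimal sketch (not the authors' code) of the two-stage workflow:
# stage 1 trains VGG16 on simulated (CG MD) images, stage 2 fine-tunes the
# same network on the small experimental TEM set. All paths, the class
# count, and the hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4                                   # hypothetical number of ligation-number classes
device = "cuda" if torch.cuda.is_available() else "cpu"

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # single-channel micrographs -> 3-channel input
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

model = models.vgg16(weights=None)                # whether ImageNet weights are used first is not stated
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
model = model.to(device)
criterion = nn.CrossEntropyLoss()

def train_on(folder: str, epochs: int, lr: float) -> None:
    """One training stage over an ImageFolder-style directory (placeholder layout)."""
    data = datasets.ImageFolder(folder, transform=preprocess)
    loader = torch.utils.data.DataLoader(data, batch_size=16, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()

train_on("data/md_images", epochs=10, lr=1e-4)    # stage 1: the 720 simulated images
train_on("data/tem_images", epochs=5, lr=1e-5)    # stage 2: the 146 experimental TEM images
```

The same head-replacement pattern applies to the other eight benchmarked backbones (e.g. `model.fc` for the ResNet family).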
Related papers
- A Comparative Study of CNN, ResNet, and Vision Transformers for Multi-Classification of Chest Diseases [0.0]
Vision Transformers (ViT) are powerful tools due to their scalability and ability to process large amounts of data.
We fine-tuned two variants of ViT models, one pre-trained on ImageNet and another trained from scratch, using the NIH Chest X-ray dataset.
Our study evaluates the performance of these models in the multi-label classification of 14 distinct diseases.
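As a concrete illustration of the multi-label setup this entry describes, here is a hedged PyTorch sketch that fine-tunes a torchvision ViT head with one sigmoid output per disease label; it is not the paper's code, and the tensors below are random stand-ins for X-ray batches.

```python
# Hedged sketch: multi-label fine-tuning of a torchvision ViT-B/16 head for
# 14 disease labels, using one sigmoid per label via BCEWithLogitsLoss.
import torch
import torch.nn as nn
from torchvision import models

NUM_LABELS = 14
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_LABELS)

criterion = nn.BCEWithLogitsLoss()                  # independent yes/no per disease
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One illustrative optimization step on random stand-in data.
images = torch.randn(4, 3, 224, 224)
targets = torch.randint(0, 2, (4, NUM_LABELS)).float()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()
```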
arXiv Detail & Related papers (2024-05-31T23:56:42Z)
- Unveiling the Unseen: Identifiable Clusters in Trained Depthwise Convolutional Kernels [56.69755544814834]
Recent advances in depthwise-separable convolutional neural networks (DS-CNNs) have led to novel architectures.
This paper reveals another striking property of DS-CNN architectures: discernible and explainable patterns emerge in their trained depthwise convolutional kernels in all layers.
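To make the claim concrete, the sketch below (illustrative only, not the paper's analysis) collects the trained per-channel 3x3 depthwise kernels from an ImageNet-pretrained MobileNetV2, flattened so they could be inspected or clustered.

```python
# Illustrative only: extract trained depthwise 3x3 kernels from MobileNetV2.
# Depthwise layers are identified by groups == in_channels; the clustering
# analysis from the paper is not reproduced here.
import torch.nn as nn
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)

depthwise_kernels = []
for m in model.modules():
    if (isinstance(m, nn.Conv2d) and m.groups > 1
            and m.groups == m.in_channels and m.kernel_size == (3, 3)):
        # weight shape: (channels, 1, 3, 3) -> one flattened 3x3 kernel per channel
        depthwise_kernels.append(m.weight.detach().reshape(-1, 9))

print(sum(k.shape[0] for k in depthwise_kernels), "depthwise 3x3 kernels collected")
```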
arXiv Detail & Related papers (2024-01-25T19:05:53Z)
- Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale [54.15522908057831]
We propose an adapted version of the computationally efficient MLP-Mixer for STTD forecast at scale.
Our results surprisingly show that this simple-yet-effective solution can rival SOTA baselines when tested on several traffic benchmarks.
Our findings contribute to the exploration of simple-yet-effective models for real-world STTD forecasting.
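Since the entry only names the MLP-Mixer, the sketch below shows a generic token-/channel-mixing block in PyTorch for orientation; it is a textbook Mixer block, not the adapted STTD model proposed in the paper.

```python
# Generic MLP-Mixer block: one MLP mixes across tokens (e.g. spatial
# locations), a second MLP mixes across feature channels.
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    def __init__(self, num_tokens: int, dim: int, hidden: int = 256):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(nn.Linear(num_tokens, hidden), nn.GELU(),
                                       nn.Linear(hidden, num_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(),
                                         nn.Linear(hidden, dim))

    def forward(self, x):  # x: (batch, tokens, dim)
        x = x + self.token_mlp(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))
        return x

block = MixerBlock(num_tokens=64, dim=32)
out = block(torch.randn(8, 64, 32))   # e.g. 64 sensor locations, 32 features each
```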
arXiv Detail & Related papers (2023-07-04T05:19:19Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions [95.94629864981091]
This work presents a new large-scale CNN-based foundation model, termed InternImage, which can obtain the gain from increasing parameters and training data like ViTs.
The proposed InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger and more robust patterns with large-scale parameters from massive data like ViTs.
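As background on the operator family InternImage builds on, the sketch below calls torchvision's standard DeformConv2d; InternImage itself uses a different deformable operator (DCNv3), which is not reproduced here.

```python
# Deformable 2D convolution via torchvision.ops: sampling locations are
# shifted by learned (here, zero) offsets before the convolution is applied.
import torch
from torchvision.ops import DeformConv2d

conv = DeformConv2d(in_channels=16, out_channels=32, kernel_size=3, padding=1)
x = torch.randn(1, 16, 32, 32)
# Offsets: an (x, y) shift per kernel position per output location.
offsets = torch.zeros(1, 2 * 3 * 3, 32, 32)   # zero offsets == ordinary 3x3 conv
y = conv(x, offsets)
print(y.shape)                                # torch.Size([1, 32, 32, 32])
```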
arXiv Detail & Related papers (2022-11-10T18:59:04Z)
- ViSNet: an equivariant geometry-enhanced graph neural network with vector-scalar interactive message passing for molecules [69.05950120497221]
We propose an equivariant geometry-enhanced graph neural network called ViSNet, which elegantly extracts geometric features and efficiently models molecular structures.
Our proposed ViSNet outperforms state-of-the-art approaches on multiple MD benchmarks, including MD17, revised MD17 and MD22, and achieves excellent chemical property prediction on QM9 and Molecule3D datasets.
arXiv Detail & Related papers (2022-10-29T07:12:46Z)
- Multimodal registration of FISH and nanoSIMS images using convolutional neural network models [14.71992435706872]
Multimodal registration of FISH and nanoSIMS images is challenging given the morphological distortion and background noise in both images.
We use convolutional neural networks (CNNs) for multiscale feature extraction and shape context for computation of the minimum transformation cost feature matching.
arXiv Detail & Related papers (2022-01-14T16:35:10Z)
- Classification of diffraction patterns using a convolutional neural network in single particle imaging experiments performed at X-ray free-electron lasers [53.65540150901678]
Single particle imaging (SPI) at X-ray free electron lasers (XFELs) is particularly well suited to determine the 3D structure of particles in their native environment.
For a successful reconstruction, diffraction patterns originating from a single hit must be isolated from a large number of acquired patterns.
We propose to formulate this task as an image classification problem and solve it using convolutional neural network (CNN) architectures.
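A minimal sketch of that image-classification framing, assuming a binary single-hit vs. everything-else label and single-channel detector frames; the paper's actual architecture and data pipeline are not reproduced here.

```python
# Hedged sketch: single-hit selection as binary image classification with an
# off-the-shelf ResNet18 adapted to one-channel diffraction images.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # no ImageNet prior assumed for diffraction data
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)    # classes: single hit vs. everything else

logits = model(torch.randn(4, 1, 224, 224))      # random stand-ins for detector frames
probs = torch.softmax(logits, dim=1)
```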
arXiv Detail & Related papers (2021-12-16T17:03:14Z)
- COVID-19 Electrocardiograms Classification using CNN Models [1.1172382217477126]
A novel approach is proposed to automatically diagnose COVID-19 from electrocardiogram (ECG) data using deep learning algorithms.
Several CNN models are utilized in the proposed framework, including VGG16, VGG19, InceptionResNetV2, InceptionV3, ResNet50, and DenseNet201.
Our results show relatively low accuracy for the remaining models compared to the VGG16 model, which is attributed to the small size of the dataset.
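To illustrate the kind of backbone sweep this entry describes, the sketch below swaps the classifier head of a few ImageNet-pretrained torchvision models for a shared downstream task; the model list, class count, and weights are placeholders rather than the paper's exact setup.

```python
# Illustrative backbone sweep: replace each pretrained model's classifier
# head so the same downstream task can be trained and compared across them.
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3   # placeholder class count

def build(name: str) -> nn.Module:
    if name == "vgg16":
        m = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, NUM_CLASSES)
    elif name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "densenet201":
        m = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
        m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    else:
        raise ValueError(f"unknown backbone: {name}")
    return m

candidates = {name: build(name) for name in ["vgg16", "resnet50", "densenet201"]}
```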
arXiv Detail & Related papers (2021-12-15T08:06:45Z)
- A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP [121.35904748477421]
Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision.
Transformer and multi-layer perceptron (MLP)-based models, such as Vision Transformer and MLP-Mixer, have started to lead new trends.
In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons.
arXiv Detail & Related papers (2021-08-30T06:09:02Z)
- An Interaction-based Convolutional Neural Network (ICNN) Towards Better Understanding of COVID-19 X-ray Images [0.0]
We propose a novel Interaction-based Convolutional Neural Network (ICNN) that does not make assumptions about the relevance of local information.
We demonstrate that the proposed method produces state-of-the-art prediction performance of 99.8% on a real-world data set classifying COVID-19 Chest X-ray images.
arXiv Detail & Related papers (2021-06-13T04:41:17Z)