Learning Structurally Stabilized Representations for Multi-modal Lossless DNA Storage
- URL: http://arxiv.org/abs/2408.00779v1
- Date: Wed, 17 Jul 2024 06:31:49 GMT
- Title: Learning Structurally Stabilized Representations for Multi-modal Lossless DNA Storage
- Authors: Ben Cao, Tiantian He, Xue Li, Bin Wang, Xiaohu Wu, Qiang Zhang, Yew-Soon Ong,
- Abstract summary: Reed-Solomon coded single-stranded representation learning (RSRL) is a novel end-to-end model for learning representations for DNA storage.
In contrast to existing learning-based methods, the proposed RSRL is inspired by both error-correction and structural biology.
The experimental results obtained demonstrate that RSRL can store diverse types of data with much higher information density and durability but much lower error rates.
- Score: 32.00500955709341
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we present Reed-Solomon coded single-stranded representation learning (RSRL), a novel end-to-end model for learning representations for multi-modal lossless DNA storage. In contrast to existing learning-based methods, the proposed RSRL is inspired by both error-correction codec and structural biology. Specifically, RSRL first learns the representations for the subsequent storage from the binary data transformed by the Reed-Solomon codec. Then, the representations are masked by an RS-code-informed mask to focus on correcting the burst errors occurring in the learning process. With the decoded representations with error corrections, a novel biologically stabilized loss is formulated to regularize the data representations to possess stable single-stranded structures. By incorporating these novel strategies, the proposed RSRL can learn highly durable, dense, and lossless representations for the subsequent storage tasks into DNA sequences. The proposed RSRL has been compared with a number of strong baselines in real-world tasks of multi-modal data storage. The experimental results obtained demonstrate that RSRL can store diverse types of data with much higher information density and durability but much lower error rates.
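The final stage of any DNA storage pipeline maps error-protected binary data onto the four-nucleotide alphabet. The sketch below illustrates only that mapping step, at two bits per base; it is a hypothetical minimal example, not RSRL's learned representation, and it omits the Reed-Solomon parity symbols that a real pipeline would prepend before synthesis.

```python
# Minimal sketch: encode bytes as a DNA sequence (2 bits per nucleotide)
# and decode back. Illustrative only; real DNA storage pipelines add
# error-correction (e.g. Reed-Solomon) and biochemical constraints
# (GC content, homopolymer limits) on top of this naive mapping.

BASES = "ACGT"  # A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    """Map each byte to four nucleotides, most-significant bits first."""
    out = []
    for b in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(b >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    """Inverse mapping: four nucleotides back to one byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for ch in seq[i:i + 4]:
            b = (b << 2) | BASES.index(ch)
        out.append(b)
    return bytes(out)

payload = b"DNA"
strand = bytes_to_dna(payload)
assert dna_to_bytes(strand) == payload
```

At this raw density the mapping stores 2 bits per nucleotide; RSRL's contribution is learning representations that remain dense while being robust to the burst errors and structural instabilities this naive scheme ignores.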
Related papers
- Unlocking Potential Binders: Multimodal Pretraining DEL-Fusion for Denoising DNA-Encoded Libraries [51.72836644350993]
Multimodal Pretraining DEL-Fusion model (MPDF)
We develop pretraining tasks applying contrastive objectives between different compound representations and their text descriptions.
We propose a novel DEL-fusion framework that amalgamates compound information at the atomic, submolecular, and molecular levels.
arXiv Detail & Related papers (2024-09-07T17:32:21Z)
- Compressing Deep Reinforcement Learning Networks with a Dynamic Structured Pruning Method for Autonomous Driving [63.155562267383864]
Deep reinforcement learning (DRL) has shown remarkable success in complex autonomous driving scenarios.
DRL models inevitably bring high memory consumption and computation, which hinders their wide deployment in resource-limited autonomous driving devices.
We introduce a novel dynamic structured pruning approach that gradually removes a DRL model's unimportant neurons during the training stage.
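Structured pruning removes whole neurons (entire rows and the matching downstream columns) rather than individual weights, so the pruned network stays dense and hardware-friendly. As a generic illustration of the idea, the sketch below uses a simple L2-norm magnitude criterion as a stand-in importance score; the paper's method instead identifies unimportant neurons gradually during DRL training.

```python
import numpy as np

def prune_neurons(w_in, w_out, keep_ratio=0.5):
    """Structured pruning sketch: drop whole hidden neurons (rows of
    w_in and the matching columns of w_out) with the smallest L2 weight
    norms. Magnitude is a generic importance proxy, not the learned
    criterion used in the paper."""
    importance = np.linalg.norm(w_in, axis=1)          # one score per neuron
    k = max(1, int(len(importance) * keep_ratio))      # neurons to keep
    keep = np.sort(np.argsort(importance)[-k:])        # indices of survivors
    return w_in[keep], w_out[:, keep]

rng = np.random.default_rng(0)
w1 = rng.normal(size=(8, 4))   # hidden layer: 8 neurons, 4 inputs
w2 = rng.normal(size=(2, 8))   # output layer consuming those 8 neurons
p1, p2 = prune_neurons(w1, w2, keep_ratio=0.25)
assert p1.shape == (2, 4) and p2.shape == (2, 2)
```

Because both the row and its consuming column are removed together, the pruned matrices still compose correctly, which is what makes structured pruning attractive for deployment on resource-limited devices.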
arXiv Detail & Related papers (2024-02-07T09:00:30Z)
- Implicit Neural Multiple Description for DNA-based data storage [6.423239719448169]
DNA exhibits remarkable potential as a data storage solution due to its impressive storage density and long-term stability.
However, developing this novel medium comes with its own set of challenges, particularly in addressing errors arising from storage and biological manipulations.
We have pioneered a novel compression scheme and a cutting-edge Multiple Description Coding (MDC) technique utilizing neural networks for DNA data storage.
arXiv Detail & Related papers (2023-09-13T13:42:52Z)
- RPLHR-CT Dataset and Transformer Baseline for Volumetric Super-Resolution from CT Scans [12.066026343488453]
Coarse resolution may lead to difficulties in medical diagnosis by either physicians or computer-aided diagnosis algorithms.
Deep learning-based volumetric super-resolution (SR) methods are feasible ways to improve resolution.
This paper builds the first public real-paired dataset RPLHR-CT as a benchmark for volumetric SR.
Considering the inherent shortcoming of CNN, we also propose a transformer volumetric super-resolution network (TVSRN) based on attention mechanisms.
arXiv Detail & Related papers (2022-06-13T15:35:59Z)
- Single-Read Reconstruction for DNA Data Storage Using Transformers [0.0]
We propose a novel approach for single-read reconstruction using an encoder-decoder Transformer architecture for DNA-based data storage.
Our model achieves lower error rates when reconstructing the original data from a single read of each DNA strand.
This is the first demonstration of using deep learning models for single-read reconstruction in DNA-based storage.
arXiv Detail & Related papers (2021-09-12T10:01:59Z)
- DML-GANR: Deep Metric Learning With Generative Adversarial Network Regularization for High Spatial Resolution Remote Sensing Image Retrieval [9.423185775609426]
We develop a deep metric learning approach with generative adversarial network regularization (DML-GANR) for HSR-RSI retrieval.
The experimental results on the three data sets demonstrate the superior performance of DML-GANR over state-of-the-art techniques in HSR-RSI retrieval.
arXiv Detail & Related papers (2020-10-07T02:26:03Z)
- Understanding Self-supervised Learning with Dual Deep Networks [74.92916579635336]
We propose a novel framework to understand contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks.
We prove that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator.
To further study what role the covariance operator plays and which features are learned in such a process, we model data generation and augmentation processes through a hierarchical latent tree model (HLTM).
arXiv Detail & Related papers (2020-10-01T17:51:49Z)
- A Systematic Approach to Featurization for Cancer Drug Sensitivity Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found RNA-seq data to be highly redundant and informative even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning has shown great potential in cross-view classification in recent years.
Existing LMvSL-based methods cannot handle view discrepancy and discriminancy well at the same time.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
- MS-Net: Multi-Site Network for Improving Prostate Segmentation with Heterogeneous MRI Data [75.73881040581767]
We propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations.
Our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
arXiv Detail & Related papers (2020-02-09T14:11:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.