Deep DNA Storage: Scalable and Robust DNA Storage via Coding Theory and Deep Learning
- URL: http://arxiv.org/abs/2109.00031v3
- Date: Mon, 11 Mar 2024 18:11:50 GMT
- Title: Deep DNA Storage: Scalable and Robust DNA Storage via Coding Theory and Deep Learning
- Authors: Daniella Bar-Lev, Itai Orr, Omer Sabary, Tuvi Etzion, Eitan Yaakobi
- Abstract summary: We show a modular and holistic approach that combines Deep Neural Networks (DNN) trained on simulated data, Tensor-Product (TP) based Error-Correcting Codes (ECC), and a safety margin mechanism into a single coherent pipeline.
Our work improves upon the current leading solutions with up to a x3200 increase in speed and a 40% improvement in accuracy, and offers a code rate of 1.6 bits per base in a high-noise regime.
- Score: 49.3231734733112
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: DNA-based storage is an emerging technology that enables digital information
to be archived in DNA molecules. This method enjoys major advantages over
magnetic and optical storage solutions such as exceptional information density,
enhanced data durability, and negligible power consumption to maintain data
integrity. To access the data, an information retrieval process is employed,
where some of the main bottlenecks are the scalability and accuracy, which have
a natural tradeoff between the two. Here we show a modular and holistic
approach that combines Deep Neural Networks (DNN) trained on simulated data,
Tensor-Product (TP) based Error-Correcting Codes (ECC), and a safety margin
mechanism into a single coherent pipeline. We demonstrated our solution on
3.1MB of information using two different sequencing technologies. Our work
improves upon the current leading solutions by up to x3200 increase in speed,
40% improvement in accuracy, and offers a code rate of 1.6 bits per base in a
high noise regime. In a broader sense, our work shows a viable path to
commercial DNA storage solutions hindered by current information retrieval
processes.
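For context, a code rate of 1.6 bits per base is 80% of the 2 bits per base that the four-letter DNA alphabet can carry at most. The sketch below is a minimal, hypothetical illustration of the retrieval idea, not the paper's implementation: a per-position plurality vote over a cluster of noisy reads stands in for the DNN reconstruction stage, and a confidence threshold plays the role of the safety margin, emitting erasures for a downstream error-correcting code to resolve. The function name `reconstruct_with_margin` and the pre-aligned, equal-length reads are assumptions for illustration.

```python
from collections import Counter

def reconstruct_with_margin(reads, margin=0.8):
    """Toy stand-in for the DNN reconstruction stage: per-position
    plurality vote over a cluster of noisy reads. Positions whose
    winning base falls below the confidence margin become erasures
    ('?'), which are cheaper for the downstream ECC to fix than
    undetected substitution errors. Assumes pre-aligned reads."""
    estimate = []
    for column in zip(*reads):
        base, votes = Counter(column).most_common(1)[0]
        estimate.append(base if votes / len(column) >= margin else "?")
    return "".join(estimate)

cluster = ["ACGTACGT", "ACGAACGT", "ACGTACGA", "ACGTACGT"]
print(reconstruct_with_margin(cluster))  # -> ACG?ACG?
```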
Related papers
- BiDense: Binarization for Dense Prediction [62.70804353158387]
BiDense is a generalized binary neural network (BNN) designed for efficient and accurate dense prediction tasks.
BiDense incorporates two key techniques: the Distribution-adaptive Binarizer (DAB) and the Channel-adaptive Full-precision Bypass (CFB).
arXiv Detail & Related papers (2024-11-15T16:46:04Z)
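As a rough, hypothetical illustration of distribution-adaptive binarization (BiDense's actual DAB differs in detail), one can center activations on their mean, take signs, and rescale by the mean absolute deviation; `dab` below is an assumed name.

```python
import numpy as np

def dab(x):
    """Toy distribution-adaptive binarizer: shift by the batch mean so
    the sign split adapts to the input distribution, then rescale the
    {-1, +1} output by the mean absolute deviation."""
    mu = x.mean()
    alpha = np.abs(x - mu).mean()  # data-dependent scale factor
    return alpha * np.sign(x - mu)

x = np.random.default_rng(0).normal(size=(4, 8))
print(dab(x))
```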
- SemAI: Semantic Artificial Intelligence-enhanced DNA storage for Internet-of-Things [9.858497777817522]
This paper introduces a Semantic Artificial Intelligence-enhanced DNA storage (SemAI-DNA) paradigm, distinguishing itself from prevalent deep learning-based methodologies.
Numerical results demonstrate the SemAI-DNA's efficacy, attaining a 2.61 dB Peak Signal-to-Noise Ratio (PSNR) gain and a 0.13 improvement in the Structural Similarity Index (SSIM) over conventional deep learning-based approaches.
arXiv Detail & Related papers (2024-09-18T12:21:58Z)
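For reference, the PSNR figure quoted above is defined as 10*log10(MAX^2 / MSE), so a 2.61 dB gain corresponds to roughly a 45% reduction in reconstruction MSE; a minimal implementation:

```python
import numpy as np

def psnr(reference, reconstruction, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference image and
    its reconstruction; higher means a more faithful retrieval."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val**2 / mse)
```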
- Implicit Neural Multiple Description for DNA-based data storage [6.423239719448169]
DNA exhibits remarkable potential as a data storage solution due to its impressive storage density and long-term stability.
However, developing this novel medium comes with its own set of challenges, particularly in addressing errors arising from storage and biological manipulations.
We have pioneered a novel compression scheme and a cutting-edge Multiple Description Coding (MDC) technique utilizing neural networks for DNA data storage.
arXiv Detail & Related papers (2023-09-13T13:42:52Z)
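A toy version of the multiple-description idea (not the paper's neural scheme): split a signal into two descriptions so that losing either one, e.g. when its DNA strands fail to decode, still permits a coarse reconstruction.

```python
import numpy as np

signal = np.sin(np.linspace(0, 2 * np.pi, 16))
desc_even, desc_odd = signal[0::2], signal[1::2]  # two descriptions

# Suppose only the even description survives: approximate the missing
# odd samples by averaging their neighbors (the last sample wraps).
recovered = np.empty_like(signal)
recovered[0::2] = desc_even
recovered[1::2] = (desc_even + np.roll(desc_even, -1)) / 2
print(np.max(np.abs(recovered - signal)))  # coarse but usable
```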
- Low-Energy Convolutional Neural Networks (CNNs) using Hadamard Method [0.0]
Convolutional neural networks (CNNs) are a potential approach for object recognition and detection.
A new approach based on the Hadamard transformation as an alternative to the convolution operation is demonstrated.
The method is helpful for other computer vision tasks when the kernel size is smaller than the input image size.
arXiv Detail & Related papers (2022-09-06T21:36:57Z)
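The workhorse behind such Hadamard-based layers is the fast Walsh-Hadamard transform, which needs only O(n log n) additions and subtractions and no multiplications, hence the energy savings; a minimal sketch of the transform itself, independent of the paper's exact architecture:

```python
import numpy as np

def fwht(x):
    """Fast Walsh-Hadamard transform using only additions/subtractions.
    Length of x must be a power of two; fwht(fwht(x)) / len(x) == x."""
    x = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

v = np.array([1.0, 0.0, 1.0, 0.0])
print(fwht(v))                 # [2. 2. 0. 0.]
print(fwht(fwht(v)) / len(v))  # recovers v
```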
- Dynamic Network-Assisted D2D-Aided Coded Distributed Learning [59.29409589861241]
We propose a novel device-to-device (D2D)-aided coded federated learning method (D2D-CFL) for load balancing across devices.
We derive an optimal compression rate for achieving minimum processing time and establish its connection with the convergence time.
Our proposed method is beneficial for real-time collaborative applications, where the users continuously generate training data.
arXiv Detail & Related papers (2021-11-26T18:44:59Z)
- Distribution-sensitive Information Retention for Accurate Binary Neural Network [49.971345958676196]
We present a novel Distribution-sensitive Information Retention Network (DIR-Net) to retain the information of the forward activations and backward gradients.
Our DIR-Net consistently outperforms the SOTA binarization approaches under mainstream and compact architectures.
We deploy DIR-Net on real-world resource-limited devices, achieving an 11.1 times storage saving and a 5.4 times speedup.
arXiv Detail & Related papers (2021-09-25T10:59:39Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
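The flavor of such a decomposition (the paper's exact scheme differs): an M-bit weight taking odd integer levels can be written as a weighted sum of M binary {-1, +1} tensors, turning one quantized layer into M binary branches that can use fast bitwise kernels. The helper `decompose` below is hypothetical.

```python
import numpy as np

def decompose(q, M):
    """Split odd quantization levels q in {-(2^M - 1), ..., 2^M - 1}
    into M binary {-1, +1} arrays B_i with q = sum_i 2**i * B_i."""
    u = (q + (2**M - 1)) // 2            # map levels onto 0 .. 2^M - 1
    return [2 * ((u >> i) & 1) - 1 for i in range(M)]

q = np.array([[-3, 1], [3, -1]])         # 2-bit quantized weights
branches = decompose(q, M=2)             # two {-1, +1} matrices
assert (sum((2**i) * b for i, b in enumerate(branches)) == q).all()
```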
- A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC [0.40690419770123604]
A neural network autoencoder model can be implemented in a radiation tolerant ASIC to perform lossy data compression.
This is the first radiation tolerant on-detector ASIC implementation of a neural network that has been designed for particle physics applications.
arXiv Detail & Related papers (2021-05-04T18:06:23Z)
- Efficient approximation of DNA hybridisation using deep learning [0.0]
We present the first comprehensive study of machine learning methods applied to the task of predicting DNA hybridisation.
We introduce a synthetic hybridisation dataset of over 2.5 million data points, enabling the use of a wide range of machine learning algorithms.
arXiv Detail & Related papers (2021-02-19T19:23:49Z)
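Models for this task need fixed-length inputs; below is a hypothetical k-mer count featurizer of the kind such predictors are commonly built on (the study itself benchmarks many learning algorithms on its dataset):

```python
from itertools import product

def kmer_counts(seq, k=3):
    """Map a DNA sequence to a fixed-length 4**k vector of k-mer
    counts, a standard featurization for sequence-level prediction."""
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    counts = [0] * len(index)
    for i in range(len(seq) - k + 1):
        counts[index[seq[i : i + k]]] += 1
    return counts

# Featurize a candidate sequence pair for a downstream classifier.
features = kmer_counts("ACGTACGGTAC") + kmer_counts("GTACCGTACGT")
```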
- Neural Network Compression for Noisy Storage Devices [71.4102472611862]
Conventionally, model compression and physical storage are decoupled.
This approach forces the storage to treat each bit of the compressed model equally, and to dedicate the same amount of resources to each bit.
We propose a radically different approach that (i) employs analog memories to maximize the capacity of each memory cell, and (ii) jointly optimizes model compression and physical storage to maximize memory utility.
arXiv Detail & Related papers (2021-02-15T18:19:07Z)
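A toy version of the resource-allocation idea (the paper's joint optimization is far richer): spending k analog cells on one weight and averaging the reads cuts the noise variance by a factor of k, so important weights can be granted more cells than unimportant ones. The noise level and helper names below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 0.1  # assumed per-cell analog readout noise (std)

def store_and_read(w, cells):
    """Write each weight into `cells` analog cells and average at read
    time; the effective noise std shrinks by sqrt(cells)."""
    reads = w[:, None] + rng.normal(scale=SIGMA, size=(w.size, cells))
    return reads.mean(axis=1)

w = rng.normal(size=100_000)
for k in (1, 4, 16):
    mse = np.mean((store_and_read(w, k) - w) ** 2)
    print(f"cells={k:2d}  mse={mse:.5f}")  # falls roughly as SIGMA**2 / k
```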
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.