Deep Neural Network: An Efficient and Optimized Machine Learning Paradigm for Reducing Genome Sequencing Error
- URL: http://arxiv.org/abs/2010.03420v1
- Date: Tue, 6 Oct 2020 08:16:35 GMT
- Title: Deep Neural Network: An Efficient and Optimized Machine Learning Paradigm for Reducing Genome Sequencing Error
- Authors: Ferdinand Kartriku, Dr. Robert Sowah and Charles Saah
- Abstract summary: It has become known that most of the platforms used in the sequencing process produce significant errors.
Of the two main types of genome errors, substitutions and indels, our work focuses on correcting indels.
A deep learning approach was used to correct sequencing errors in the chosen dataset.
- Score: 27.84400682210533
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Genomic data is used in many fields, but it has become known that most of the platforms used in the sequencing process produce significant errors. This means that the analyses and inferences generated from these data may contain errors that need to be corrected. Of the two main types of genome errors, substitutions and indels, our work focuses on correcting indels. A deep learning approach was used to correct sequencing errors in the chosen dataset.
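The abstract does not spell out the network architecture, but per-base indel correction can be sketched as a sequence-tagging problem. Below is a minimal, hypothetical PyTorch version: reads are one-hot encoded and a bidirectional LSTM predicts an edit operation (keep, delete, or insert a base) at each position. The layer sizes and the edit-operation label set are illustrative assumptions, not the paper's published model.

```python
# Hypothetical sketch of a deep indel-correction model; the architecture,
# label set, and encoding below are illustrative assumptions.
import torch
import torch.nn as nn

BASES = "ACGT"
EDIT_OPS = ["keep", "delete", "insert_A", "insert_C", "insert_G", "insert_T"]

def one_hot(read: str) -> torch.Tensor:
    """Encode a DNA read as a (length, 4) one-hot tensor."""
    idx = torch.tensor([BASES.index(b) for b in read])
    return nn.functional.one_hot(idx, num_classes=4).float()

class IndelCorrector(nn.Module):
    """BiLSTM tagger: one edit operation predicted per base."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(4, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, len(EDIT_OPS))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.rnn(x)             # (batch, length, 2 * hidden)
        return self.head(out)            # (batch, length, num edit ops)

model = IndelCorrector()
read = one_hot("ACGTTGCA").unsqueeze(0)  # batch of one read
logits = model(read)                     # (1, 8, 6) per-base edit scores
print(logits.argmax(dim=-1))             # predicted edit op per position
```

Applying the predicted operations to the read would then yield the corrected sequence; training would require reads aligned to a trusted reference.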
Related papers
- Subtle Errors Matter: Preference Learning via Error-injected Self-editing [59.405145971637204]
We propose a novel preference learning framework called eRror-Injected Self-Editing (RISE).
RISE injects predefined subtle errors into partial tokens of correct solutions to construct hard pairs for error mitigation.
Experiments validate the effectiveness of RISE, with preference learning on Qwen2-7B-Instruct yielding notable improvements of 3.0% on GSM8K and 7.9% on MATH.
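A minimal sketch of how such error-injected pairs might be constructed, assuming a simple perturbation rule (flipping one digit in an otherwise correct solution); the actual RISE editing policy targets subtle, model-informed errors rather than this toy rule:

```python
# Sketch of RISE-style pair construction: inject a subtle error into a
# correct solution to obtain a (chosen, rejected) preference pair.  The
# digit-flip perturbation is an illustrative assumption.
import random
import re

def inject_subtle_error(solution: str, rng: random.Random) -> str:
    """Corrupt one numeric token so the solution stays fluent but wrong."""
    digits = list(re.finditer(r"\d", solution))
    if not digits:
        return solution
    m = rng.choice(digits)
    wrong = str((int(m.group()) + rng.randint(1, 9)) % 10)  # guaranteed different
    return solution[:m.start()] + wrong + solution[m.end():]

rng = random.Random(0)
correct = "12 apples split among 3 kids gives 12 / 3 = 4 apples each."
pair = {"chosen": correct, "rejected": inject_subtle_error(correct, rng)}
print(pair["rejected"])  # hard negative differing from `chosen` in one token
```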
arXiv Detail & Related papers (2024-10-09T07:43:38Z)
Parameter-tuning-free data entry error unlearning with adaptive selective synaptic dampening [51.34904967046097]
We introduce an extension to the selective synaptic dampening unlearning method that removes the need for parameter tuning.
We demonstrate the performance of this extension, adaptive selective synaptic dampening (ASSD), on various ResNet18 and Vision Transformer unlearning tasks.
The application of this approach is particularly compelling in industrial settings, such as supply chain management.
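The dampening step underlying this method can be sketched as follows, assuming squared-gradient (diagonal Fisher) importances. Note that ASSD's contribution is choosing the threshold and dampening strength adaptively; the fixed `lam` and `alpha` below are illustrative stand-ins.

```python
# Sketch of the selective-synaptic-dampening idea that ASSD builds on:
# weights whose importance on the forget set far exceeds their importance
# on retained data are scaled down.  `lam` and `alpha` are illustrative
# assumptions; ASSD sets them adaptively.
import torch

def importance(model, loader, loss_fn):
    """Diagonal Fisher-style importance: mean squared gradient per parameter."""
    imp = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                imp[n] += p.grad.detach() ** 2
    return {n: v / max(len(loader), 1) for n, v in imp.items()}

@torch.no_grad()
def dampen(model, forget_imp, retain_imp, lam=1.0, alpha=0.1):
    for n, p in model.named_parameters():
        mask = forget_imp[n] > lam * retain_imp[n]  # forget-specific weights
        p[mask] *= alpha                            # dampen rather than zero out
```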
arXiv Detail & Related papers (2024-02-06T14:04:31Z)
Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors [10.627020714408445]
We develop an episodic approach for learning GP models, such that an arbitrary tracking accuracy can be guaranteed.
The effectiveness of the derived theory is demonstrated in several simulations.
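A toy illustration of the episodic loop, assuming a 1-D regression stand-in for the unknown dynamics and scikit-learn's `GaussianProcessRegressor`: each episode adds data until the model error, which bounds the tracking error in the paper's analysis, falls below a target.

```python
# Episodic GP learning sketch: grow the training set episode by episode until
# the worst-case model error meets a tolerance.  The 1-D "dynamics" and the
# tolerance are illustrative assumptions, not the paper's control setup.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

f = lambda x: np.sin(3 * x)              # unknown dynamics component to learn
rng = np.random.default_rng(0)
X, y = np.empty((0, 1)), np.empty(0)
for episode in range(10):
    gp = GaussianProcessRegressor().fit(X, y) if len(y) else None
    xs = np.linspace(0, 2, 50).reshape(-1, 1)
    err = np.max(np.abs((gp.predict(xs) if gp else 0) - f(xs).ravel()))
    print(f"episode {episode}: max model error {err:.3f}")
    if err < 0.05:                        # accuracy target reached
        break
    x_new = rng.uniform(0, 2, (5, 1))     # data collected during the episode
    X = np.vstack([X, x_new])
    y = np.concatenate([y, f(x_new).ravel()])
```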
arXiv Detail & Related papers (2023-07-10T08:43:28Z)
Empirical Analysis of the AdaBoost's Error Bound [0.0]
This study empirically verified the error bound of the AdaBoost algorithm for both synthetic and real-world data.
The results show that the error bound holds up in practice, demonstrating its efficiency and importance to a variety of applications.
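The bound in question is the classic training-error bound, err ≤ ∏_t 2√(ε_t(1−ε_t)), where ε_t is the weighted error of round t. A small empirical check, assuming scikit-learn's AdaBoost implementation and a synthetic dataset:

```python
# Empirical check of AdaBoost's training-error bound using scikit-learn's
# per-round weighted errors.  The bound applies to the discrete (SAMME)
# boosting variant; the dataset choice is an illustrative assumption.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, y)

eps = np.asarray(clf.estimator_errors_)           # weighted error of each round
bound = np.cumprod(2.0 * np.sqrt(eps * (1.0 - eps)))
train_err = np.array([np.mean(p != y) for p in clf.staged_predict(X)])
for t in (0, len(eps) // 2, len(eps) - 1):
    print(f"round {t + 1}: train error {train_err[t]:.4f} vs bound {bound[t]:.4f}")
```

The training error should stay at or below the bound at every stage, and the bound decays geometrically whenever each weak learner beats random guessing.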
arXiv Detail & Related papers (2023-02-02T05:03:21Z)
Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
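A toy version of the objective, assuming a logistic-regression model where the Hessian is exact and cheap rather than Kronecker-factored: the Laplace approximation to the log marginal likelihood scores candidate hyperparameters, which mirrors how the paper scores candidate augmentations. The data and the two prior scales compared below are illustrative assumptions.

```python
# Laplace approximation to the log marginal likelihood (log evidence) for
# logistic regression:
#   log Z ~ log p(D, w_map) + (d/2) log 2*pi - (1/2) log det H,
# with H the Hessian of the negative log joint at the MAP.  All data and
# hyperparameter choices here are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + 0.3 * rng.normal(size=200) > 0).astype(float)

def laplace_log_evidence(X, y, prior_var):
    d = X.shape[1]
    def neg_log_joint(w):
        z = X @ w
        ll = y * z - np.logaddexp(0, z)       # Bernoulli log-likelihood
        lp = -0.5 * w @ w / prior_var          # Gaussian prior (up to const)
        return -(ll.sum() + lp)
    w = minimize(neg_log_joint, np.zeros(d)).x  # MAP estimate
    p = 1.0 / (1.0 + np.exp(-X @ w))
    H = X.T @ (X * (p * (1 - p))[:, None]) + np.eye(d) / prior_var
    prior_const = -0.5 * d * np.log(2 * np.pi * prior_var)  # prior normaliser
    return (-neg_log_joint(w) + prior_const
            + 0.5 * d * np.log(2 * np.pi)
            - 0.5 * np.linalg.slogdet(H)[1])

for v in (0.1, 10.0):                           # candidate hyperparameters
    print(f"prior variance {v}: log evidence {laplace_log_evidence(X, y, v):.1f}")
```

Because the score is a differentiable function of the hyperparameter, it can be optimized by gradient descent, which is the mechanism the paper exploits for augmentation parameters.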
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
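Why exact densities make this possible can be shown in a few lines: the Bayes error is E_x[1 − max_y p(y|x)], which is computable once the class-conditionals p(x|y) are known in closed form, as they are for a normalizing flow. The sketch below substitutes two 1-D Gaussians for the flow densities; that substitution is an illustrative assumption.

```python
# Monte Carlo estimate of the Bayes error from exact class-conditional
# densities.  Two Gaussians stand in for flow densities here.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N = 200_000
priors = np.array([0.5, 0.5])
classes = [norm(-1, 1), norm(+1, 1)]          # stand-ins for flow densities

labels = rng.choice(2, size=N, p=priors)      # sample from p(x) = sum_y pi_y p(x|y)
x = np.where(labels == 0,
             classes[0].rvs(size=N, random_state=rng),
             classes[1].rvs(size=N, random_state=rng))
joint = np.stack([pi * c.pdf(x) for pi, c in zip(priors, classes)])
posterior = joint / joint.sum(axis=0)
bayes_error = np.mean(1 - posterior.max(axis=0))
print(f"estimated Bayes error {bayes_error:.4f}")  # analytic value: Phi(-1) ~ 0.1587
```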
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
Tail-to-Tail Non-Autoregressive Sequence Prediction for Chinese Grammatical Error Correction [49.25830718574892]
We present a new framework named Tail-to-Tail (TtT) non-autoregressive sequence prediction.
It builds on the observation that most tokens are correct and can be conveyed directly from source to target, while the error positions can be estimated and corrected.
Experimental results on standard datasets, especially on the variable-length datasets, demonstrate the effectiveness of TtT in terms of sentence-level Accuracy, Precision, Recall, and F1-Measure.
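The non-autoregressive core can be sketched as a single parallel decoding pass, assuming a toy vocabulary and one Transformer encoder layer; the paper itself uses a pretrained encoder with a CRF layer to model output dependencies.

```python
# Minimal sketch of non-autoregressive correction: the source sentence is
# encoded once and every target token is predicted in parallel, so correct
# tokens are effectively copied and only erroneous positions change.  The
# tiny vocabulary and single layer are illustrative assumptions.
import torch
import torch.nn as nn

VOCAB, D = 100, 32
embed = nn.Embedding(VOCAB, D)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=1,
)
head = nn.Linear(D, VOCAB)               # one vocabulary distribution per position

src = torch.randint(0, VOCAB, (1, 12))   # a (possibly erroneous) sentence
logits = head(encoder(embed(src)))       # (1, 12, VOCAB), computed in parallel
corrected = logits.argmax(dim=-1)        # one decoding pass, no left-to-right loop
print(src.shape, corrected.shape)        # same length in, same length out
```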
arXiv Detail & Related papers (2021-06-03T05:56:57Z)
An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose a novel Robust Graph Convolutional Network for possibly erroneous single-view or multi-view data.
By incorporating extra layers via autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
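A minimal sketch of that recipe: an autoencoder cleans the (possibly erroneous) node features before a standard graph convolution propagates them. The layer sizes and toy graph are illustrative assumptions.

```python
# Autoencoder-fronted GCN layer: denoise features, then apply the usual
# normalized-adjacency graph convolution.  Sizes are illustrative assumptions.
import torch
import torch.nn as nn

def normalized_adj(A: torch.Tensor) -> torch.Tensor:
    """Symmetric normalization D^-1/2 (A + I) D^-1/2 used by GCNs."""
    A_hat = A + torch.eye(A.size(0))
    d = A_hat.sum(dim=1).rsqrt()
    return d[:, None] * A_hat * d[None, :]

class RobustGCNLayer(nn.Module):
    def __init__(self, in_dim: int, hid: int, out_dim: int):
        super().__init__()
        self.ae = nn.Sequential(                      # denoising autoencoder
            nn.Linear(in_dim, hid), nn.ReLU(), nn.Linear(hid, in_dim))
        self.w = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, A_norm):
        X_clean = self.ae(X)                          # handle feature errors first
        return torch.relu(A_norm @ self.w(X_clean))   # graph convolution

A = torch.tensor([[0., 1, 0], [1, 0, 1], [0, 1, 0]])  # 3-node path graph
X = torch.randn(3, 8)                                  # noisy node features
out = RobustGCNLayer(8, 16, 4)(X, normalized_adj(A))
print(out.shape)                                       # torch.Size([3, 4])
```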
arXiv Detail & Related papers (2021-03-27T04:47:59Z)
Correcting the Autocorrect: Context-Aware Typographical Error Correction via Training Data Augmentation [38.10429793534442]
We first draw on a small set of annotated data to compute spelling error statistics.
These are then used to introduce errors into substantially larger corpora.
We use this approach to create a set of English-language error detection and correction datasets.
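A compact sketch of the recipe, assuming a substitution-only error model estimated from a few aligned (typo, correction) pairs and replayed onto clean text; the seed pairs and injection rate below are illustrative assumptions.

```python
# Estimate character-substitution statistics from a tiny annotated set, then
# replay them onto clean text to synthesize training data.  The seed pairs
# and substitution-only error model are illustrative assumptions.
import random
from collections import Counter

annotated = [("teh", "the"), ("adn", "and"), ("hte", "the")]  # toy seed pairs

# 1. Collect substitution statistics from aligned same-length pairs.
subs = Counter()
for typo, gold in annotated:
    if len(typo) == len(gold):
        subs.update((g, t) for t, g in zip(typo, gold) if t != g)

# 2. Replay those errors onto a larger clean corpus.
def corrupt(text: str, rate: float, rng: random.Random) -> str:
    table = {}
    for (gold_ch, typo_ch), _ in subs.most_common():
        table.setdefault(gold_ch, typo_ch)   # keep most frequent typo per char
    return "".join(table.get(ch, ch) if rng.random() < rate else ch
                   for ch in text)

rng = random.Random(0)
print(corrupt("the cat and the hat", rate=0.3, rng=rng))
```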
arXiv Detail & Related papers (2020-05-03T18:08:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and accepts no responsibility for any consequences of its use.