SpanSeq: Similarity-based sequence data splitting method for improved
development and assessment of deep learning projects
- URL: http://arxiv.org/abs/2402.14482v2
- Date: Tue, 5 Mar 2024 12:02:46 GMT
- Title: SpanSeq: Similarity-based sequence data splitting method for improved
development and assessment of deep learning projects
- Authors: Alfred Ferrer Florensa, Jose Juan Almagro Armenteros, Henrik Nielsen,
Frank Møller Aarestrup, Philip Thomas Lanken Conradsen Clausen
- Abstract summary: We present SpanSeq, a database partition method for machine learning that can scale to most biological sequences.
We also explore the effect of not restraining similarity between sets by reproducing the development of the state-of-the-art model DeepLoc.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of deep learning models in computational biology has increased
massively in recent years, and is expected to do so further with the current
advances in fields like Natural Language Processing. These models, although
able to draw complex relations between input and target, are also largely
inclined to learn noisy deviations from the pool of data used during their
development. In order to assess their performance on unseen data (their
capacity to generalize), it is common to randomly split the available data into
development (train/validation) and test sets. This procedure, although
standard, has lately been shown to produce dubious assessments of
generalization due to the existing similarity between samples in the databases
used. In this work, we present SpanSeq, a database partition method for machine
learning that can scale to most biological sequences (genes, proteins and
genomes) in order to avoid data leakage between sets. We also explore the
effect of not restraining similarity between sets by reproducing the
development of the state-of-the-art model DeepLoc, not only confirming the
consequences of randomly splitting databases on the model assessment, but
expanding those repercussions to the model development. SpanSeq is available
for download and installation at
https://github.com/genomicepidemiology/SpanSeq.
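To make the core idea concrete, the sketch below shows similarity-based partitioning in its simplest form: sequences whose pairwise identity exceeds a threshold are grouped into one cluster, and whole clusters (never individual sequences) are then assigned to the development and test sets, so that near-identical sequences cannot straddle the split. This is a minimal, illustrative toy; the identity metric, the single-linkage clustering, the 0.8 threshold, and names such as cluster_by_similarity are assumptions made for the example and are not SpanSeq's actual algorithm. For the real method and scalable distance computation, see the repository linked above.

```python
from itertools import combinations


def identity(a: str, b: str) -> float:
    """Toy per-position identity between two sequences (real tools use alignments or k-mer distances)."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))


def cluster_by_similarity(seqs: dict[str, str], threshold: float = 0.8) -> list[set[str]]:
    """Single-linkage clustering via union-find: any pair at or above the threshold joins one cluster."""
    parent = {name: name for name in seqs}

    def find(x: str) -> str:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in combinations(seqs, 2):
        if identity(seqs[a], seqs[b]) >= threshold:
            parent[find(a)] = find(b)  # merge the two clusters

    clusters: dict[str, set[str]] = {}
    for name in seqs:
        clusters.setdefault(find(name), set()).add(name)
    return list(clusters.values())


def split_clusters(clusters: list[set[str]], fractions=(0.8, 0.1, 0.1)) -> list[set[str]]:
    """Greedily assign whole clusters (largest first) to splits, chasing the target sizes."""
    total = sum(len(c) for c in clusters)
    targets = [f * total for f in fractions]
    splits: list[set[str]] = [set() for _ in fractions]
    for cluster in sorted(clusters, key=len, reverse=True):
        # Place the cluster in the split that is currently furthest below its target size.
        i = max(range(len(splits)), key=lambda k: targets[k] - len(splits[k]))
        splits[i] |= cluster
    return splits


if __name__ == "__main__":
    toy = {
        "s1": "ATGCATGCAT", "s2": "ATGCATGCAA",  # near-identical pair: must end up in the same split
        "s3": "TTTTGGGGCC", "s4": "GACTGACTGA",
        "s5": "GACTGACTGC", "s6": "CCCCCCAAAA",
        "s7": "AAAATTTTGG", "s8": "GGGGAAAATT",
    }
    train, val, test = split_clusters(cluster_by_similarity(toy))
    print("train:", sorted(train))
    print("val:  ", sorted(val))
    print("test: ", sorted(test))
```

The key property is that assignment happens at the cluster level rather than the sample level, so any distance measure or clustering tool that captures biological similarity could be substituted for the toy identity function without changing the splitting logic.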
Related papers
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- FissionFusion: Fast Geometric Generation and Hierarchical Souping for Medical Image Analysis [0.7751705157998379]
The scarcity of well-annotated medical datasets requires leveraging transfer learning from broader datasets like ImageNet or pre-trained models like CLIP.
Model soups average multiple fine-tuned models, aiming to improve performance on In-Domain (ID) tasks and enhance robustness against Out-of-Distribution (OOD) datasets.
We propose a hierarchical merging approach that involves local and global aggregation of models at various levels.
arXiv Detail & Related papers (2024-03-20T06:48:48Z)
- Learning Discretized Bayesian Networks with GOMEA [0.0]
We extend an existing state-of-the-art structure learning approach to jointly learn variable discretizations.
We show how this enables incorporating expert knowledge in a uniquely insightful fashion, finding multiple DBNs that trade off complexity, accuracy, and the difference from a pre-determined expert network.
arXiv Detail & Related papers (2024-02-19T14:29:35Z)
- Deep Ensembles Meets Quantile Regression: Uncertainty-aware Imputation for Time Series [49.992908221544624]
Time series data often exhibit numerous missing values; filling them in is the time series imputation task.
Previous deep learning methods have been shown to be effective for time series imputation.
We propose a non-generative time series imputation method that produces accurate imputations with inherent uncertainty.
arXiv Detail & Related papers (2023-12-03T05:52:30Z)
- The Languini Kitchen: Enabling Language Modelling Research at Different Scales of Compute [66.84421705029624]
We introduce an experimental protocol that enables model comparisons based on equivalent compute, measured in accelerator hours.
We pre-process an existing large, diverse, and high-quality dataset of books that surpasses existing academic benchmarks in quality, diversity, and document length.
This work also provides two baseline models: a feed-forward model derived from the GPT-2 architecture and a recurrent model in the form of a novel LSTM with ten-fold throughput.
arXiv Detail & Related papers (2023-09-20T10:31:17Z)
- Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
arXiv Detail & Related papers (2023-05-28T05:38:28Z)
- VertiBayes: Learning Bayesian network parameters from vertically partitioned data with missing values [2.9707233220536313]
Federated learning makes it possible to train a machine learning model on decentralized data.
We propose a novel method called VertiBayes to train Bayesian networks on vertically partitioned data.
We experimentally show our approach produces models comparable to those learnt using traditional algorithms.
arXiv Detail & Related papers (2022-10-31T11:13:35Z)
- Learning from aggregated data with a maximum entropy model [73.63512438583375]
We show how a new model, similar to a logistic regression, may be learned from aggregated data only by approximating the unobserved feature distribution with a maximum entropy hypothesis.
We present empirical evidence on several public datasets that the model learned this way can achieve performances comparable to those of a logistic model trained with the full unaggregated data.
arXiv Detail & Related papers (2022-10-05T09:17:27Z)
- Bayesian predictive modeling of multi-source multi-way data [0.0]
We consider molecular data from multiple 'omics sources as predictors of early-life iron deficiency (ID) in a rhesus monkey model.
We use a linear model with a low-rank structure on the coefficients to capture multi-way dependence.
We show that our model performs as expected in terms of misclassification rates and correlation of estimated coefficients with true coefficients.
arXiv Detail & Related papers (2022-08-05T21:58:23Z)
- Empirical evaluation of shallow and deep learning classifiers for Arabic sentiment analysis [1.1172382217477126]
This work presents a detailed comparison of the performance of deep learning models for sentiment analysis of Arabic reviews.
The datasets used in this study are multi-dialect Arabic hotel and book review datasets, which are some of the largest publicly available datasets for Arabic reviews.
Results showed deep learning outperforming shallow learning for binary and multi-label classification, in contrast with the results of similar work reported in the literature.
arXiv Detail & Related papers (2021-12-01T14:45:43Z)
- On the Discrepancy between Density Estimation and Sequence Generation [92.70116082182076]
Log-likelihood is highly correlated with BLEU when we consider models within the same family.
We observe no correlation between rankings of models across different families.
arXiv Detail & Related papers (2020-02-17T20:13:35Z)