Approximate kNN Classification for Biomedical Data
- URL: http://arxiv.org/abs/2012.02149v1
- Date: Thu, 3 Dec 2020 18:30:43 GMT
- Title: Approximate kNN Classification for Biomedical Data
- Authors: Panagiotis Anagnostou, Petros T. Barmpas, Aristidis G. Vrahatis and
Sotiris K. Tasoulis
- Abstract summary: Single-cell RNA-seq (scRNA-seq) is an emerging DNA sequencing technology with promising capabilities but significant computational challenges.
We propose the utilization of approximate nearest neighbor search algorithms for the task of kNN classification in scRNA-seq data.
- Score: 1.1852406625172218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We are in an era where Big Data analytics has changed the way
various biomedical phenomena are interpreted, and as the generated data
increase, so does the need for new machine learning methods able to handle
this growth. An indicative example is single-cell RNA-seq (scRNA-seq), an
emerging DNA sequencing technology with promising capabilities but
significant computational challenges due to the large scale of the generated
data. For the classification of scRNA-seq data, the k Nearest Neighbor (kNN)
classifier is an appropriate choice, since it is commonly used for
large-scale prediction tasks owing to its simplicity, minimal
parameterization, and model-free nature. However, the ultra-high
dimensionality that characterizes scRNA-seq imposes a computational
bottleneck, while prediction power can be affected by the "Curse of
Dimensionality". In this work, we propose the utilization of approximate
nearest neighbor search algorithms for the task of kNN classification in
scRNA-seq data, focusing on a particular methodology tailored for
high-dimensional data. We argue that even relaxed approximate solutions will
not affect the prediction performance significantly. The experimental results
confirm this assumption and offer the potential for broader applicability.
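To make the idea concrete, the sketch below contrasts exact kNN classification in the original high-dimensional space with an approximate variant that searches for neighbors in a low-dimensional Gaussian random projection of the data. This is a minimal, hypothetical illustration: the random projection is a generic stand-in for an approximate nearest neighbor scheme, not the specific methodology evaluated in the paper, and the synthetic data merely mimics scRNA-seq dimensionality.

```python
# Minimal sketch: exact vs. approximate kNN classification on synthetic
# high-dimensional data that mimics an scRNA-seq expression matrix.
# The Gaussian random projection is an illustrative stand-in for an
# approximate nearest neighbor index, not the method used in the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.random_projection import GaussianRandomProjection
from sklearn.metrics import accuracy_score

# Synthetic "ultra-high-dimensional" data: 2,000 cells x 5,000 genes.
X, y = make_classification(n_samples=2000, n_features=5000,
                           n_informative=50, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Exact kNN in the original space: the computational bottleneck at scale.
exact = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
acc_exact = accuracy_score(y_te, exact.predict(X_te))

# Approximate variant: search for neighbors in a 64-dimensional random
# projection, trading a little accuracy for a much cheaper search.
proj = GaussianRandomProjection(n_components=64, random_state=0).fit(X_tr)
approx = KNeighborsClassifier(n_neighbors=5).fit(proj.transform(X_tr), y_tr)
acc_approx = accuracy_score(y_te, approx.predict(proj.transform(X_te)))

print(f"exact kNN accuracy:       {acc_exact:.3f}")
print(f"approximate kNN accuracy: {acc_approx:.3f}")
```

On data of this kind the accuracy gap between the exact and approximate classifiers is typically small, which is the behavior the abstract argues for, while the neighbor search in the projected space is far cheaper.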
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z) - Minimally Supervised Learning using Topological Projections in
Self-Organizing Maps [55.31182147885694]
We introduce a semi-supervised learning approach based on topological projections in self-organizing maps (SOMs).
Our proposed method first trains SOMs on unlabeled data and then assigns a minimal number of available labeled data points to key best matching units (BMUs).
Our results indicate that the proposed minimally supervised model significantly outperforms traditional regression techniques.
arXiv Detail & Related papers (2024-01-12T22:51:48Z) - Splicing Up Your Predictions with RNA Contrastive Learning [4.35360799431127]
We extend contrastive learning techniques to genomic data by utilizing similarities between functional sequences generated through alternative splicing and gene duplication.
We validate their utility on downstream tasks such as RNA half-life and mean ribosome load prediction.
Our exploration of the learned latent space reveals that our contrastive objective yields semantically meaningful representations.
arXiv Detail & Related papers (2023-10-12T21:51:25Z) - scHyena: Foundation Model for Full-Length Single-Cell RNA-Seq Analysis
in Brain [46.39828178736219]
We introduce scHyena, a foundation model designed to address these challenges and enhance the accuracy of scRNA-seq analysis in the brain.
scHyena is equipped with a linear adaptor layer, positional encoding via gene embeddings, and a bidirectional Hyena operator.
This enables us to process full-length scRNA-seq data without losing any information from the raw data.
arXiv Detail & Related papers (2023-10-04T10:30:08Z) - IB-UQ: Information bottleneck based uncertainty quantification for
neural function regression and neural operator learning [11.5992081385106]
We propose a novel framework for uncertainty quantification via information bottleneck (IB-UQ) for scientific machine learning tasks.
We incorporate the bottleneck via a confidence-aware encoder, which encodes inputs into latent representations according to the confidence of the input data.
We also propose a data augmentation based information bottleneck objective which can enhance the quality of the extrapolation uncertainty.
arXiv Detail & Related papers (2023-02-07T05:56:42Z) - Optirank: classification for RNA-Seq data with optimal ranking reference
genes [0.0]
We propose a logistic regression model, optirank, which simultaneously learns the parameters of the model and the genes to use as a reference set in the ranking.
We also consider real classification tasks, which present different kinds of distribution shifts between train and test data.
arXiv Detail & Related papers (2023-01-11T10:49:06Z) - Invariance Learning in Deep Neural Networks with Differentiable Laplace
Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z) - Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z) - Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
arXiv Detail & Related papers (2021-04-11T16:37:32Z) - A Systematic Approach to Featurization for Cancer Drug Sensitivity
Predictions with Deep Learning [49.86828302591469]
We train >35,000 neural network models, sweeping over common featurization techniques.
We found RNA-seq data to be highly redundant and informative even with subsets larger than 128 features.
arXiv Detail & Related papers (2020-04-30T20:42:17Z)