A Tutorial on the Mathematical Model of Single Cell Variational
Inference
- URL: http://arxiv.org/abs/2101.00650v1
- Date: Sun, 3 Jan 2021 16:02:36 GMT
- Title: A Tutorial on the Mathematical Model of Single Cell Variational
Inference
- Authors: Songting Shi
- Abstract summary: This tutorial introduces the mathematical model of single-cell variational inference (scVI).
It is written for beginners in a simple and intuitive way, with many derivation details, to encourage more researchers into this field.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As large amounts of sequencing data have accumulated over the past
decades and continue to accumulate, we need to handle more and more sequencing
data. With the fast development of computing technologies, we can now process
large amounts of data in a reasonable time using neural-network-based models.
This tutorial introduces the mathematical model of single-cell variational
inference (scVI), which uses a variational autoencoder (built on neural
networks) to learn the distribution of the data and gain insights from it.
It is written for beginners in a simple and intuitive way, with many
derivation details, to encourage more researchers into this field.
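To make the idea concrete, below is a minimal sketch of an scVI-style variational autoencoder for count data. It assumes a PyTorch implementation with a negative-binomial likelihood; the class name CountVAE, the layer sizes, and the single shared dispersion parameter are illustrative assumptions, not the exact scVI architecture.

# A minimal sketch of a variational autoencoder for single-cell count data,
# in the spirit of scVI. Hypothetical simplification: one hidden layer per
# network and a negative-binomial (not zero-inflated) likelihood.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CountVAE(nn.Module):
    def __init__(self, n_genes: int, n_latent: int = 10, n_hidden: int = 128):
        super().__init__()
        # Encoder q(z|x): maps log-counts to mean and log-variance of z.
        self.encoder = nn.Sequential(nn.Linear(n_genes, n_hidden), nn.ReLU())
        self.z_mean = nn.Linear(n_hidden, n_latent)
        self.z_logvar = nn.Linear(n_hidden, n_latent)
        # Decoder p(x|z): outputs per-gene expression proportions; a shared
        # inverse-dispersion theta parameterizes the negative binomial.
        self.decoder = nn.Sequential(nn.Linear(n_latent, n_hidden), nn.ReLU(),
                                     nn.Linear(n_hidden, n_genes))
        self.log_theta = nn.Parameter(torch.zeros(n_genes))

    def forward(self, x: torch.Tensor):
        h = self.encoder(torch.log1p(x))
        mu, logvar = self.z_mean(h), self.z_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Scale decoded proportions by each cell's observed library size
        # (cells are assumed to have at least one nonzero count).
        library = x.sum(dim=1, keepdim=True)
        rate = F.softmax(self.decoder(z), dim=1) * library
        return rate, mu, logvar

    def loss(self, x: torch.Tensor) -> torch.Tensor:
        rate, mu, logvar = self(x)
        theta = torch.exp(self.log_theta)
        # Negative-binomial log-likelihood of the raw counts.
        log_nb = (torch.lgamma(x + theta) - torch.lgamma(theta)
                  - torch.lgamma(x + 1.0)
                  + theta * torch.log(theta / (theta + rate))
                  + x * torch.log(rate / (theta + rate)))
        # KL divergence between q(z|x) and the standard normal prior.
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
        # Negative ELBO, averaged over the minibatch of cells.
        return (kl - log_nb.sum(dim=1)).mean()

# Usage on synthetic counts: 64 cells, 200 genes.
# x = torch.randint(1, 20, (64, 200)).float()
# model = CountVAE(n_genes=200)
# print(model.loss(x).item())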
Related papers
- A Dynamical Model of Neural Scaling Laws [79.59705237659547]
We analyze a random feature model trained with gradient descent as a solvable model of network training and generalization.
Our theory shows how the gap between training and test loss can gradually build up over time due to repeated reuse of data.
arXiv Detail & Related papers (2024-02-02T01:41:38Z)
- Learning Active Subspaces and Discovering Important Features with Gaussian Radial Basis Functions Neural Networks [0.0]
We show that precious information is contained in the spectrum of the precision matrix that can be extracted once the training of the model is completed.
We conducted numerical experiments for regression, classification, and feature selection tasks.
Our results demonstrate that the proposed model yields attractive prediction performance compared to its competitors.
arXiv Detail & Related papers (2023-07-11T09:54:30Z)
- On Inductive Biases for Machine Learning in Data Constrained Settings [0.0]
This thesis explores a different answer to the problem of learning expressive models in data constrained settings.
Instead of relying on big datasets to learn neural networks, we will replace some modules by known functions reflecting the structure of the data.
Our approach falls under the umbrella of "inductive biases", which can be defined as hypotheses about the data at hand that restrict the space of models to explore.
arXiv Detail & Related papers (2023-02-21T14:22:01Z)
- An Information-Theoretic Analysis of Compute-Optimal Neural Scaling Laws [24.356906682593532]
We study the compute-optimal trade-off between model and training data set sizes for large neural networks.
Our result suggests a linear relation similar to that supported by the empirical analysis of Chinchilla.
arXiv Detail & Related papers (2022-12-02T18:46:41Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- Generative time series models using Neural ODE in Variational Autoencoders [0.0]
We implement Neural Ordinary Differential Equations in a Variational Autoencoder setting for generative time series modeling.
An object-oriented approach to the code was taken to allow for easier development and research.
arXiv Detail & Related papers (2022-01-12T14:38:11Z)
- A Light-weight Interpretable Compositional Network for Nuclei Detection and Weakly-supervised Segmentation [10.196621315018884]
Deep neural networks usually require large numbers of annotated data to train vast parameters.
We propose to build a data-efficient model, which only requires partial annotation, specifically on isolated nuclei.
arXiv Detail & Related papers (2021-10-26T16:44:08Z)
- Reasoning-Modulated Representations [85.08205744191078]
We study a common setting where our task is not purely opaque.
Our approach paves the way for a new class of data-efficient representation learning.
arXiv Detail & Related papers (2021-07-19T13:57:13Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms [58.684954492439424]
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model-under-test using a Bayesian neural network (BNN).
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- Category-Learning with Context-Augmented Autoencoder [63.05016513788047]
Finding an interpretable non-redundant representation of real-world data is one of the key problems in Machine Learning.
We propose a novel method of using data augmentations when training autoencoders.
We train a Variational Autoencoder in such a way that the transformation outcome is predictable by an auxiliary network.
arXiv Detail & Related papers (2020-10-10T14:04:44Z)
- Generalized Leverage Score Sampling for Neural Networks [82.95180314408205]
Leverage score sampling is a powerful technique that originates from theoretical computer science.
In this work, we generalize the results in [Avron, Kapralov, Musco, Musco, Velingker and Zandieh 17] to a broader class of kernels.
arXiv Detail & Related papers (2020-09-21T14:46:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.