A Kernel-Based Neural Network Test for High-dimensional Sequencing Data Analysis
- URL: http://arxiv.org/abs/2312.02850v2
- Date: Wed, 6 Dec 2023 04:42:58 GMT
- Title: A Kernel-Based Neural Network Test for High-dimensional Sequencing Data Analysis
- Authors: Tingting Hou, Chang Jiang and Qing Lu
- Abstract summary: We introduce a new kernel-based neural network (KNN) test for complex association analysis of sequencing data.
Based on KNN, a Wald-type test is then introduced to evaluate the joint association of high-dimensional genetic data with a disease phenotype of interest.
- Score: 0.8221435109014762
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent development of artificial intelligence (AI) technology, especially the advance of deep neural network (DNN) technology, has revolutionized many fields. While DNNs play a central role in modern AI technology, they have rarely been used in sequencing data analysis due to challenges brought by high-dimensional sequencing data (e.g., overfitting). Moreover, due to the complexity of neural networks and their unknown limiting distributions, building association tests on neural networks for genetic association analysis remains a great challenge. To address these challenges and fill the important gap of using AI in high-dimensional sequencing data analysis, we introduce a new kernel-based neural network (KNN) test for complex association analysis of sequencing data. The test is built on our previously developed KNN framework, which uses random effects to model the overall effects of high-dimensional genetic data and adopts kernel-based neural network structures to model complex genotype-phenotype relationships. Based on KNN, a Wald-type test is then introduced to evaluate the joint association of high-dimensional genetic data with a disease phenotype of interest, considering non-linear and non-additive effects (e.g., interaction effects). Through simulations, we demonstrate that our proposed method attains higher power than the sequence kernel association test (SKAT), especially in the presence of non-linear and interaction effects. Finally, we apply the methods to the whole genome sequencing (WGS) dataset from the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, investigating new genes associated with hippocampal volume change over time.
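To make the shape of such a test concrete, below is a minimal sketch of a Wald-type variance-component test under a single linear genetic kernel. It substitutes a simple method-of-moments (Haseman-Elston style) estimator for the paper's KNN machinery, so the helper `wald_kernel_test`, the moment estimator, and the one-sided normal reference are illustrative assumptions, not the authors' procedure.

```python
# A minimal sketch, assuming the working model y = mu + g + e with
# g ~ N(0, sigma_g^2 * K) and e ~ N(0, sigma_e^2 * I); this is NOT the
# authors' KNN estimator, just a method-of-moments stand-in.
import numpy as np
from scipy.stats import norm

def wald_kernel_test(y, G):
    """y: (n,) phenotype; G: (n, p) standardized genotype matrix."""
    n, p = G.shape
    K = G @ G.T / p                          # linear (product) kernel
    y = y - y.mean()                         # remove the grand mean

    # Regress the off-diagonal products y_i * y_j on K_ij; under the model
    # E[y_i y_j] = sigma_g^2 * K_ij for i != j, so the slope estimates sigma_g^2.
    iu = np.triu_indices(n, k=1)
    x, z = K[iu], np.outer(y, y)[iu]
    sigma_g2 = (x * z).sum() / (x * x).sum()

    # Rough standard error from the residuals of that no-intercept regression
    # (pairs are correlated, so this is only a first-order approximation).
    resid = z - sigma_g2 * x
    se = np.sqrt((resid ** 2).sum() / (len(x) - 1) / (x * x).sum())

    w = sigma_g2 / se                        # Wald-type statistic
    return sigma_g2, w, norm.sf(w)           # one-sided test of sigma_g^2 > 0

# Null example: no genetic signal, so the p-value should be roughly uniform.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 500))
y = rng.standard_normal(200)
print(wald_kernel_test(y, G))
```

In the paper's setting the kernel would instead come from the KNN's random-effect structure, and the null distribution needs more care because the variance component sits on the boundary of its parameter space under the null.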
Related papers
- An Association Test Based on Kernel-Based Neural Networks for Complex Genetic Association Analysis [0.8221435109014762]
We develop a kernel-based neural network model (KNN) that synergizes the strengths of linear mixed models with conventional neural networks.
A MINQUE-based test assesses the joint association of genetic variants with the phenotype (a minimal MINQUE sketch appears after this list).
Two additional tests evaluate and interpret linear and non-linear/non-additive genetic effects.
arXiv Detail & Related papers (2023-12-06T05:02:28Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- A Sieve Quasi-likelihood Ratio Test for Neural Networks with Applications to Genetic Association Studies [0.32771631221674324]
We propose a sieve quasi-likelihood ratio test based on a neural network with one hidden layer for testing complex associations.
The validity of the asymptotic distribution is investigated via simulations.
We demonstrate the use of the proposed test by performing a genetic association analysis of the sequencing data from the Alzheimer's Disease Neuroimaging Initiative (ADNI).
arXiv Detail & Related papers (2022-12-16T02:54:46Z)
- Asymptotic-Preserving Neural Networks for hyperbolic systems with diffusive scaling [0.0]
We show how Asymptotic-Preserving Neural Networks (APNNs) provide considerably better results across the different scales of the problem when compared with standard DNNs and PINNs.
arXiv Detail & Related papers (2022-10-17T13:30:34Z)
- Physically constrained neural networks to solve the inverse problem for neuron models [0.29005223064604074]
Systems biology and systems neurophysiology are powerful tools for a number of key applications in the biomedical sciences.
Recent developments in the field of deep neural networks have demonstrated the possibility of formulating nonlinear, universal approximators.
arXiv Detail & Related papers (2022-09-24T12:51:15Z)
- Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures but is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of the NTK.
arXiv Detail & Related papers (2022-09-16T06:36:06Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- SIT: A Bionic and Non-Linear Neuron for Spiking Neural Network [12.237928453571636]
Spiking Neural Networks (SNNs) have piqued researchers' interest because of their capacity to process temporal information and their low power consumption.
Current state-of-the-art methods limit biological plausibility and performance because their neurons are generally built on the simple Leaky Integrate-and-Fire (LIF) model (a minimal LIF sketch appears after this list).
Because of their high dynamic complexity, modern neuron models have seldom been implemented in SNN practice.
arXiv Detail & Related papers (2022-03-30T07:50:44Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent (a toy sketch of this game appears after this list).
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Measuring Model Complexity of Neural Networks with Curve Activation Functions [100.98319505253797]
We propose the linear approximation neural network (LANN) to approximate a given deep model with curve activation functions.
We experimentally explore the training process of neural networks and detect overfitting.
We find that $L_1$ and $L_2$ regularization suppress the increase of model complexity.
arXiv Detail & Related papers (2020-06-16T07:38:06Z)
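For the MINQUE-based test mentioned in the first related entry, here is a minimal MINQUE(0) sketch for two variance components (one genetic kernel plus noise); `minque0` is a hypothetical helper written for illustration, not code from either paper.

```python
# MINQUE(0) sketch: solve S * theta = q, where S_kl = tr(R V_k R V_l) and
# q_k = y' R V_k R y, with R the residual projector for the fixed effects.
import numpy as np

def minque0(y, X, Vs):
    """y: (n,), X: (n, q) fixed-effect design, Vs: list of (n, n) covariance parts."""
    n = len(y)
    R = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)   # I - X(X'X)^{-1}X'
    RV = [R @ V @ R for V in Vs]
    S = np.array([[np.trace(A @ B) for B in RV] for A in RV])
    q = np.array([y @ A @ y for A in RV])
    return np.linalg.solve(S, q)   # unbiased, but can be negative in small samples

# Usage: variance explained by a linear genetic kernel K plus residual noise.
rng = np.random.default_rng(1)
n, p = 150, 300
G = rng.standard_normal((n, p))
K = G @ G.T / p
X = np.ones((n, 1))                          # intercept only
y = rng.standard_normal(n)                   # null phenotype
print(minque0(y, X, [K, np.eye(n)]))         # [sigma_g^2, sigma_e^2] estimates
```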
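The SIT entry contrasts richer neuron models with the Leaky Integrate-and-Fire baseline; for reference, this is the textbook LIF update it refers to, with illustrative parameter values (the input resistance is folded into the drive term).

```python
# Textbook LIF dynamics: tau * dV/dt = -(V - V_rest) + I, spike and reset at V_th.
import numpy as np

def lif(I, dt=1e-4, tau=0.02, v_rest=-0.065, v_th=-0.050, v_reset=-0.065):
    """Simulate membrane voltage for a drive trace I (volts; R*I folded together)."""
    v = np.full(len(I), v_rest)
    spikes = []
    for t in range(1, len(I)):
        v[t] = v[t - 1] + (-(v[t - 1] - v_rest) + I[t - 1]) * dt / tau
        if v[t] >= v_th:             # threshold crossing
            spikes.append(t)
            v[t] = v_reset           # instantaneous reset
    return v, spikes

# Constant supra-threshold drive yields regular spiking.
I = np.full(5000, 0.020)             # 20 mV of effective drive for 0.5 s
v, spikes = lif(I)
print(f"{len(spikes)} spikes in {len(I) * 1e-4:.1f} s")
```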
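And for the adversarial SEM entry, below is a toy gradient descent-ascent sketch of a min-max moment objective of the form min_f max_g E[2 g(Z)(Y - f(X)) - g(Z)^2], with both players as one-hidden-layer networks; the instrumental-variable toy data and this particular objective are assumptions for illustration, not necessarily the paper's exact formulation.

```python
# Toy adversarial estimation of a structural equation Y = f(X) + U with an
# instrument Z; f is the learner, g is the adversarial test function.
import torch
import torch.nn as nn

def mlp():
    # one-hidden-layer network for each player
    return nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

torch.manual_seed(0)
n = 2000
Z = torch.randn(n, 1)                       # instrument
U = torch.randn(n, 1)                       # unobserved confounder
X = Z + 0.5 * U + 0.1 * torch.randn(n, 1)   # endogenous regressor
Y = 2.0 * X + U                             # structural equation (true slope 2)

f, g = mlp(), mlp()                         # learner and critic
opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)

for _ in range(2000):
    # critic g ascends the objective (maximize, so minimize its negative)
    loss_g = -(2 * g(Z) * (Y - f(X).detach()) - g(Z) ** 2).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # learner f descends the objective given the current critic
    loss_f = (2 * g(Z).detach() * (Y - f(X))).mean()
    opt_f.zero_grad(); loss_f.backward(); opt_f.step()

# The fitted slope should move toward the structural slope of 2.
x0, x1 = torch.zeros(1, 1), torch.ones(1, 1)
print((f(x1) - f(x0)).item())
```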