Locally Adaptive Algorithms for Multiple Testing with Network Structure,
with Application to Genome-Wide Association Studies
- URL: http://arxiv.org/abs/2203.11461v4
- Date: Wed, 16 Aug 2023 22:54:54 GMT
- Title: Locally Adaptive Algorithms for Multiple Testing with Network Structure,
with Application to Genome-Wide Association Studies
- Authors: Ziyi Liang, T. Tony Cai, Wenguang Sun, Yin Xia
- Abstract summary: We propose a principled and generic framework for incorporating network data or multiple samples of auxiliary data from related source domains.
LASLA employs a $p$-value weighting approach, utilizing structural insights to assign data-driven weights to individual test points.
LASLA is illustrated through various synthetic experiments and an application to T2D-associated SNP identification.
- Score: 4.851566905442038
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Linkage analysis has provided valuable insights into GWAS studies,
particularly in revealing that SNPs in linkage disequilibrium (LD) can jointly
influence disease phenotypes. However, the potential of LD network data has
often been overlooked or underutilized in the literature. In this paper, we
propose a locally adaptive structure learning algorithm (LASLA) that provides a
principled and generic framework for incorporating network data or multiple
samples of auxiliary data from related source domains, possibly in different
dimensions/structures and from diverse populations. LASLA employs a $p$-value
weighting approach, utilizing structural insights to assign data-driven weights
to individual test points. Theoretical analysis shows that LASLA can
asymptotically control FDR with independent or weakly dependent primary
statistics, and achieve higher power when the network data is informative.
The efficiency gain of LASLA is illustrated through various synthetic experiments
and an application to T2D-associated SNP identification.
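The p-value weighting mechanism can be pictured with a generic weighted Benjamini-Hochberg step-up rule: divide each p-value by its weight and apply the usual BH thresholds. The Python sketch below shows only this generic mechanism; the function name weighted_bh and its inputs are hypothetical, the non-negative weights are assumed to be already derived from the LD/network data and normalized to average one, and LASLA's actual data-driven weight construction and theoretical thresholds are not reproduced here.

    import numpy as np

    def weighted_bh(pvals, weights, alpha=0.05):
        """Generic weighted Benjamini-Hochberg: reject based on p_i / w_i.

        Illustrative sketch only; not LASLA's specific weighting scheme.
        Weights are assumed non-negative and are renormalized to average one.
        """
        m = len(pvals)
        w = np.asarray(weights, dtype=float)
        w = w / w.mean()                                            # weights average to 1
        q = np.asarray(pvals, dtype=float) / np.maximum(w, 1e-12)   # weighted p-values
        order = np.argsort(q)
        thresh = alpha * np.arange(1, m + 1) / m                    # BH step-up thresholds
        below = q[order] <= thresh
        k = int(np.max(np.nonzero(below)[0]) + 1) if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True                                    # reject the k smallest weighted p-values
        return reject

    # Hypothetical usage: larger weights where the network suggests likely signal.
    # hits = weighted_bh(pvals, weights, alpha=0.05)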
Related papers
- Representation-Enhanced Neural Knowledge Integration with Application to Large-Scale Medical Ontology Learning [3.010503480024405]
We propose a theoretically guaranteed statistical framework, called RENKI, to enable simultaneous learning of relation types.
The proposed framework incorporates representation learning output into initial entity embedding of a neural network that approximates the score function for the knowledge graph.
We demonstrate the effect of weighting in the presence of heterogeneous relations and the benefit of incorporating representation learning in nonparametric models.
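One plausible, hypothetical realization of the stated idea is sketched below in Python/PyTorch: entity embeddings are initialized from representation-learning output and a small network is trained to approximate a triple score function. The class name TripleScorer and all layer sizes are assumptions for illustration, not the RENKI architecture.

    import torch
    import torch.nn as nn

    class TripleScorer(nn.Module):
        """Score (head, relation, tail) triples with an MLP.

        Sketch only: entity embeddings are initialized from pretrained
        representation-learning vectors and fine-tuned; the MLP plays the
        role of the learned score function for the knowledge graph.
        """
        def __init__(self, pretrained_entities, n_relations, hidden=64):
            super().__init__()
            d = pretrained_entities.size(1)
            self.entities = nn.Embedding.from_pretrained(pretrained_entities, freeze=False)
            self.relations = nn.Embedding(n_relations, d)
            self.mlp = nn.Sequential(nn.Linear(3 * d, hidden), nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, head, rel, tail):
            x = torch.cat([self.entities(head), self.relations(rel), self.entities(tail)], dim=-1)
            return self.mlp(x).squeeze(-1)   # higher score = more plausible triple

    # Hypothetical usage:
    # pre = torch.randn(1000, 128)                      # pretrained entity vectors
    # model = TripleScorer(pre, n_relations=20)
    # scores = model(torch.tensor([1, 2]), torch.tensor([0, 3]), torch.tensor([5, 7]))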
arXiv Detail & Related papers (2024-10-09T21:38:48Z)
- Physics Inspired Hybrid Attention for SAR Target Recognition [61.01086031364307]
We propose a physics inspired hybrid attention (PIHA) mechanism and the once-for-all (OFA) evaluation protocol to address the issues.
PIHA leverages the high-level semantics of physical information to activate and guide feature groups that are aware of the local semantics of the target.
Our method outperforms other state-of-the-art approaches in 12 test scenarios with the same ASC parameters.
arXiv Detail & Related papers (2023-09-27T14:39:41Z)
- Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets alongside the target dataset, the UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z)
- Consensus Knowledge Graph Learning via Multi-view Sparse Low Rank Block Model [8.374332740392978]
We propose a unified multi-view sparse low-rank block model (msLBM) framework, which enables simultaneous grouping and connectivity analysis.
Our results demonstrate that a consensus knowledge graph can be more accurately learned by leveraging multi-source datasets.
arXiv Detail & Related papers (2022-09-28T01:19:38Z)
- Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm, where the suboptimality of the learned policies is comparable to the rate attained as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z)
- coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
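A minimal sketch of this idea, treating the sample covariance matrix as a graph shift operator and applying a polynomial graph filter followed by a pointwise nonlinearity, is given below; the filter taps and the tanh nonlinearity are illustrative assumptions rather than the authors' full VNN design.

    import numpy as np

    def covariance_filter_layer(X, taps):
        """One coVariance filter layer: z = sum_k taps[k] * (C^k x), per sample.

        Sketch only: the sample covariance matrix C acts as the graph shift
        operator; `taps` are hypothetical filter coefficients (h_0, ..., h_K).
        X is an (n_samples, n_features) data matrix.
        """
        C = np.cov(X, rowvar=False)          # (d, d) sample covariance as the "graph"
        Z = np.zeros_like(X, dtype=float)
        shifted = X.astype(float)            # C^0 x = x
        for h_k in taps:
            Z += h_k * shifted
            shifted = shifted @ C            # advance to C^(k+1) x (C is symmetric)
        return np.tanh(Z)                    # pointwise nonlinearity completes the layer

    # Hypothetical usage:
    # X = np.random.randn(200, 16)
    # out = covariance_filter_layer(X, taps=[1.0, 0.5, 0.25])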
arXiv Detail & Related papers (2022-05-31T15:04:43Z)
- Do Deep Neural Networks Always Perform Better When Eating More Data? [82.6459747000664]
We design experiments under independent and identically distributed (IID) and out-of-distribution (OOD) settings.
Under the IID condition, the amount of information determines the effectiveness of each sample, while the contribution of samples and the difference between classes determine the amount of class information.
Under the OOD condition, the cross-domain degree of samples determines their contributions, and the bias-fitting caused by irrelevant elements is a significant factor in cross-domain performance.
arXiv Detail & Related papers (2022-05-30T15:40:33Z)
- CRNNTL: convolutional recurrent neural network and transfer learning for QSAR modelling [4.090810719630087]
We propose the convolutional recurrent neural network and transfer learning (CRNNTL) for QSAR modelling.
Our strategy takes advantage of both convolutional and recurrent neural networks for feature extraction, as well as the data augmentation method.
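The hypothetical sketch below shows one way such a convolutional-plus-recurrent feature extractor could be wired for tokenized sequence input (e.g. SMILES strings); all layer sizes are assumptions, and the transfer-learning and data-augmentation stages mentioned above are omitted.

    import torch
    import torch.nn as nn

    class CRNNEncoder(nn.Module):
        """1-D CNN followed by a GRU for sequence feature extraction (sketch).

        Layer widths and the single regression head (one QSAR endpoint) are
        illustrative assumptions, not the CRNNTL configuration.
        """
        def __init__(self, vocab_size=64, emb=32, channels=64, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb)
            self.conv = nn.Conv1d(emb, channels, kernel_size=5, padding=2)
            self.gru = nn.GRU(channels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, tokens):                         # tokens: (batch, seq_len) ids
            x = self.embed(tokens).transpose(1, 2)         # (batch, emb, seq_len)
            x = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, seq_len, channels)
            _, h = self.gru(x)                             # h: (1, batch, hidden)
            return self.head(h[-1]).squeeze(-1)            # one predicted endpoint per molecule

    # Hypothetical usage: model = CRNNEncoder(); y = model(torch.randint(0, 64, (8, 120)))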
arXiv Detail & Related papers (2021-09-07T20:04:55Z)
- Prequential MDL for Causal Structure Learning with Neural Networks [9.669269791955012]
We show that the prequential minimum description length principle can be used to derive a practical scoring function for Bayesian networks.
We obtain plausible and parsimonious graph structures without relying on sparsity inducing priors or other regularizers which must be tuned.
We discuss how the prequential score relates to recent work that infers causal structure from the speed of adaptation when the observations come from a source undergoing distributional shift.
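A minimal sketch of the prequential scoring idea is shown below, assuming a linear-Gaussian model refit on growing data prefixes rather than the paper's neural networks; the accumulated next-step negative log-likelihood serves as a plug-in description length for a candidate parent set.

    import numpy as np

    def prequential_score(X, child, parents, warmup=10):
        """Prequential log-loss for predicting X[:, child] from X[:, parents].

        Sketch only: fit on the first t rows, score row t, accumulate.
        Linear-Gaussian plug-in model; lower scores favour the parent set.
        """
        y, n = X[:, child], X.shape[0]
        total = 0.0
        for t in range(warmup, n):
            A = np.c_[np.ones(t), X[:t, parents]]           # intercept + parent columns
            beta, *_ = np.linalg.lstsq(A, y[:t], rcond=None)
            resid = y[:t] - A @ beta
            sigma2 = max(float(resid.var()), 1e-8)
            pred = np.r_[1.0, X[t, parents]] @ beta
            # code length (nats) of the next observation under the fitted model
            total += 0.5 * (np.log(2 * np.pi * sigma2) + (y[t] - pred) ** 2 / sigma2)
        return total

    # Hypothetical usage: compare prequential_score(X, child=3, parents=[0, 1])
    # against alternative parent sets to rank candidate graph structures.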
arXiv Detail & Related papers (2021-07-02T22:35:21Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Deep Representational Similarity Learning for analyzing neural signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.