Transfer learning for tensor Gaussian graphical models
- URL: http://arxiv.org/abs/2211.09391v1
- Date: Thu, 17 Nov 2022 07:53:07 GMT
- Title: Transfer learning for tensor Gaussian graphical models
- Authors: Mingyang Ren, Yaoming Zhen and Junhui Wang
- Abstract summary: We propose a transfer learning framework for tensor GGMs, which takes full advantage of informative auxiliary domains.
Our theoretical analysis shows substantial improvements in estimation error and variable selection consistency.
- Score: 0.6767885381740952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tensor Gaussian graphical models (GGMs), which characterize
conditional independence structures within tensor data, have important
applications in numerous areas. Yet the tensor data available in a single
study are often limited due to high acquisition costs. Although related
studies can provide additional data, it remains an open question how to pool
such heterogeneous data. In this paper, we propose a transfer learning
framework for tensor GGMs that takes full advantage of informative auxiliary
domains, even when non-informative auxiliary domains are present, by means of
carefully designed data-adaptive weights. Our theoretical analysis shows that
leveraging information from auxiliary domains yields substantial improvements
in estimation error and variable selection consistency on the target domain
under substantially relaxed conditions. Extensive numerical experiments on
both synthetic tensor graphs and brain functional connectivity network data
demonstrate the satisfactory performance of the proposed method.
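To make the setting concrete, below is a minimal sketch of two ingredients the abstract alludes to: estimating a sparse mode-wise precision matrix of a tensor GGM by applying the graphical lasso to a mode-k matricized covariance, and pooling auxiliary-domain estimates with similarity-based weights. This is an illustrative sketch only, not the paper's estimator: the dimensions, the regularization level alpha, the temperature tau, and the helper mode_precision are all hypothetical choices.

    # Minimal sketch (not the paper's algorithm): (1) mode-wise sparse
    # precision estimation for a tensor GGM, and (2) a toy data-adaptive
    # weighting that pools auxiliary-domain estimates.
    import numpy as np
    from sklearn.covariance import graphical_lasso

    rng = np.random.default_rng(0)
    dims = (5, 6, 4)                  # hypothetical tensor dimensions p1 x p2 x p3
    X_target = rng.standard_normal((50,) + dims)   # small target-domain sample
    X_aux = [rng.standard_normal((200,) + dims) for _ in range(3)]  # auxiliary domains

    def mode_precision(X, k, alpha=0.1):
        """Sparse precision along mode k from samples X of shape (n, p1, ..., pK)."""
        n, p_k = X.shape[0], X.shape[k + 1]
        S = np.zeros((p_k, p_k))
        for i in range(n):
            Xi = np.moveaxis(X[i], k, 0).reshape(p_k, -1)  # mode-k matricization
            S += Xi @ Xi.T
        S /= n * (np.prod(X.shape[1:]) // p_k)  # average over samples and off-modes
        _, Omega = graphical_lasso(S, alpha=alpha)         # sparsified precision
        return Omega

    k = 0
    Omega_t = mode_precision(X_target, k)                  # noisy target estimate
    Omega_aux = [mode_precision(Xa, k) for Xa in X_aux]

    # Toy data-adaptive weights: auxiliary domains whose estimates sit far from
    # the target estimate (entrywise L1 distance) are down-weighted; tau is a
    # made-up temperature, not a quantity from the paper.
    tau = 1.0
    dists = np.array([np.abs(Om - Omega_t).sum() for Om in Omega_aux])
    w = np.exp(-dists / tau)
    w /= w.sum()
    Omega_pooled = sum(wi * Om for wi, Om in zip(w, Omega_aux))

Under a tensor-normal model, vec(X) ~ N(0, Sigma_K x ... x Sigma_1) with Kronecker-structured covariance, and the zero pattern of each mode-wise precision Omega_k = Sigma_k^{-1} encodes the conditional independence graph along mode k. The paper's data-adaptive weights are designed to remain robust to non-informative auxiliary domains, which the naive exponential weighting above only loosely imitates.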
Related papers
- Localized Gaussians as Self-Attention Weights for Point Clouds Correspondence [92.07601770031236]
We investigate semantically meaningful patterns in the attention heads of an encoder-only Transformer architecture.
We find that fixing the attention weights not only accelerates the training process but also enhances the stability of the optimization.
arXiv Detail & Related papers (2024-09-20T07:41:47Z)
- Unifying Invariant and Variant Features for Graph Out-of-Distribution via Probability of Necessity and Sufficiency [18.564387153282293]
We propose exploiting the Probability of Necessity and Sufficiency (PNS) to extract sufficient and necessary invariant substructures.
We also leverage the domain-variant subgraphs related to the labels to boost the generalization performance in an ensemble manner.
Experimental results demonstrate that our SNIGL model outperforms state-of-the-art techniques on six public benchmarks.
arXiv Detail & Related papers (2024-07-21T21:35:01Z)
- Generative Expansion of Small Datasets: An Expansive Graph Approach [13.053285552524052]
We introduce an Expansive Synthesis model that generates large-scale, information-rich datasets from minimal samples.
An autoencoder with self-attention layers and optimal transport refines distributional consistency.
Results show comparable performance, demonstrating the model's potential to augment training data effectively.
arXiv Detail & Related papers (2024-06-25T02:59:02Z)
- Provable Tensor Completion with Graph Information [49.08648842312456]
We introduce a novel model, theory, and algorithm for solving the dynamic graph regularized tensor completion problem.
We develop a comprehensive model that simultaneously captures the low-rank and similarity structure of the tensor.
In terms of theory, we showcase the alignment between the proposed graph smoothness regularization and a weighted tensor nuclear norm.
arXiv Detail & Related papers (2023-10-04T02:55:10Z)
- Under-Counted Tensor Completion with Neural Incorporation of Attributes [18.21165063142917]
Under-counted tensor completion (UC-TC) is well-motivated for many data analytics tasks.
A low-rank Poisson tensor model with an expressive unknown nonlinear side information extractor is proposed for under-counted multi-aspect data.
A joint low-rank tensor completion and neural network learning algorithm is designed to recover the model.
To the best of our knowledge, the result is the first to offer theoretical support for under-counted multi-aspect data completion.
arXiv Detail & Related papers (2023-06-05T21:45:23Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve the regularization of models and thus increase performance.
In particular, we show that our generic, domain-independent approach yields state-of-the-art results in vision, natural language processing, and on time series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Equivariance Allows Handling Multiple Nuisance Variables When Analyzing Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how bringing recent results on equivariant representation learning instantiated on structured spaces together with simple use of classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z)
- Towards Open-World Feature Extrapolation: An Inductive Graph Learning Approach [80.8446673089281]
We propose a new learning paradigm with graph representation and learning.
Our framework contains two modules: 1) a backbone network (e.g., feedforward neural nets) as a lower model that takes features as input and outputs predicted labels; 2) a graph neural network as an upper model that learns to extrapolate embeddings for new features via message passing over a feature-data graph built from observed data.
arXiv Detail & Related papers (2021-10-09T09:02:45Z)
- Efficient Multidimensional Functional Data Analysis Using Marginal Product Basis Systems [2.4554686192257424]
We propose a framework for learning continuous representations from a sample of multidimensional functional data.
We show that the resulting estimation problem can be solved efficiently by tensor decomposition.
We conclude with a real data application in neuroimaging.
arXiv Detail & Related papers (2021-07-30T16:02:15Z)
- Efficient Robustness Certificates for Discrete Data: Sparsity-Aware Randomized Smoothing for Graphs, Images and More [85.52940587312256]
We propose a model-agnostic certificate based on the randomized smoothing framework which subsumes earlier work and is tight, efficient, and sparsity-aware.
We show the effectiveness of our approach on a wide variety of models, datasets, and tasks, specifically highlighting its use for Graph Neural Networks.
arXiv Detail & Related papers (2020-08-29T10:09:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.