Precision Neural Networks: Joint Graph And Relational Learning
- URL: http://arxiv.org/abs/2509.14821v1
- Date: Thu, 18 Sep 2025 10:22:05 GMT
- Title: Precision Neural Networks: Joint Graph And Relational Learning
- Authors: Andrea Cavallo, Samuel Rey, Antonio G. Marques, Elvin Isufi
- Abstract summary: CoVariance Neural Networks (VNNs) perform convolutions on the graph determined by the covariance matrix of the data. We study Precision Neural Networks (PNNs), i.e., VNNs on the precision matrix -- the inverse covariance. We formulate an optimization problem that jointly learns the network parameters and the precision matrix, and solve it via alternating optimization.
- Score: 36.05842226689587
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: CoVariance Neural Networks (VNNs) perform convolutions on the graph determined by the covariance matrix of the data, which enables expressive and stable covariance-based learning. However, covariance matrices are typically dense, fail to encode conditional independence, and are often precomputed in a task-agnostic way, which may hinder performance. To overcome these limitations, we study Precision Neural Networks (PNNs), i.e., VNNs on the precision matrix -- the inverse covariance. The precision matrix naturally encodes statistical independence, often exhibits sparsity, and preserves the covariance spectral structure. To make precision estimation task-aware, we formulate an optimization problem that jointly learns the network parameters and the precision matrix, and solve it via alternating optimization, by sequentially updating the network weights and the precision estimate. We theoretically bound the distance between the estimated and true precision matrices at each iteration, and demonstrate the effectiveness of joint estimation compared to two-step approaches on synthetic and real-world data.
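The alternating scheme is simple enough to sketch. Below is a minimal, hypothetical PyTorch version: a one-layer polynomial filter that uses the precision matrix as its graph shift operator, a squared-error task loss, and an $\ell_1$ off-diagonal penalty standing in for the sparsity-aware precision update. Names such as `PNNLayer` and `alternating_fit`, the penalty, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch

class PNNLayer(torch.nn.Module):
    """Polynomial graph filter x -> sum_k h_k P^k x, with the precision
    matrix P playing the role of the graph shift operator."""
    def __init__(self, order: int = 3):
        super().__init__()
        self.h = torch.nn.Parameter(0.1 * torch.randn(order + 1))

    def forward(self, P: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n) graph signals; P: (n, n) precision estimate
        out, z = self.h[0] * x, x
        for k in range(1, len(self.h)):
            z = z @ P                    # one more application of the shift
            out = out + self.h[k] * z
        return out

def alternating_fit(X, y, outer=5, inner=100, lam=0.05, lr=1e-2):
    """Alternate between (i) gradient steps on the network weights with the
    precision frozen and (ii) gradient steps on the precision estimate with
    the weights frozen, plus an l1 off-diagonal sparsity penalty."""
    n = X.shape[1]
    S = X.T @ X / X.shape[0]                       # sample covariance (zero-mean data assumed)
    P = torch.linalg.inv(S + 0.1 * torch.eye(n))   # warm-start precision
    P = P.detach().requires_grad_(True)
    net, head = PNNLayer(), torch.nn.Linear(n, 1)
    for _ in range(outer):
        # (i) network update; optimizer state is reset each round for simplicity
        opt_w = torch.optim.Adam([*net.parameters(), *head.parameters()], lr=lr)
        for _ in range(inner):
            opt_w.zero_grad()
            loss = torch.nn.functional.mse_loss(
                head(net(P.detach(), X)).squeeze(-1), y)
            loss.backward()
            opt_w.step()
        # (ii) precision update against task loss + sparsity penalty
        opt_p = torch.optim.Adam([P], lr=lr)
        for _ in range(inner):
            opt_p.zero_grad()
            task = torch.nn.functional.mse_loss(
                head(net(P, X)).squeeze(-1), y)
            sparsity = lam * (P - torch.diag(torch.diag(P))).abs().sum()
            (task + sparsity).backward()
            opt_p.step()
        with torch.no_grad():
            P.copy_(0.5 * (P + P.T))               # keep the shift operator symmetric
    return net, head, P.detach()

# toy usage
torch.manual_seed(0)
X = torch.randn(256, 8)
y = X.sum(dim=1)
net, head, P_hat = alternating_fit(X, y)
```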
Related papers
- Covariance Density Neural Networks [5.4141465747474475]
Graph neural networks have re-defined how we model and predict on network data. There is no consensus on choosing the correct underlying graph structure on which to model signals. We show that our model can achieve strong performance in subject-independent Brain Computer Interface EEG motor imagery classification.
arXiv Detail & Related papers (2025-05-16T11:38:13Z)
- Neural Conformal Control for Time Series Forecasting [54.96087475179419]
We introduce a neural network conformal prediction method for time series that enhances adaptivity in non-stationary environments. Our approach acts as a neural controller designed to achieve desired target coverage, leveraging auxiliary multi-view data with neural network encoders. We empirically demonstrate significant improvements in coverage and probabilistic accuracy, and find that our method is the only one that combines good calibration with consistency in prediction intervals.
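The "neural controller" framing can be read against classical adaptive conformal inference, where the working miscoverage level is steered by a feedback update. The sketch below shows that baseline only, not the paper's neural controller; all names and constants are illustrative:

```python
import numpy as np

def aci_intervals(preds, y, alpha=0.1, gamma=0.05, window=50):
    """preds, y: 1-D arrays of point forecasts and realised values."""
    alpha_t, residuals, intervals = alpha, [], []
    for f, obs in zip(preds, y):
        # empirical quantile of recent residuals at the working level
        q = np.quantile(residuals[-window:], 1 - alpha_t) if residuals else 0.0
        lo, hi = f - q, f + q
        intervals.append((lo, hi))
        err = 0.0 if lo <= obs <= hi else 1.0
        # feedback update: widen after a miss, tighten after a cover
        alpha_t = float(np.clip(alpha_t + gamma * (alpha - err), 0.001, 0.999))
        residuals.append(abs(obs - f))
    return intervals
```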
arXiv Detail & Related papers (2024-12-24T03:56:25Z)
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- Efficient learning of differential network in multi-source non-paranormal graphical models [2.5905193932831585]
This paper addresses learning sparse structural changes, or the differential network, between two classes of non-paranormal graphical models. Our strategy of combining datasets from multiple sources is shown to be very effective for inferring the differential network in real-world problems.
arXiv Detail & Related papers (2024-10-03T13:59:38Z)
- Sparse Covariance Neural Networks [15.616852692528594]
We show that Sparse coVariance Neural Networks (S-VNNs) are more stable than nominal VNNs. We demonstrate improved task performance and computational efficiency of S-VNNs compared with nominal VNNs.
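The summary does not state the sparsification rule; a common baseline (an assumption here, used only for illustration) is hard-thresholding small off-diagonal covariance entries before using the matrix as a graph shift operator:

```python
import numpy as np

def sparse_covariance(X, tau=0.1):
    """X: (samples, features). Zero out off-diagonal entries below tau."""
    C = np.cov(X, rowvar=False)
    mask = np.abs(C) >= tau
    np.fill_diagonal(mask, True)          # always keep the variances
    return C * mask

X = np.random.default_rng(0).normal(size=(200, 10))
C_sparse = sparse_covariance(X, tau=0.15)
```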
arXiv Detail & Related papers (2024-10-02T15:37:12Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate neural tangent kernels (NTKs) and alignment in the context of graph neural networks (GNNs). Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN. These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
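As a concrete reading of the last sentence, the graph shift operator would be built from the sample cross-covariance of input and output signals. A minimal sketch (the symmetrisation and equal input/output dimension are assumptions of the sketch, not the paper's construction):

```python
import numpy as np

def cross_covariance_shift(X, Y):
    """X: (samples, n), Y: (samples, n); returns a symmetric (n, n) GSO."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxy = Xc.T @ Yc / (len(X) - 1)        # sample cross-covariance
    return 0.5 * (Cxy + Cxy.T)            # symmetrise so it acts like a shift
```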
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- Robust Online Covariance and Sparse Precision Estimation Under Arbitrary Data Corruption [1.5850859526672516]
We introduce a modified trimmed-inner-product algorithm to robustly estimate the covariance in an online scenario.
We provide error bounds and establish convergence of the estimates to the true precision matrix under our algorithms.
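The trimmed-inner-product idea admits a simple illustration: when estimating an entry of the covariance, discard the largest-magnitude products so that a few corrupted samples cannot dominate the sum. The trimming rule and fraction below are assumptions for the sketch, not the paper's exact algorithm:

```python
import numpy as np

def trimmed_inner_product(u, v, trim_frac=0.1):
    """Sum of elementwise products with the k largest magnitudes discarded."""
    prods = u * v
    k = int(len(prods) * trim_frac)
    if k == 0:
        return prods.sum()
    keep = np.argsort(np.abs(prods))[:-k]  # drop the k largest-magnitude terms
    return prods[keep].sum()
```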
arXiv Detail & Related papers (2023-09-16T05:37:28Z)
- coVariance Neural Networks [119.45320143101381]
Graph neural networks (GNNs) are an effective framework that exploits inter-relationships within graph-structured data for learning.
We propose a GNN architecture, called coVariance neural network (VNN), that operates on sample covariance matrices as graphs.
We show that VNN performance is indeed more stable than PCA-based statistical approaches.
arXiv Detail & Related papers (2022-05-31T15:04:43Z)
- Meta Learning Low Rank Covariance Factors for Energy-Based Deterministic Uncertainty [58.144520501201995]
Bi-Lipschitz regularization of neural network layers preserves relative distances between data instances in the feature spaces of each layer.
With the use of an attentive set encoder, we propose to meta learn either diagonal or diagonal plus low-rank factors to efficiently construct task specific covariance matrices.
We also propose an inference procedure which utilizes scaled energy to achieve a final predictive distribution.
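A diagonal-plus-low-rank covariance $\Sigma = \mathrm{diag}(d) + U U^\top$ is cheap to store and positive definite whenever $d > 0$, which is what makes it attractive for per-task construction. A minimal sketch (the attentive set encoder that meta-learns $d$ and $U$ is omitted; names are illustrative):

```python
import torch

def diag_plus_lowrank(d_raw, U):
    """d_raw: (n,) unconstrained; U: (n, r) low-rank factor."""
    d = torch.nn.functional.softplus(d_raw) + 1e-6   # strictly positive diagonal
    return torch.diag(d) + U @ U.T                   # PSD by construction

Sigma = diag_plus_lowrank(torch.randn(16), torch.randn(16, 3))
```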
arXiv Detail & Related papers (2021-10-12T22:04:19Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
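Based on the summary alone, the objective plausibly takes the following form (an assumption; the paper's exact constraints and weighting may differ): reconstruction error plus a row-sparsity penalty on the projection matrix $W$.

```latex
% Hedged reading of the objective; constraints and notation are assumed.
\min_{W \,:\, W^\top W = I}\;
  \|X - X W W^\top\|_F^2 \;+\; \lambda \|W\|_{2,p}^p,
\qquad
\|W\|_{2,p}^p \;=\; \sum_{i=1}^{d} \Big( \sum_{j=1}^{k} w_{ij}^2 \Big)^{p/2}.
```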
arXiv Detail & Related papers (2020-12-29T04:08:38Z)