Hypergraph reconstruction from network data
- URL: http://arxiv.org/abs/2008.04948v4
- Date: Fri, 14 Jan 2022 00:55:53 GMT
- Title: Hypergraph reconstruction from network data
- Authors: Jean-Gabriel Young, Giovanni Petri, Tiago P. Peixoto
- Abstract summary: We introduce a Bayesian approach to reconstruct latent higher-order interactions from ordinary pairwise network data.
Our method is based on the principle of parsimony and only includes higher-order structures when there is sufficient statistical evidence for them.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Networks can describe the structure of a wide variety of complex systems by
specifying which pairs of entities in the system are connected. While such
pairwise representations are flexible, they are not necessarily appropriate
when the fundamental interactions involve more than two entities at the same
time. Pairwise representations nonetheless remain ubiquitous, because
higher-order interactions are often not recorded explicitly in network data.
Here, we introduce a Bayesian approach to reconstruct latent higher-order
interactions from ordinary pairwise network data. Our method is based on the
principle of parsimony and only includes higher-order structures when there is
sufficient statistical evidence for them. We demonstrate its applicability to a
wide range of datasets, both synthetic and empirical.
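To make the parsimony principle concrete, below is a minimal Python sketch (an illustration under assumptions, not the paper's actual Bayesian inference procedure, which weighs posterior evidence over candidate hypergraphs): it treats the maximal cliques of the observed graph as candidate hyperedges and greedily selects a small set of them whose pairwise projection reproduces every observed edge. The function names and the greedy heuristic are hypothetical, and networkx is assumed to be available.

# Minimal illustrative sketch of parsimony-driven hypergraph reconstruction.
# NOT the authors' algorithm: the paper infers hypergraphs from a Bayesian
# posterior, whereas this toy version only shows the underlying idea that a
# few large hyperedges can "explain" many observed pairwise edges at once.
from itertools import combinations

import networkx as nx


def projected_edges(hyperedge):
    """All pairwise edges implied by a single hyperedge."""
    return {frozenset(pair) for pair in combinations(sorted(hyperedge), 2)}


def greedy_reconstruction(G):
    """Greedily cover the edges of G with maximal cliques (used here as
    stand-ins for candidate hyperedges); the union of the projections of
    the returned hyperedges equals the edge set of G."""
    uncovered = {frozenset(e) for e in G.edges()}
    candidates = [frozenset(c) for c in nx.find_cliques(G)]
    hyperedges = []
    while uncovered:
        # Parsimony heuristic: prefer the candidate that explains the most
        # still-unexplained pairwise edges with one higher-order interaction.
        best = max(candidates, key=lambda c: len(projected_edges(c) & uncovered))
        hyperedges.append(best)
        uncovered -= projected_edges(best)
    return hyperedges


if __name__ == "__main__":
    # Two triangles sharing node 3, plus one extra pairwise edge (5, 6).
    G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4), (3, 5), (4, 5), (5, 6)])
    for h in greedy_reconstruction(G):
        print(sorted(h))

On this toy graph the sketch returns the two triangles {1, 2, 3} and {3, 4, 5} as three-way interactions and leaves (5, 6) as an ordinary pairwise edge; a full Bayesian treatment would additionally quantify how much statistical evidence supports each higher-order candidate.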
Related papers
- Rel-HNN: Split Parallel Hypergraph Neural Network for Learning on Relational Databases [3.6423651166048874]
Flattening the database poses challenges for deep learning models. We propose a novel hypergraph-based framework that we call rel-HNN. We show that rel-HNN significantly outperforms existing methods in both classification and regression tasks.
arXiv Detail & Related papers (2025-07-16T18:20:45Z) - Broad Spectrum Structure Discovery in Large-Scale Higher-Order Networks [1.7273380623090848]
We introduce a class of probabilistic models that efficiently represents and discovers a broad spectrum of mesoscale structure in large-scale hypergraphs. By modeling observed node interactions through latent interactions among classes using low-rank representations, our approach tractably captures rich structural patterns. Our model improves link prediction over state-of-the-art methods and discovers interpretable structures in diverse real-world systems.
arXiv Detail & Related papers (2025-05-27T20:34:58Z) - How Compositional Generalization and Creativity Improve as Diffusion Models are Trained [82.08869888944324]
How many samples do generative models need in order to learn composition rules? What signal in the data is exploited to learn those rules? We discuss connections between the hierarchical clustering mechanism we introduce here and the renormalization group in physics.
arXiv Detail & Related papers (2025-02-17T18:06:33Z) - Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z) - Learnable Pillar-based Re-ranking for Image-Text Retrieval [119.9979224297237]
Image-text retrieval aims to bridge the modality gap and retrieve cross-modal content based on semantic similarities.
Re-ranking, a popular post-processing practice, has revealed the superiority of capturing neighbor relations in single-modality retrieval tasks.
We propose a novel learnable pillar-based re-ranking paradigm for image-text retrieval.
arXiv Detail & Related papers (2023-04-25T04:33:27Z) - Bayesian Detection of Mesoscale Structures in Pathway Data on Graphs [0.0]
Mesoscale structures are an integral part of the abstraction and analysis of complex systems.
They can represent communities in social or citation networks, roles in corporate interactions, or core-periphery structures in transportation networks.
We derive a Bayesian approach that simultaneously models the optimal partitioning of nodes in groups and the optimal higher-order network dynamics.
arXiv Detail & Related papers (2023-01-16T12:45:33Z) - On Neural Architecture Inductive Biases for Relational Tasks [76.18938462270503]
We introduce a simple architecture based on similarity-distribution scores, which we name Compositional Network generalization (CoRelNet).
We find that simple architectural choices can outperform existing models in out-of-distribution generalizations.
arXiv Detail & Related papers (2022-06-09T16:24:01Z) - Learning Dynamics and Structure of Complex Systems Using Graph Neural Networks [13.509027957413409]
We trained graph neural networks to fit time series from an example nonlinear dynamical system.
We found simple interpretations of the learned representation and model components.
We successfully identified a 'graph translator' between the statistical interactions in belief propagation and the parameters of the corresponding trained network.
arXiv Detail & Related papers (2022-02-22T15:58:16Z) - The interplay between ranking and communities in networks [0.0]
We present a generative model based on an interplay between community and hierarchical structures.
It assumes that each node has a preference in the interaction mechanism and nodes with the same preference are more likely to interact.
We demonstrate our method on synthetic and real-world data and compare performance with two standard approaches for community detection and ranking extraction.
arXiv Detail & Related papers (2021-12-23T16:10:28Z) - Layer-stacked Attention for Heterogeneous Network Embedding [0.0]
Layer-stacked ATTention Embedding (LATTE) is an architecture that automatically decomposes higher-order meta relations at each layer.
LATTE offers a more interpretable aggregation scheme for nodes of different types at different neighborhood ranges.
In both transductive and inductive node classification tasks, LATTE can achieve state-of-the-art performance compared to existing approaches.
arXiv Detail & Related papers (2020-09-17T05:13:41Z) - Dual-constrained Deep Semi-Supervised Coupled Factorization Network with Enriched Prior [80.5637175255349]
We propose a new enriched prior based Dual-constrained Deep Semi-Supervised Coupled Factorization Network, called DS2CF-Net.
To extract hidden deep features, DS2CF-Net is modeled as a deep-structure and geometrical structure-constrained neural network.
Our network can obtain state-of-the-art performance for representation learning and clustering.
arXiv Detail & Related papers (2020-09-08T13:10:21Z) - Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z) - Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.