How Much and When Do We Need Higher-order Information in Hypergraphs? A
Case Study on Hyperedge Prediction
- URL: http://arxiv.org/abs/2001.11181v3
- Date: Wed, 13 May 2020 13:17:54 GMT
- Authors: Se-eun Yoon, Hyungseok Song, Kijung Shin, and Yung Yi
- Abstract summary: We propose a method of incrementally representing group interactions using a notion of n-projected graph whose accumulation contains information on up to n-way interactions.
As a downstream task, we consider hyperedge prediction, an extension of link prediction, which is a canonical task for evaluating graph models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hypergraphs provide a natural way of representing group relations, whose
complexity motivates an extensive array of prior work to adopt some form of
abstraction and simplification of higher-order interactions. However, the
following question has yet to be addressed: how much abstraction of group
interactions is sufficient for solving a hypergraph task, and how much do the
results differ across datasets? This question, if properly answered, provides a
useful engineering guideline on how to trade off between complexity and
accuracy of solving a downstream task. To this end, we propose a method of
incrementally representing group interactions using a notion of n-projected
graph whose accumulation contains information on up to n-way interactions, and
quantify the accuracy of solving a task as n grows for various datasets. As a
downstream task, we consider hyperedge prediction, an extension of link
prediction, which is a canonical task for evaluating graph models. Through
experiments on 15 real-world datasets, we draw the following messages: (a)
Diminishing returns: small n is enough to achieve accuracy comparable with
near-perfect approximations, (b) Troubleshooter: as the task becomes more
challenging, larger n brings more benefit, and (c) Irreducibility: datasets
whose pairwise interactions do not tell much about higher-order interactions
lose much accuracy when reduced to pairwise abstractions.
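To make the notion of an n-projected graph concrete, here is a minimal Python sketch under an illustrative assumption (the paper's exact construction may differ): each hyperedge is decomposed into its size-n sub-interactions, and their occurrence counts accumulate into a weighted, graph-like structure. The function name and the toy hyperedges are hypothetical.

```python
from itertools import combinations
from collections import Counter

def n_projection(hyperedges, n):
    """Illustrative sketch: decompose each hyperedge into its size-n
    sub-interactions and count how often each occurs. This is one
    plausible reading of an n-projected graph, not the paper's exact
    definition."""
    counts = Counter()
    for he in hyperedges:
        if len(he) >= n:
            for sub in combinations(sorted(he), n):
                counts[sub] += 1
    return counts

hyperedges = [{1, 2, 3}, {2, 3, 4, 5}, {1, 2}]
pairs = n_projection(hyperedges, 2)    # pairwise (2-way) abstraction
triples = n_projection(hyperedges, 3)  # adds 3-way information on top
```

Growing n enriches the representation (e.g. `triples` distinguishes group contexts that `pairs` conflates), which is exactly the complexity/accuracy trade-off the paper quantifies.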
Related papers
- SPHINX: Structural Prediction using Hypergraph Inference Network
We introduce Structural Prediction using Hypergraph Inference Network (SPHINX), a model that learns to infer a latent hypergraph structure in an unsupervised way.
We show that the recent advancement in k-subset sampling represents a suitable tool for producing discrete hypergraph structures.
The resulting model can generate the higher-order structure necessary for any modern hypergraph neural network.
arXiv Detail & Related papers (2024-10-04T07:49:57Z)
- Interaction Event Forecasting in Multi-Relational Recursive HyperGraphs: A Temporal Point Process Approach
This work addresses the problem of forecasting higher-order interaction events in multi-relational recursive hypergraphs.
The proposed model, Relational Recursive Hyperedge Temporal Point Process (RRHyperTPP), uses an encoder that learns dynamic node representations from historical interaction patterns.
We have experimentally shown that our models perform better than previous state-of-the-art methods for interaction forecasting.
arXiv Detail & Related papers (2024-04-27T15:46:54Z)
- Harnessing the Power of Large Language Model for Uncertainty Aware Graph Processing
We introduce a novel approach that harnesses the power of a large language model (LLM) to provide a confidence score on the generated answer.
We experiment with our approach on two graph processing tasks: few-shot knowledge graph completion and graph classification.
Our confidence measure achieves an AUC of 0.8 or higher on seven out of the ten datasets in predicting the correctness of the answer generated by LLM.
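The reported AUC measures how well the confidence score ranks correct answers above incorrect ones. A minimal, dependency-free sketch of that rank-based computation (the data below is illustrative, not from the paper):

```python
def auc(scores, labels):
    """Rank-based AUC: the probability that a correct answer (label 1)
    receives a higher confidence than an incorrect one (label 0);
    ties count as half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

confidence = [0.9, 0.8, 0.4, 0.3, 0.7]  # hypothetical LLM confidences
correct = [1, 1, 0, 0, 1]               # whether each answer was right
score = auc(confidence, correct)
```

Here the confidence perfectly ranks correct above incorrect answers, so the AUC is 1.0; an AUC of 0.8, as in the paper, means the ranking is right for 80% of correct/incorrect pairs.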
arXiv Detail & Related papers (2024-03-31T07:38:39Z)
- Boosting Multitask Learning on Graphs through Higher-Order Task Affinities
Predicting node labels on a given graph is a widely studied problem with many applications, including community detection and molecular graph prediction.
This paper considers predicting multiple node labeling functions on graphs simultaneously and revisits this problem from a multitask learning perspective.
We develop an algorithm to cluster tasks into groups based on a higher-order task affinity measure.
arXiv Detail & Related papers (2023-06-24T15:53:38Z)
- Personalized Decentralized Multi-Task Learning Over Dynamic Communication Graphs
We propose a decentralized and federated learning algorithm for tasks that are positively and negatively correlated.
Our algorithm uses gradients to calculate the correlations among tasks automatically, and dynamically adjusts the communication graph to connect mutually beneficial tasks and isolate those that may negatively impact each other.
We conduct experiments on a synthetic Gaussian dataset and a large-scale celebrity attributes (CelebA) dataset.
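One simple way to turn gradients into a task-correlation signal, sketched here as an illustration (the paper's actual rule may differ), is cosine similarity between task gradients: a positive value suggests the tasks help each other and should be connected in the communication graph, a negative value suggests they should be isolated.

```python
def grad_affinity(g1, g2):
    """Cosine similarity between two task gradient vectors.
    Hypothetical affinity rule for deciding graph connectivity."""
    dot = sum(a * b for a, b in zip(g1, g2))
    n1 = sum(a * a for a in g1) ** 0.5
    n2 = sum(b * b for b in g2) ** 0.5
    return dot / (n1 * n2)

g_task_a = [0.5, 1.0, -0.2]   # toy gradients for three tasks
g_task_b = [0.4, 0.9, -0.1]
g_task_c = [-0.5, -1.0, 0.2]

connect_ab = grad_affinity(g_task_a, g_task_b) > 0  # aligned: connect
connect_ac = grad_affinity(g_task_a, g_task_c) > 0  # opposed: isolate
```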
arXiv Detail & Related papers (2022-12-21T18:58:24Z)
- A Survey of Learning on Small Data: Generalization, Optimization, and Challenge
Learning on small data that approximates the generalization ability of big data is one of the ultimate purposes of AI.
This survey follows the active sampling theory under a PAC framework to analyze the generalization error and label complexity of learning on small data.
Multiple data applications that may benefit from efficient small data representation are surveyed.
arXiv Detail & Related papers (2022-07-29T02:34:19Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
Under a budget of only 50 models, it finds architectures in the top 0.16% and 0.29%, on average, of two search spaces.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- Deconfounded Training for Graph Neural Networks
We present a new paradigm of deconfounded training (DTP) that better mitigates the confounding effect and latches onto the critical information.
Specifically, we adopt the attention modules to disentangle the critical subgraph and trivial subgraph.
It allows GNNs to capture a more reliable subgraph whose relation with the label is robust across different distributions.
arXiv Detail & Related papers (2021-12-30T15:22:35Z)
- When is Memorization of Irrelevant Training Data Necessary for High-Accuracy Learning?
We describe natural prediction problems in which every sufficiently accurate training algorithm must encode, in the prediction model, essentially all the information about a large subset of its training examples.
Our results do not depend on the training algorithm or the class of models used for learning.
arXiv Detail & Related papers (2020-12-11T15:25:14Z)
- Complex Query Answering with Neural Link Predictors
We propose a framework for efficiently answering complex queries on incomplete Knowledge Graphs.
We translate each query into an end-to-end differentiable objective, where the truth value of each atom is computed by a pre-trained neural predictor.
In our experiments, the proposed approach produces more accurate results than state-of-the-art methods.
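A common way to make a query's truth value differentiable in the atoms' scores, sketched here as an illustration of the idea (details are assumptions, not the paper's exact formulation), is to combine the per-atom scores with t-norms: product for AND and its dual t-conorm for OR.

```python
def conjunction(atom_truths):
    """Product t-norm: a differentiable AND over atom truth values
    in [0, 1]. One standard choice; used here for illustration."""
    t = 1.0
    for s in atom_truths:
        t *= s
    return t

def disjunction(atom_truths):
    """Dual t-conorm for OR: 1 - prod(1 - s)."""
    t = 1.0
    for s in atom_truths:
        t *= 1.0 - s
    return 1.0 - t

# Query (A AND B) OR C, with hypothetical scores from a pretrained
# neural link predictor for the atoms A, B, C:
query_score = disjunction([conjunction([0.9, 0.8]), 0.4])
```

Because both combinators are smooth in the atom scores, the whole query becomes an end-to-end differentiable objective, which is the property the summary above refers to.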
arXiv Detail & Related papers (2020-11-06T16:20:49Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.