Reevaluation of Inductive Link Prediction
- URL: http://arxiv.org/abs/2409.20130v1
- Date: Mon, 30 Sep 2024 09:32:10 GMT
- Title: Reevaluation of Inductive Link Prediction
- Authors: Simon Ott, Christian Meilicke, Heiner Stuckenschmidt
- Abstract summary: We show that the evaluation protocol currently used for inductive link prediction is heavily flawed.
Due to the limited size of the set of negatives, a simple rule-based baseline can achieve state-of-the-art results.
- Score: 9.955225436683959
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Within this paper, we show that the evaluation protocol currently used for inductive link prediction is heavily flawed, as it relies on ranking the true entity within a small set of randomly sampled negative entities. Due to the limited size of the set of negatives, a simple rule-based baseline, which merely ranks entities higher when their type is valid, can achieve state-of-the-art results. As a consequence of these insights, we reevaluate current approaches for inductive link prediction on several benchmarks using the link prediction protocol usually applied to the transductive setting. As some inductive methods suffer from scalability issues when evaluated in this setting, we additionally propose and apply an improved sampling protocol that does not suffer from the problem mentioned above. The results of our evaluation differ drastically from the results reported so far.
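The gap the abstract describes can be illustrated with a small simulation. The sketch below is not the paper's protocol or baseline; it uses a hypothetical scoring model on synthetic data to show how ranking the true entity against only 50 sampled negatives inflates MRR compared to ranking against all entities.

```python
import numpy as np

def rank_of_true_entity(scores: np.ndarray, true_idx: int) -> int:
    """1-based rank of the true entity among the candidate scores."""
    return int(np.sum(scores > scores[true_idx])) + 1

def mrr(ranks) -> float:
    """Mean reciprocal rank over a list of ranks."""
    return float(np.mean([1.0 / r for r in ranks]))

rng = np.random.default_rng(0)
n_entities = 10_000
n_queries = 200
n_sampled_negatives = 50  # small negative set, as in the criticized protocol

sampled_ranks, full_ranks = [], []
for _ in range(n_queries):
    # Hypothetical model scores: the true entity gets a modest boost, so it
    # is rarely best among ALL entities but often best among 50 negatives.
    scores = rng.normal(size=n_entities)
    true_idx = 0
    scores[true_idx] += 1.5

    # Protocol A: rank against 50 randomly sampled negatives.
    neg = rng.choice(np.arange(1, n_entities),
                     size=n_sampled_negatives, replace=False)
    cand = np.concatenate(([true_idx], neg))
    sampled_ranks.append(rank_of_true_entity(scores[cand], 0))

    # Protocol B: rank against all entities, as in the transductive setting.
    full_ranks.append(rank_of_true_entity(scores, true_idx))

print(f"MRR with {n_sampled_negatives} sampled negatives: {mrr(sampled_ranks):.3f}")
print(f"MRR with full ranking: {mrr(full_ranks):.3f}")
```

The same mediocre scorer looks far stronger under Protocol A, which is why a crude type-validity heuristic can appear state of the art when negatives are few.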
Related papers
- Rethinking the generalization of drug target affinity prediction algorithms via similarity aware evaluation [19.145735532822012]
We show that the canonical randomized split of a test set in conventional evaluation leaves the test set dominated by samples with high similarity to the training set.
We propose a framework of similarity aware evaluation in which a novel split methodology is proposed to adapt to any desired distribution.
Results demonstrate that the proposed split methodology can significantly better fit desired distributions and guide the development of models.
arXiv Detail & Related papers (2025-04-13T08:30:57Z)
- Where is this coming from? Making groundedness count in the evaluation of Document VQA models [12.951716701565019]
We argue that common evaluation metrics do not account for the semantic and multimodal groundedness of a model's outputs.
We propose a new evaluation methodology that accounts for the groundedness of predictions.
Our proposed methodology is parameterized in such a way that users can configure the score according to their preferences.
arXiv Detail & Related papers (2025-03-24T20:14:46Z)
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering [55.15192437680943]
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
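A guarantee of this form can be made concrete with a generic split conformal prediction sketch. This is not the paper's sequential greedy filtering method; it is a standard conformal construction on synthetic regression data, showing the shape of the claim: with probability at least 1 - alpha, the prediction set (here an interval) contains the true value.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_mean(x, y):
    # Hypothetical "model": a simple linear least-squares fit.
    a, b = np.polyfit(x, y, deg=1)
    return lambda z: a * z + b

n_cal, alpha = 500, 0.1
f = lambda x: 2 * x + 1                      # assumed true regression function
noise = lambda n: rng.normal(0, 0.1, n)

x_train, x_cal = rng.uniform(0, 1, 1000), rng.uniform(0, 1, n_cal)
y_train, y_cal = f(x_train) + noise(1000), f(x_cal) + noise(n_cal)
model = fit_mean(x_train, y_train)

# Conformity scores on a held-out calibration set: absolute residuals.
scores = np.abs(y_cal - model(x_cal))
# Finite-sample-corrected quantile gives the coverage guarantee.
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal)

# Prediction set for a new point: an interval around the point prediction.
x_new = 0.5
lo, hi = model(x_new) - q, model(x_new) + q
print(f"90% prediction interval at x=0.5: [{lo:.3f}, {hi:.3f}]")

# Empirical coverage on fresh data should be close to 1 - alpha.
x_test = rng.uniform(0, 1, 2000)
y_test = f(x_test) + noise(2000)
covered = float(np.mean(np.abs(y_test - model(x_test)) <= q))
print(f"empirical coverage: {covered:.3f}")
```

The guarantee is distribution-free: it relies only on exchangeability of calibration and test points, not on the model being correct.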
arXiv Detail & Related papers (2024-10-02T15:26:52Z)
- Estimating Treatment Effects under Recommender Interference: A Structured Neural Networks Approach [13.208141830901845]
We show that the standard difference-in-means estimator can lead to biased estimates due to recommender interference.
We propose a "recommender choice model" that describes which item gets exposed from a pool containing both treated and control items.
We show that the proposed estimator yields results comparable to the benchmark, whereas the standard difference-in-means estimator can exhibit significant bias and even produce reversed signs.
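The bias mechanism can be illustrated in a few lines. The toy simulation below does not implement the paper's recommender choice model; it only assumes a hypothetical interference pattern (treated items crowd out exposure of control items by a fixed spillover of 0.5) and shows how the difference-in-means estimator then overstates the true effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
treated = rng.random(n) < 0.5

tau = 1.0                       # true direct treatment effect per item
base = rng.normal(5.0, 1.0, n)  # baseline engagement

# No interference: each item's outcome depends only on its own treatment.
y_clean = base + tau * treated

# With interference: treated items draw exposure away from control items
# in the same pool, lowering control outcomes (spillover assumed to be 0.5).
spillover = 0.5
y_interf = base + tau * treated - spillover * (~treated)

def diff_in_means(y: np.ndarray, t: np.ndarray) -> float:
    """Standard A/B estimator: mean(treated) - mean(control)."""
    return float(y[t].mean() - y[~t].mean())

print(f"true effect: {tau:.2f}")
print(f"estimate without interference: {diff_in_means(y_clean, treated):.2f}")
print(f"estimate with interference: {diff_in_means(y_interf, treated):.2f}")
```

Under interference the estimator recovers tau + spillover rather than tau, because suppressed control outcomes are misattributed to the treatment.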
arXiv Detail & Related papers (2024-06-20T14:53:26Z)
- AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation [57.8363998797433]
We propose AMRFact, a framework that generates perturbed summaries using Abstract Meaning Representations (AMRs).
Our approach parses factually consistent summaries into AMR graphs and injects controlled factual inconsistencies to create negative examples, allowing for coherent factually inconsistent summaries to be generated with high error-type coverage.
arXiv Detail & Related papers (2023-11-16T02:56:29Z)
- Evaluating Graph Neural Networks for Link Prediction: Current Pitfalls and New Benchmarking [66.83273589348758]
Link prediction attempts to predict whether an unseen edge exists based only on a portion of a graph's edges.
A flurry of methods have been introduced in recent years that attempt to make use of graph neural networks (GNNs) for this task.
New and diverse datasets have also been created to better evaluate the effectiveness of these new models.
arXiv Detail & Related papers (2023-06-18T01:58:59Z)
- Predictive change point detection for heterogeneous data [1.1720726814454114]
"Predict and Compare" is a change point detection framework assisted by a predictive machine learning model.
It outperforms online CPD routines in terms of false positive rate and out-of-control average run length.
The power of the method is demonstrated in a tribological case study.
arXiv Detail & Related papers (2023-05-11T07:59:18Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandits, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
- On Model Identification and Out-of-Sample Prediction of Principal Component Regression: Applications to Synthetic Controls [20.96904429337912]
We analyze principal component regression (PCR) in a high-dimensional error-in-variables setting with fixed design.
We establish non-asymptotic out-of-sample prediction guarantees that improve upon the best known rates.
arXiv Detail & Related papers (2020-10-27T17:07:36Z)
- Benchmarking Network Embedding Models for Link Prediction: Are We Making Progress? [84.43405961569256]
We shed light on the state-of-the-art of network embedding methods for link prediction.
We show, using a consistent evaluation pipeline, that only thin progress has been made over the last years.
We argue that standardized evaluation tools can repair this situation and boost future progress in this field.
arXiv Detail & Related papers (2020-02-25T16:59:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.