Rule Learning for Knowledge Graph Reasoning under Agnostic Distribution Shift
- URL: http://arxiv.org/abs/2507.05110v3
- Date: Thu, 10 Jul 2025 16:55:05 GMT
- Title: Rule Learning for Knowledge Graph Reasoning under Agnostic Distribution Shift
- Authors: Shixuan Liu, Yue He, Yunfei Wang, Hao Zou, Haoxiang Cheng, Wenjing Yang, Peng Cui, Zhong Liu
- Abstract summary: Rule learning suffers from a critical weakness: its dependence on the I.I.D. assumption. StableRule is an end-to-end framework that combines feature decorrelation with a rule learning network.
- Score: 24.842476787693055
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Logical rule learning, a prominent category of knowledge graph (KG) reasoning methods, constitutes a critical research area aimed at learning explicit rules from observed facts to infer missing knowledge. However, like all KG reasoning methods, rule learning suffers from a critical weakness: its dependence on the I.I.D. assumption. This assumption can easily be violated by selection bias during training or by agnostic distribution shifts during testing (e.g., in query shift scenarios), ultimately undermining model performance and reliability. To enable robust KG reasoning in wild environments, this study investigates logical rule learning in the presence of agnostic test-time distribution shifts. We formally define this challenge as out-of-distribution (OOD) KG reasoning, a previously underexplored problem, and propose the Stable Rule Learning (StableRule) framework as a solution. StableRule is an end-to-end framework that combines feature decorrelation with a rule learning network to enhance OOD generalization in KG reasoning. By leveraging feature decorrelation, StableRule mitigates the adverse effects of covariate shifts arising in OOD scenarios, improving the robustness of the rule learning network. Extensive experiments on seven benchmark KGs demonstrate the framework's superior effectiveness and stability across diverse heterogeneous environments, highlighting its practical significance for real-world applications.
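The feature-decorrelation idea described in the abstract can be illustrated with a minimal sketch: learn per-sample weights so that the weighted covariance between feature pairs shrinks toward zero, then use those weights in the training loss so that spuriously correlated features contribute less. The objective and optimizer below are simplifying assumptions for illustration only, not the paper's StableRule implementation.

```python
import numpy as np

def decorrelation_weights(X, n_iter=500, lr=2.0):
    """Learn per-sample weights that shrink pairwise feature covariances.

    Minimizes the squared off-diagonal entries of the weighted covariance
    matrix by gradient descent, parameterizing the weights with a softmax
    so they stay positive and sum to one. Illustrative sketch only.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)          # center once (a simplification)
    z = np.zeros(n)                  # softmax logits, one per sample
    for _ in range(n_iter):
        w = np.exp(z - z.max())
        w /= w.sum()
        C = Xc.T @ (Xc * w[:, None])            # weighted covariance (d x d)
        off = C - np.diag(np.diag(C))           # cross-covariances only
        # gradient of sum(off**2) w.r.t. w: quadratic form per sample
        g = 2.0 * np.einsum('ki,ij,kj->k', Xc, off, Xc)
        z -= lr * w * (g - w @ g)               # chain rule through softmax
    w = np.exp(z - z.max())
    return w / w.sum()

# Toy data: features 0 and 1 are spuriously correlated, feature 2 is noise.
rng = np.random.default_rng(0)
a = rng.normal(size=1000)
X = np.column_stack([a, a + 0.1 * rng.normal(size=1000), rng.normal(size=1000)])
w = decorrelation_weights(X)
# The weighted covariance between features 0 and 1 drops versus uniform weights.
```

The learned weights would then reweight each sample's loss in the downstream rule learning network, which is the general shape of decorrelation-based stable learning.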
Related papers
- Improve Rule Retrieval and Reasoning with Self-Induction and Relevance ReEstimate [11.890202757543307]
This paper systematically addresses the challenges of rule retrieval, a crucial yet underexplored area. Vanilla retrieval methods that use sparse or dense retrievers to directly search for relevant rules often suffer from low accuracy. We propose Self-Induction Augmented Retrieval (SIAR), a novel approach that utilizes Large Language Models (LLMs) to induce potential inferential rules.
arXiv Detail & Related papers (2025-05-16T05:22:42Z) - Rethinking Robustness in Machine Learning: A Posterior Agreement Approach [45.284633306624634]
Posterior Agreement (PA) theory of model validation provides a principled framework for robustness evaluation. We show that the PA metric provides a sensible and consistent analysis of the vulnerabilities in learning algorithms, even in the presence of few observations.
arXiv Detail & Related papers (2025-03-20T16:03:39Z) - Enhancing Robust Fairness via Confusional Spectral Regularization [6.041034366572273]
We derive a robust generalization bound for the worst-class robust error within the PAC-Bayesian framework. We propose a novel regularization technique to improve worst-class robust accuracy and enhance robust fairness.
arXiv Detail & Related papers (2025-01-22T23:32:19Z) - Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs [87.34281749422756]
Large language models (LLMs) have achieved impressive human-like performance across various reasoning tasks.
However, their mastery of underlying inferential rules still falls short of human capabilities.
We propose a logic scaffolding inferential rule generation framework, to construct an inferential rule base, ULogic.
arXiv Detail & Related papers (2024-02-18T03:38:51Z) - Selective Learning: Towards Robust Calibration with Dynamic Regularization [79.92633587914659]
Miscalibration in deep learning refers to a discrepancy between a model's predicted confidence and its actual performance.
We introduce Dynamic Regularization (DReg), which aims to learn what should be learned during training, thereby circumventing the confidence-adjustment trade-off.
arXiv Detail & Related papers (2024-02-13T11:25:20Z) - ChatRule: Mining Logical Rules with Large Language Models for Knowledge Graph Reasoning [107.61997887260056]
We propose a novel framework, ChatRule, unleashing the power of large language models for mining logical rules over knowledge graphs.
Specifically, the framework is initiated with an LLM-based rule generator, leveraging both the semantic and structural information of KGs.
To refine the generated rules, a rule ranking module estimates the rule quality by incorporating facts from existing KGs.
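A rule ranking module of this kind typically scores a rule by how often its body co-occurs with its head among existing facts. Below is a minimal sketch of one standard such measure, rule confidence; the paper's module may well use a different estimator, and the relation names are hypothetical.

```python
def rule_confidence(kg, body, head):
    """Score the rule body(X, Y) => head(X, Y) over a KG of (h, r, t) triples:
    of the entity pairs connected by the body relation, what fraction are
    also connected by the head relation? A standard rule-mining measure."""
    body_pairs = {(h, t) for (h, r, t) in kg if r == body}
    head_pairs = {(h, t) for (h, r, t) in kg if r == head}
    if not body_pairs:
        return 0.0
    return len(body_pairs & head_pairs) / len(body_pairs)

# Toy KG: one of the two married pairs also lives together.
kg = [("a", "married_to", "b"), ("a", "lives_with", "b"),
      ("c", "married_to", "d"), ("e", "lives_with", "f")]
rule_confidence(kg, "married_to", "lives_with")   # -> 0.5
```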
arXiv Detail & Related papers (2023-09-04T11:38:02Z) - On the Aggregation of Rules for Knowledge Graph Completion [9.628032156001069]
Rule learning approaches for knowledge graph completion are efficient, interpretable and competitive to purely neural models.
We show that existing aggregation approaches can be expressed as marginal inference operations over the predicting rules.
We propose an efficient and overlooked baseline which combines the previous strategies and is competitive to computationally more expensive approaches.
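Two common strategies for aggregating the confidences of rules that predict the same triple can be sketched as follows; these are textbook aggregators given for illustration, not necessarily the baseline the paper proposes.

```python
def aggregate_max(confidences):
    """Max-aggregation: score a candidate by its single strongest firing rule."""
    return max(confidences)

def aggregate_noisy_or(confidences):
    """Noisy-or: probability that at least one firing rule is correct,
    treating the rules as independent predictors."""
    p = 1.0
    for c in confidences:
        p *= (1.0 - c)
    return 1.0 - p

rules = [0.9, 0.6, 0.3]    # confidences of rules predicting the same triple
aggregate_max(rules)        # -> 0.9
aggregate_noisy_or(rules)   # 1 - 0.1 * 0.4 * 0.7 -> 0.972
```

Max-aggregation ignores corroborating evidence from weaker rules, while noisy-or rewards it; combining the two views is the kind of trade-off an aggregation baseline must navigate.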
arXiv Detail & Related papers (2023-09-01T07:32:11Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
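A confidence-based deferral rule is simple to state: answer with the current model if its confidence clears a threshold, otherwise invoke the next model in the cascade. A minimal sketch, in which the stage models and thresholds are hypothetical stand-ins:

```python
def cascade_predict(x, models, thresholds):
    """Run a cheap-to-expensive cascade of models on input x.

    Each model returns (label, confidence); thresholds[i] is the minimum
    confidence needed to stop at stage i. The final model always answers.
    """
    for model, tau in zip(models[:-1], thresholds):
        label, conf = model(x)
        if conf >= tau:          # confident enough: stop early and save compute
            return label
    return models[-1](x)[0]

# Hypothetical stage models for illustration.
cheap  = lambda x: ("cat", 0.55)
strong = lambda x: ("dog", 0.95)
cascade_predict(None, [cheap, strong], [0.8])   # cheap model defers -> "dog"
```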
arXiv Detail & Related papers (2023-07-06T04:13:57Z) - Joint Learning of Label and Environment Causal Independence for Graph Out-of-Distribution Generalization [60.4169201192582]
We propose to incorporate label and environment causal independence (LECI) to fully make use of label and environment information.
LECI significantly outperforms prior methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-01T19:33:30Z) - Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z) - RulE: Knowledge Graph Reasoning with Rule Embedding [69.31451649090661]
We propose a principled framework called RulE (short for Rule Embedding) that leverages logical rules to enhance KG reasoning.
RulE learns rule embeddings from existing triplets and first-order rules by jointly representing entities, relations and logical rules in a unified embedding space.
Results on multiple benchmarks reveal that our model outperforms the majority of existing embedding-based and rule-based approaches.
arXiv Detail & Related papers (2022-10-24T06:47:13Z) - ESC-Rules: Explainable, Semantically Constrained Rule Sets [11.160515561004619]
We describe a novel approach to explainable prediction of a continuous variable based on learning fuzzy weighted rules.
Our model trains a set of weighted rules to maximise prediction accuracy and minimise an ontology-based 'semantic loss' function.
This system fuses quantitative sub-symbolic learning with symbolic learning and constraints based on domain knowledge.
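The combined objective can be sketched as a prediction loss plus a weighted semantic penalty. The function names and the form of the penalty below are illustrative assumptions, not the paper's exact formulation.

```python
def semantic_loss(rule_weights, violating):
    """Hypothetical ontology penalty: the total weight placed on rules that
    violate a domain constraint (names illustrative, not the paper's API)."""
    return sum(w for r, w in rule_weights.items() if r in violating)

def esc_objective(pred, target, rule_weights, violating, lam=0.1):
    """Squared prediction error plus a lambda-weighted semantic-loss term."""
    return (pred - target) ** 2 + lam * semantic_loss(rule_weights, violating)

weights = {"r1": 0.8, "r2": 0.3}
esc_objective(2.0, 1.5, weights, violating={"r2"})   # 0.25 + 0.1 * 0.3 -> 0.28
```

The `lam` knob is where the fusion happens: it trades raw predictive accuracy against consistency with the domain ontology.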
arXiv Detail & Related papers (2022-08-26T09:29:30Z) - Robust Reinforcement Learning with Distributional Risk-averse formulation [1.2891210250935146]
We approximate Robust Reinforcement Learning constrained with a $\Phi$-divergence using an approximate Risk-Averse formulation.
We show that the classical Reinforcement Learning formulation can be robustified using standard deviation penalization of the objective.
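Standard-deviation penalization of the objective can be sketched directly on empirical returns; this is a simplified surrogate for illustration, not the paper's derivation.

```python
import statistics

def risk_averse_objective(returns, lam=0.5):
    """Penalize the mean empirical return by its standard deviation:
    a simple risk-averse surrogate that prefers policies with both high
    and stable returns over high-variance ones."""
    return statistics.mean(returns) - lam * statistics.pstdev(returns)

risk_averse_objective([1.0, 1.0, 1.0])   # no variance, no penalty -> 1.0
risk_averse_objective([0.0, 1.0, 2.0])   # same mean, penalized for spread
```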
arXiv Detail & Related papers (2022-06-14T13:33:58Z) - Rule Generation for Classification: Scalability, Interpretability, and Fairness [0.0]
We propose a new rule-based optimization method for classification with constraints. We address interpretability and fairness by assigning cost coefficients to the rules and introducing additional constraints. The proposed method exhibits a good compromise between local interpretability and fairness on the one side, and accuracy on the other side.
arXiv Detail & Related papers (2021-04-21T20:31:28Z) - Disentangling Observed Causal Effects from Latent Confounders using Method of Moments [67.27068846108047]
We provide guarantees on identifiability and learnability under mild assumptions.
We develop efficient algorithms based on coupled tensor decomposition with linear constraints to obtain scalable and guaranteed solutions.
arXiv Detail & Related papers (2021-01-17T07:48:45Z) - Learning Causal Semantic Representation for Out-of-Distribution Prediction [125.38836464226092]
We propose a Causal Semantic Generative model (CSG) based on a causal reasoning so that the two factors are modeled separately.
We show that CSG can identify the semantic factor by fitting training data, and this semantic-identification guarantees the boundedness of OOD generalization error.
arXiv Detail & Related papers (2020-11-03T13:16:05Z) - A Hybrid Model for Learning Embeddings and Logical Rules Simultaneously from Knowledge Graphs [20.438750956142638]
We develop a hybrid model that learns both high-quality rules and embeddings simultaneously.
Our method uses a cross feedback paradigm wherein, an embedding model is used to guide the search of a rule mining system to mine rules and infer new facts.
arXiv Detail & Related papers (2020-09-22T20:29:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.