Neuro-Symbolic Recommendation Model based on Logic Query
- URL: http://arxiv.org/abs/2309.07594v1
- Date: Thu, 14 Sep 2023 10:54:48 GMT
- Title: Neuro-Symbolic Recommendation Model based on Logic Query
- Authors: Maonian Wu, Bang Chen, Shaojun Zhu, Bo Zheng, Wei Peng, Mingyi Zhang
- Abstract summary: We propose a neuro-symbolic recommendation model that transforms a user's historical interactions into a logic expression.
The logic expressions are then evaluated with the modular logic operations of the neural network.
Experiments on three well-known datasets verify that our method outperforms state-of-the-art shallow, deep, session-based, and reasoning models.
- Score: 16.809190067920387
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A recommendation system assists users in finding items that are relevant to
them. Existing recommendation models are primarily based on predicting
relationships between users and items and use complex matching models or
incorporate extensive external information to capture association patterns in
data. However, recommendation is not only a problem of inductive statistics over
data; it is also a cognitive task of making decisions by reasoning over knowledge
extracted from information. Hence, a logic system can naturally be incorporated
to perform the reasoning in a recommendation task. However, although
hard-rule approaches based on logic systems can provide powerful reasoning
ability, they struggle to cope with inconsistent and incomplete knowledge in
real-world tasks, especially for complex tasks such as recommendation.
Therefore, in this paper, we propose a neuro-symbolic recommendation model,
which transforms a user's historical interactions into a logic expression and
then casts the recommendation prediction as a query task over this logic
expression. The logic expressions are then evaluated with the modular logic
operations of the neural network. We also construct an implicit logic encoder
to reasonably reduce the complexity of the logic computation. Finally, a user's
interest items can be queried in the vector space based on the computation
results. Experiments on three well-known datasets verify that our method
outperforms state-of-the-art shallow, deep, session-based, and reasoning models.
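The abstract describes the pipeline (historical interactions → logic expression → modular neural logic operations → vector-space query) without implementation detail. Below is a minimal sketch of that pipeline, assuming a single neural AND module and cosine-similarity retrieval; all class names, dimensions, and the left-to-right composition order are illustrative assumptions, not the authors' implementation, and the implicit logic encoder that reduces the cost of composing long expressions is omitted.

```python
# Minimal sketch (assumptions, not the paper's code): a user's history items become
# embeddings, a neural AND module folds them into one logic-expression vector, and
# candidate items are ranked by similarity of that query vector to item embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralAND(nn.Module):
    """Modular logic operation: merges two expression vectors (hypothetical form)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, a, b):
        return self.net(torch.cat([a, b], dim=-1))

class LogicQueryRecommender(nn.Module):
    def __init__(self, num_items, dim=64):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.logic_and = NeuralAND(dim)

    def encode_history(self, history):
        """Fold a user's interaction history into one logic-expression vector."""
        vecs = self.item_emb(history)          # (history_len, dim)
        expr = vecs[0]
        for v in vecs[1:]:                     # left-to-right conjunction of events
            expr = self.logic_and(expr, v)
        return expr

    def score_items(self, history):
        """Query the item space: higher cosine similarity = stronger predicted interest."""
        query = self.encode_history(history)
        return F.cosine_similarity(query.unsqueeze(0), self.item_emb.weight, dim=-1)

# Usage: rank all items for a toy history of item ids [3, 17, 42].
model = LogicQueryRecommender(num_items=100)
scores = model.score_items(torch.tensor([3, 17, 42]))
top5 = torch.topk(scores, k=5).indices
```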
Related papers
- Efficiently Learning Probabilistic Logical Models by Cheaply Ranking Mined Rules [9.303501974597548]
We introduce precision and recall for logical rules and define their composition as rule utility.
We introduce SPECTRUM, a scalable framework for learning logical models from relational data.
arXiv Detail & Related papers (2024-09-24T16:54:12Z)
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KGs built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT brings substantial improvements (an average +5.5% MRR score) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- IID Relaxation by Logical Expressivity: A Research Agenda for Fitting Logics to Neurosymbolic Requirements [50.57072342894621]
We discuss the benefits of exploiting known data dependencies and distribution constraints for Neurosymbolic use cases.
This opens a new research agenda with general questions about Neurosymbolic background knowledge and the expressivity required of its logic.
arXiv Detail & Related papers (2024-04-30T12:09:53Z)
- Type-based Neural Link Prediction Adapter for Complex Query Answering [2.1098688291287475]
We propose TypE-based Neural Link Prediction Adapter (TENLPA), a novel model that constructs type-based entity-relation graphs.
In order to effectively combine type information with complex logical queries, an adaptive learning mechanism is introduced.
Experiments on three standard datasets show that the TENLPA model achieves state-of-the-art performance on complex query answering.
arXiv Detail & Related papers (2024-01-29T10:54:28Z)
- Neural-Symbolic Recommendation with Graph-Enhanced Information [7.841447116972524]
We build a neuro-symbolic recommendation model with both global implicit reasoning ability and local explicit logic reasoning ability.
We transform user behavior into propositional logic expressions to achieve recommendations from the perspective of cognitive reasoning.
arXiv Detail & Related papers (2023-07-11T06:29:31Z)
- Query Structure Modeling for Inductive Logical Reasoning Over Knowledge Graphs [67.043747188954]
We propose a structure-modeled textual encoding framework for inductive logical reasoning over KGs.
It encodes linearized query structures and entities using pre-trained language models to find answers.
We conduct experiments on two inductive logical reasoning datasets and three transductive datasets.
arXiv Detail & Related papers (2023-05-23T01:25:29Z)
- Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method significantly outperforms previous methods on the new dataset and also surpasses them on the existing dataset.
arXiv Detail & Related papers (2023-04-14T11:35:35Z)
- Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks [65.23508422635862]
We propose learning rules with the recently proposed Logical Neural Networks (LNN).
Compared to other approaches, LNNs offer a strong connection to classical Boolean logic.
Our experiments on standard benchmarking tasks confirm that LNN rules are highly interpretable.
arXiv Detail & Related papers (2021-12-06T19:38:30Z)
- Neural Logic Reasoning [47.622957656745356]
We propose the Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, and NOT as neural modules, and conducts propositional logical reasoning through the network for inference (see the sketch after this list).
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Neural Collaborative Reasoning [31.03627817834551]
We propose advancing Collaborative Filtering (CF) to Collaborative Reasoning (CR).
CR means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other.
We integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives.
arXiv Detail & Related papers (2020-05-16T23:29:31Z)
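Several entries above, the Neural Logic Reasoning (LINN) item in particular, describe learning AND, OR, and NOT as neural modules over a shared embedding space. The following is a minimal sketch of such modules, assuming simple MLP operators; the exact architectures and the logical regularizers LINN trains with are not given in these summaries, so everything below is illustrative rather than the published implementation.

```python
# Rough sketch of LINN-style neural logic modules (assumed architecture): AND/OR take
# two expression vectors, NOT takes one, and all map back into the same embedding
# space so expressions can be composed recursively.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(), nn.Linear(out_dim, out_dim))

class NeuralLogicModules(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.and_net = mlp(2 * dim, dim)
        self.or_net = mlp(2 * dim, dim)
        self.not_net = mlp(dim, dim)

    def AND(self, a, b):
        return self.and_net(torch.cat([a, b], dim=-1))

    def OR(self, a, b):
        return self.or_net(torch.cat([a, b], dim=-1))

    def NOT(self, a):
        return self.not_net(a)

# Example: evaluate the propositional expression (a AND b) OR (NOT c) on random
# 64-d "event" vectors. Training in the LINN style would additionally add logical
# regularizers, e.g. encouraging NOT(NOT(x)) to stay close to x.
ops = NeuralLogicModules(dim=64)
a, b, c = torch.randn(64), torch.randn(64), torch.randn(64)
expr = ops.OR(ops.AND(a, b), ops.NOT(c))
```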
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.