Neural Collaborative Reasoning
- URL: http://arxiv.org/abs/2005.08129v5
- Date: Mon, 3 May 2021 02:06:05 GMT
- Title: Neural Collaborative Reasoning
- Authors: Hanxiong Chen, Shaoyun Shi, Yunqi Li, Yongfeng Zhang
- Abstract summary: We propose to advance Collaborative Filtering (CF) to Collaborative Reasoning (CR).
CR means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other.
We integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives.
- Score: 31.03627817834551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Collaborative Filtering (CF) methods are mostly designed based on
the idea of matching, i.e., by learning user and item embeddings from data
using shallow or deep models, they try to capture the associative relevance
patterns in data, so that a user embedding can be matched with relevant item
embeddings using designed or learned similarity functions. However, as a
cognition rather than a perception intelligent task, recommendation requires
not only the ability of pattern recognition and matching from data, but also
the ability of cognitive reasoning in data. In this paper, we propose to
advance Collaborative Filtering (CF) to Collaborative Reasoning (CR), which
means that each user knows part of the reasoning space, and they collaborate
for reasoning in the space to estimate preferences for each other. Technically,
we propose a Neural Collaborative Reasoning (NCR) framework to bridge learning
and reasoning. Specifically, we integrate the power of representation learning
and logical reasoning, where representations capture similarity patterns in
data from perceptual perspectives, and logic facilitates cognitive reasoning
for informed decision making. An important challenge, however, is to bridge
differentiable neural networks and symbolic reasoning in a shared architecture
for optimization and inference. To solve the problem, we propose a modularized
reasoning architecture, which learns logical operations such as AND ($\wedge$),
OR ($\vee$) and NOT ($\neg$) as neural modules for implication reasoning
($\rightarrow$). In this way, logical expressions can be equivalently organized
as neural networks, so that logical reasoning and prediction can be conducted
in a continuous space. Experiments on real-world datasets verified the
advantages of our framework compared with shallow, deep, and reasoning
models.
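To make the modularized reasoning architecture concrete, below is a minimal PyTorch sketch of the kind of design the abstract describes: AND, OR, and NOT are learned as small neural modules over event embeddings, an implication a → b is evaluated as (NOT a) OR b, and the truth of the resulting expression vector is scored by similarity to an anchor vector for logical "true". All module shapes, names, and the cosine-based truth score are illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of neural logic modules in the spirit of NCR.
# Module shapes, names, and the cosine-based truth score are
# illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuralLogic(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # NOT maps one event vector to its negation vector.
        self.not_net = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # AND and OR each map a pair of event vectors to one vector.
        self.and_net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.or_net = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        # Fixed anchor vector representing logical "true".
        self.true_vec = nn.Parameter(torch.randn(dim), requires_grad=False)

    def NOT(self, a):
        return self.not_net(a)

    def AND(self, a, b):
        return self.and_net(torch.cat([a, b], dim=-1))

    def OR(self, a, b):
        return self.or_net(torch.cat([a, b], dim=-1))

    def IMPLIES(self, a, b):
        # a -> b is rewritten as (NOT a) OR b, so implication
        # reasoning reuses the learned NOT and OR modules.
        return self.OR(self.NOT(a), b)

    def truth(self, expr):
        # Score how close an expression vector is to "true";
        # higher similarity means the expression more likely holds.
        return F.cosine_similarity(expr, self.true_vec.expand_as(expr), dim=-1)


# Usage: does interacting with items x1 and x2 imply a preference for y?
logic = NeuralLogic(dim=64)
x1, x2, y = (torch.randn(1, 64) for _ in range(3))
history = logic.AND(x1, x2)                       # x1 AND x2
score = logic.truth(logic.IMPLIES(history, y))    # (x1 AND x2) -> y
print(score.item())
```

In training, one would push the truth score of observed (history → item) implications up and that of negatively sampled implications down; the paper additionally constrains the modules with logical regularizers so they behave like their logical counterparts, which this sketch omits.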
Related papers
- Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over KG upon large language models (LLMs)
We augment the arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT brings substantial improvements (an average +5.5% MRR gain) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
During fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- Neuro-Symbolic Recommendation Model based on Logic Query [16.809190067920387]
We propose a neuro-symbolic recommendation model, which transforms the user history interactions into a logic expression.
The logic expressions are then computed based on the modular logic operations of the neural network.
Experiments on three well-known datasets verified that our method outperforms state-of-the-art shallow, deep, session, and reasoning models.
arXiv Detail & Related papers (2023-09-14T10:54:48Z)
- Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension [80.99865844249106]
We propose a holistic graph network (HGN) which deals with context at both discourse level and word level, as the basis for logical reasoning.
Specifically, node-level and type-level relations, which can be interpreted as bridges in the reasoning process, are modeled by a hierarchical interaction mechanism.
arXiv Detail & Related papers (2023-06-21T07:34:27Z)
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN show clear performance improvements on 12 datasets.
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
- Neural Logic Reasoning [47.622957656745356]
We propose Logic-Integrated Neural Network (LINN) to integrate the power of deep learning and logic reasoning.
LINN learns basic logical operations such as AND, OR, NOT as neural modules, and conducts propositional logical reasoning through the network for inference.
Experiments show that LINN significantly outperforms state-of-the-art recommendation models in Top-K recommendation.
arXiv Detail & Related papers (2020-08-20T14:53:23Z)
- Logical Neural Networks [51.46602187496816]
We propose a novel framework that seamlessly provides key properties of both neural nets (learning) and symbolic logic (knowledge and reasoning).
Every neuron has a meaning as a component of a formula in a weighted real-valued logic, yielding a highly interpretable, disentangled representation.
Inference is omnidirectional rather than focused on predefined target variables, and corresponds to logical reasoning.
arXiv Detail & Related papers (2020-06-23T16:55:45Z)
- Relational Neural Machines [19.569025323453257]
This paper presents a novel framework that allows jointly training the parameters of the learners and of a First-Order Logic based reasoner.
A Relational Neural Machine is able to recover both classical learning results in the case of pure sub-symbolic learning, and Markov Logic Networks in the case of pure symbolic reasoning.
Proper algorithmic solutions are devised to make learning and inference tractable in large-scale problems.
arXiv Detail & Related papers (2020-02-06T10:53:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.