Knowledge-Enhanced Personalized Review Generation with Capsule Graph Neural Network
- URL: http://arxiv.org/abs/2010.01480v1
- Date: Sun, 4 Oct 2020 03:54:40 GMT
- Title: Knowledge-Enhanced Personalized Review Generation with Capsule Graph Neural Network
- Authors: Junyi Li, Siqing Li, Wayne Xin Zhao, Gaole He, Zhicheng Wei, Nicholas Jing Yuan and Ji-Rong Wen
- Abstract summary: We propose a knowledge-enhanced PRG model based on a capsule graph neural network (Caps-GNN).
Our generation process contains two major steps, namely aspect sequence generation and sentence generation.
The incorporated knowledge graph is able to enhance user preference at both aspect and word levels.
- Score: 81.81662828017517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized review generation (PRG) aims to automatically produce review
text that reflects user preference, which is a challenging natural language
generation task. Most previous studies do not explicitly model the factual
descriptions of products and thus tend to generate uninformative content. Moreover,
they mainly focus on word-level generation and cannot accurately reflect more
abstract user preferences across multiple aspects. To address these issues,
we propose a novel knowledge-enhanced PRG model based on a capsule graph neural
network (Caps-GNN). We first construct a heterogeneous knowledge graph (HKG)
to exploit rich item attributes, and adopt Caps-GNN to learn graph capsules
that encode the underlying characteristics of the HKG. Our generation process
consists of two major steps, namely aspect sequence generation and sentence
generation. First, based on the graph capsules, we adaptively learn aspect capsules
for inferring the aspect sequence. Then, conditioned on each inferred aspect
label, we design a graph-based copy mechanism that generates sentences by
incorporating related entities or words from the HKG. To our knowledge, we are the
first to utilize a knowledge graph for the PRG task. The incorporated KG
information enhances user preference modeling at both the aspect and word levels.
Extensive experiments on three real-world datasets demonstrate the
effectiveness of our model on the PRG task.
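To make the aspect-capsule step concrete, the sketch below implements standard dynamic routing from graph capsules to aspect capsules in PyTorch. All dimensions, names, and the routing-iteration count are illustrative assumptions, not the paper's actual architecture:

```python
# Dynamic-routing sketch: graph capsules -> aspect capsules.
# All sizes and names are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

def squash(s, dim=-1, eps=1e-8):
    """Capsule non-linearity: keeps direction, bounds length below 1."""
    norm2 = (s * s).sum(dim=dim, keepdim=True)
    return (norm2 / (1.0 + norm2)) * s / torch.sqrt(norm2 + eps)

class AspectRouting(nn.Module):
    """Routes graph capsules to aspect capsules by iterative agreement."""
    def __init__(self, n_graph=8, d_graph=16, n_aspect=5, d_aspect=16, iters=3):
        super().__init__()
        # One linear "prediction" map per (graph capsule, aspect capsule) pair.
        self.W = nn.Parameter(0.01 * torch.randn(n_graph, n_aspect, d_graph, d_aspect))
        self.iters = iters

    def forward(self, graph_caps):  # graph_caps: (batch, n_graph, d_graph)
        # u_hat[b, i, j]: prediction of aspect capsule j from graph capsule i.
        u_hat = torch.einsum('big,ijga->bija', graph_caps, self.W)
        b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits
        for _ in range(self.iters):
            c = F.softmax(b, dim=2)                       # coupling coefficients
            v = squash((c.unsqueeze(-1) * u_hat).sum(1))  # (batch, n_aspect, d_aspect)
            b = b + (u_hat * v.unsqueeze(1)).sum(-1)      # reward agreement
        return v

router = AspectRouting()
aspect_caps = router(torch.randn(2, 8, 16))  # toy batch of graph capsules
print(aspect_caps.shape)                     # torch.Size([2, 5, 16])
```

Capsule lengths can then be read as aspect salience when inferring the aspect sequence.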
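The graph-based copy mechanism can similarly be pictured as a pointer-generator-style mixture over the vocabulary and HKG node tokens. This is a generic realization of a copy mechanism, not necessarily the paper's exact scoring functions:

```python
# Pointer-generator-style copy sketch: mix vocabulary generation with
# copying entities/words from HKG nodes. A generic realization; the
# paper's exact formulation may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphCopyStep(nn.Module):
    """One decoding step mixing vocabulary generation with graph copying."""
    def __init__(self, hidden=32, node_dim=32, vocab=1000):
        super().__init__()
        self.vocab_proj = nn.Linear(hidden, vocab)   # generate from vocabulary
        self.node_key = nn.Linear(node_dim, hidden)  # score HKG nodes
        self.gate = nn.Linear(hidden, 1)             # p_gen: generate vs. copy

    def forward(self, dec_state, node_states, node_token_ids):
        # dec_state: (B, H); node_states: (B, N, D); node_token_ids: (B, N)
        p_vocab = F.softmax(self.vocab_proj(dec_state), dim=-1)      # (B, V)
        scores = torch.einsum('bh,bnh->bn', dec_state, self.node_key(node_states))
        p_nodes = F.softmax(scores, dim=-1)                          # (B, N)
        p_gen = torch.sigmoid(self.gate(dec_state))                  # (B, 1)
        # Scatter node copy probability onto each node's surface token id.
        p_copy = torch.zeros_like(p_vocab).scatter_add_(1, node_token_ids, p_nodes)
        return p_gen * p_vocab + (1.0 - p_gen) * p_copy              # (B, V)

step = GraphCopyStep()
dist = step(torch.randn(2, 32), torch.randn(2, 6, 32), torch.randint(0, 1000, (2, 6)))
print(dist.shape, dist.sum(-1))  # (2, 1000); each row sums to ~1
```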
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs [50.833130854272774]
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT adapts easily to both node classification and link prediction, with promising empirical results on various datasets (a random-walk sampling sketch appears after this list).
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Narrating Causal Graphs with Large Language Models [1.437446768735628]
This work explores the capability of large pretrained language models to generate text from causal graphs.
The causal reasoning encoded in these graphs can support applications as diverse as healthcare or marketing.
Results suggest that users of generative AI can deploy such applications faster, since models trained on only a few examples achieve performance similar to fully trained ones.
arXiv Detail & Related papers (2024-03-11T19:19:59Z)
- G-Retriever: Retrieval-Augmented Generation for Textual Graph Understanding and Question Answering [61.93058781222079]
We develop a flexible question-answering framework targeting real-world textual graphs.
We introduce the first retrieval-augmented generation (RAG) approach for general textual graphs.
G-Retriever performs RAG over a graph by formulating the task as a Prize-Collecting Steiner Tree optimization problem (a simplified greedy sketch appears after this list).
arXiv Detail & Related papers (2024-02-12T13:13:04Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM (an embedding-extraction sketch appears after this list).
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning [51.90524745663737]
A key innovation is the use of LLM-generated explanations as features to boost GNN performance on downstream tasks (a feature-augmentation sketch appears after this list).
Our method achieves state-of-the-art results on well-established TAG datasets.
Our method significantly speeds up training, achieving a 2.88 times improvement over the closest baseline on ogbn-arxiv.
arXiv Detail & Related papers (2023-05-31T03:18:03Z)
- A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications [38.57023440288189]
We provide the first comprehensive survey of Pretrained Graph Models (PGMs).
We first present the limitations of graph representation learning, which motivate graph pre-training.
Next, we present the applications of PGMs in social recommendation and drug discovery.
arXiv Detail & Related papers (2022-02-16T07:00:52Z)
- Toward Subgraph-Guided Knowledge Graph Question Generation with Graph Neural Networks [53.58077686470096]
Knowledge graph (KG) question generation (QG) aims to generate natural language questions from KGs and target answers.
In this work, we focus on a more realistic setting where we aim to generate questions from a KG subgraph and target answers.
arXiv Detail & Related papers (2020-04-13T15:43:22Z)
- Generative Adversarial Zero-shot Learning via Knowledge Graphs [32.42721467499858]
We introduce a new generative ZSL method named KG-GAN by incorporating rich semantics in a knowledge graph (KG) into GANs.
Specifically, we build upon Graph Neural Networks and encode KG from two views: class view and attribute view.
With well-learned semantic embeddings for each node (each representing a visual category), we leverage GANs to synthesize compelling visual features for unseen classes (a conditional-generator sketch appears after this list).
arXiv Detail & Related papers (2020-04-07T03:55:26Z)
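For GSPT above, a minimal sketch of random-walk context sampling over a toy adjacency dict; the walk length and restart-free policy are illustrative assumptions, and the paper's Transformer input pipeline is elided:

```python
# Random-walk context sampling sketch, GSPT-style: each walk becomes a
# node-context sequence for Transformer pretraining. Toy example only.
import random

def random_walk(adj, start, length, rng=random):
    """Return a walk of up to `length` nodes from `start` over adjacency dict `adj`."""
    walk = [start]
    while len(walk) < length:
        nbrs = adj.get(walk[-1], [])
        if not nbrs:           # dead end: stop the walk early
            break
        walk.append(rng.choice(nbrs))
    return walk

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
contexts = [random_walk(adj, n, length=5) for n in adj]  # one context per node
print(contexts)  # each walk is fed to the Transformer as a token-like sequence
```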
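For G-Retriever above, a deliberately simplified greedy stand-in for Prize-Collecting Steiner Tree retrieval: grow a connected subgraph from the highest-prize node while a neighbor's prize outweighs the connecting edge cost. This is a toy heuristic for illustration, not the solver the paper uses:

```python
# Greedy stand-in for PCST-style subgraph retrieval. Prizes would come
# from query-node relevance scores; this heuristic only approximates
# the actual optimization G-Retriever performs.
def greedy_pcst(edges, prizes):
    """edges: {(u, v): cost}; prizes: {node: prize}. Returns a connected node set."""
    adj = {}
    for (u, v), c in edges.items():
        adj.setdefault(u, []).append((v, c))
        adj.setdefault(v, []).append((u, c))
    tree = {max(prizes, key=prizes.get)}          # seed with the best-prized node
    while True:
        best, best_gain = None, 0.0
        for u in tree:
            for v, c in adj.get(u, []):
                gain = prizes.get(v, 0.0) - c     # marginal prize minus edge cost
                if v not in tree and gain > best_gain:
                    best, best_gain = v, gain
        if best is None:                          # no profitable expansion left
            return tree
        tree.add(best)

edges = {(0, 1): 0.2, (1, 2): 0.9, (1, 3): 0.1, (3, 4): 0.5}
prizes = {0: 1.0, 1: 0.3, 2: 0.2, 3: 0.8, 4: 0.1}
print(greedy_pcst(edges, prizes))                 # {0, 1, 3}
```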
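For SimTeG above, a sketch of its second step: embedding each node's text with a language model's last hidden states. The checkpoint name and mean pooling are assumptions, and the supervised PEFT fine-tuning step is elided:

```python
# Node-embedding extraction sketch (SimTeG's second step). In the real
# recipe the LM would first be PEFT-fine-tuned on the downstream task.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
lm = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
lm.eval()

@torch.no_grad()
def node_embeddings(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = lm(**batch).last_hidden_state                # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # ignore padding tokens
    return (hidden * mask).sum(1) / mask.sum(1)           # mean-pooled (B, H)

emb = node_embeddings(["Paper on capsule GNNs", "Survey of graph pretraining"])
print(emb.shape)  # use these vectors as node features for any downstream GNN
```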
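For the explanations-as-features idea above, a minimal sketch: embed each node's LLM-generated explanation with any text encoder, concatenate it with the original node features, and feed the result to a GNN. All shapes and the GCN layer choice are illustrative assumptions:

```python
# Feature-augmentation sketch: explanation embeddings as extra node
# features for a GNN. Explanation texts and their encoder are assumed
# to exist upstream; shapes here are toy values.
import torch
from torch_geometric.nn import GCNConv  # any GNN layer would do

def augment(node_feats, expl_embs):
    """Concatenate original node features with embedded explanation text."""
    return torch.cat([node_feats, expl_embs], dim=-1)

x = augment(torch.randn(4, 8), torch.randn(4, 16))        # (4, 24) enriched input
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])   # toy 4-node cycle
out = GCNConv(24, 32)(x, edge_index)                      # GNN over enriched features
print(out.shape)                                          # torch.Size([4, 32])
```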
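For KG-GAN above, a sketch of the conditional generator that maps noise plus a KG-derived semantic embedding to a synthetic visual feature. Network sizes are arbitrary and the adversarial training loop is elided:

```python
# Conditional-generator sketch for KG-GAN-style feature synthesis: the
# semantic embedding would come from the paper's GNN encoder over the KG.
import torch
import torch.nn as nn

class FeatureGenerator(nn.Module):
    """Maps (noise, class semantic embedding) -> synthetic visual feature."""
    def __init__(self, noise_dim=16, sem_dim=32, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + sem_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim))

    def forward(self, z, sem):
        return self.net(torch.cat([z, sem], dim=-1))

gen = FeatureGenerator()
fake = gen(torch.randn(4, 16), torch.randn(4, 32))  # toy noise + semantic embeddings
print(fake.shape)  # (4, 64); trained adversarially against real seen-class features
```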