PKGM: A Pre-trained Knowledge Graph Model for E-commerce Application
- URL: http://arxiv.org/abs/2203.00964v1
- Date: Wed, 2 Mar 2022 09:17:20 GMT
- Title: PKGM: A Pre-trained Knowledge Graph Model for E-commerce Application
- Authors: Wen Zhang, Chi-Man Wong, Ganqinag Ye, Bo Wen, Hongting Zhou, Wei
Zhang, Huajun Chen
- Abstract summary: On the online shopping platform Taobao, we built a billion-scale e-commerce product knowledge graph.
It organizes data uniformly and provides item knowledge services for various tasks such as item recommendation.
We propose a Pre-trained Knowledge Graph Model (PKGM) for the billion-scale product knowledge graph.
- Score: 22.3129874858367
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, knowledge graphs have been widely applied as a uniform way
to organize data and have enhanced many tasks requiring knowledge. On the online
shopping platform Taobao, we built a billion-scale e-commerce product knowledge
graph. It organizes data uniformly and provides item knowledge services for
various tasks such as item recommendation. Usually, such knowledge services are
provided through triple data, an implementation that involves (1) tedious
data selection work on the product knowledge graph and (2) task-model design
work to infuse the knowledge in those triples. More importantly, the product
knowledge graph is far from complete, resulting in error propagation to
knowledge-enhanced tasks. To avoid these problems, we propose a Pre-trained
Knowledge Graph Model (PKGM) for the billion-scale product knowledge graph. On
the one hand, it can provide item knowledge services in a uniform way, via
service vectors, for embedding-based and item-knowledge-related task models
without accessing triple data. On the other hand, its service is based on an
implicitly completed product knowledge graph, overcoming the common
incompleteness issue. We also propose two general ways to integrate the service
vectors from PKGM into downstream task models. We test PKGM on five
knowledge-related tasks: item classification, item resolution, item
recommendation, scene detection, and sequential recommendation. Experimental
results show that PKGM introduces significant performance gains on these tasks,
illustrating the usefulness of service vectors from PKGM.
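The abstract does not spell out the two integration strategies, but for an embedding-based task model one common pattern is to concatenate a queried service vector with the task model's own item embedding before the prediction head. The sketch below is illustrative only; all names, dimensions, and the concatenation strategy are assumptions, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

EMB_DIM = 64     # hypothetical task-model item-embedding size
SVC_DIM = 32     # hypothetical PKGM service-vector size
N_CLASSES = 5    # hypothetical number of item categories

def integrate_by_concat(item_emb, service_vec, weight, bias):
    """Fuse the task model's item embedding with a PKGM service vector
    by concatenation, then apply a linear prediction head (a sketch)."""
    fused = np.concatenate([item_emb, service_vec])  # (EMB_DIM + SVC_DIM,)
    return fused @ weight + bias                     # (N_CLASSES,)

item_emb = rng.standard_normal(EMB_DIM)
service_vec = rng.standard_normal(SVC_DIM)  # stands in for a PKGM query result
weight = rng.standard_normal((EMB_DIM + SVC_DIM, N_CLASSES))
bias = np.zeros(N_CLASSES)

logits = integrate_by_concat(item_emb, service_vec, weight, bias)
print(logits.shape)  # (5,)
```

The key point the abstract emphasizes survives in the sketch: the downstream model consumes only a fixed-size vector from PKGM and never touches triple data directly.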
Related papers
- Hierarchical Knowledge Graph Construction from Images for Scalable E-Commerce [17.97354500453661]
We propose a novel method for constructing structured product knowledge graphs from raw product images.
The method cooperatively leverages recent advances in vision-language models (VLMs) and large language models (LLMs).
arXiv Detail & Related papers (2024-10-28T17:34:05Z) - GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph [63.81641578763094]
adapter-style efficient transfer learning (ETL) has shown excellent performance in the tuning of vision-language models (VLMs)
We propose an effective adapter-style tuning strategy, dubbed GraphAdapter, which performs the textual adapter by explicitly modeling the dual-modality structure knowledge.
In particular, the dual knowledge graph is established with two sub-graphs, i.e., a textual knowledge sub-graph, and a visual knowledge sub-graph, where the nodes and edges represent the semantics/classes and their correlations in two modalities, respectively.
arXiv Detail & Related papers (2023-09-24T12:56:40Z) - KGrEaT: A Framework to Evaluate Knowledge Graphs via Downstream Tasks [1.8722948221596285]
KGrEaT is a framework to estimate the quality of knowledge graphs via actual downstream tasks like classification, clustering, or recommendation.
The framework takes a knowledge graph as input, automatically maps it to the datasets to be evaluated on, and computes performance metrics for the defined tasks.
arXiv Detail & Related papers (2023-08-21T07:43:10Z) - Towards Loosely-Coupling Knowledge Graph Embeddings and Ontology-based
Reasoning [15.703028753526022]
We propose to loosely couple the data-driven power of knowledge graph embeddings with domain-specific reasoning stemming from experts or entailment regimes (e.g., OWL2).
Our initial results show that we enhance the MRR accuracy of vanilla knowledge graph embeddings by up to 3x and outperform hybrid solutions that combine knowledge graph embeddings with rule mining and reasoning by up to 3.5x MRR.
arXiv Detail & Related papers (2022-02-07T14:01:49Z) - A Survey on Visual Transfer Learning using Knowledge Graphs [0.8701566919381223]
This survey focuses on visual transfer learning approaches using knowledge graphs (KGs)
KGs can represent auxiliary knowledge either in an underlying graph-structured schema or in a vector-based knowledge graph embedding.
We provide a broad overview of knowledge graph embedding methods and describe several joint training objectives suitable to combine them with high dimensional visual embeddings.
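The survey's notion of a joint training objective combining a knowledge graph embedding with a high-dimensional visual embedding can be sketched as an alignment term added to the task loss. The projection and loss below are a hypothetical minimal example, not any specific method from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

KG_DIM, VIS_DIM = 50, 128  # hypothetical embedding sizes

def alignment_loss(kg_emb, vis_emb, proj):
    """Project the KG embedding into the visual space and penalize the
    mean squared distance -- one simple joint-training term (illustrative)."""
    projected = kg_emb @ proj  # (VIS_DIM,)
    return float(np.mean((projected - vis_emb) ** 2))

kg_emb = rng.standard_normal(KG_DIM)    # e.g. a class embedding from a KG
vis_emb = rng.standard_normal(VIS_DIM)  # e.g. a CNN image feature
proj = rng.standard_normal((KG_DIM, VIS_DIM)) * 0.1

loss = alignment_loss(kg_emb, vis_emb, proj)
```

In a full pipeline this term would be weighted and added to the visual task loss, so gradients pull the two embedding spaces toward each other during training.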
arXiv Detail & Related papers (2022-01-27T20:19:55Z) - Conditional Attention Networks for Distilling Knowledge Graphs in
Recommendation [74.14009444678031]
We propose Knowledge-aware Conditional Attention Networks (KCAN) to incorporate knowledge graph into a recommender system.
We use a knowledge-aware attention propagation manner to obtain the node representation first, which captures the global semantic similarity on the user-item network and the knowledge graph.
Then, by applying a conditional attention aggregation on the subgraph, we refine the knowledge graph to obtain target-specific node representations.
arXiv Detail & Related papers (2021-11-03T09:40:43Z) - Billion-scale Pre-trained E-commerce Product Knowledge Graph Model [13.74839302948699]
We propose a Pre-trained Knowledge Graph Model (PKGM) for the e-commerce product knowledge graph.
PKGM provides item knowledge services in a uniform way for embedding-based models without accessing triple data in the knowledge graph.
We test PKGM in three knowledge-related tasks including item classification, same item identification, and recommendation.
arXiv Detail & Related papers (2021-05-02T04:28:22Z) - Reasoning over Vision and Language: Exploring the Benefits of
Supplemental Knowledge [59.87823082513752]
This paper investigates the injection of knowledge from general-purpose knowledge bases (KBs) into vision-and-language transformers.
We empirically study the relevance of various KBs to multiple tasks and benchmarks.
The technique is model-agnostic and can expand the applicability of any vision-and-language transformer with minimal computational overhead.
arXiv Detail & Related papers (2021-01-15T08:37:55Z) - Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
Experiments on text classification show promising results.
We import the knowledge from multiple models to the knowledge base, from which the fused knowledge is exported back to a single model.
arXiv Detail & Related papers (2020-12-25T12:27:44Z) - Pre-training Graph Transformer with Multimodal Side Information for
Recommendation [82.4194024706817]
We propose a pre-training strategy to learn item representations by considering both item side information and their relationships.
We develop a novel sampling algorithm named MCNSampling to select contextual neighbors for each item.
The proposed Pre-trained Multimodal Graph Transformer (PMGT) learns item representations with two objectives: 1) graph structure reconstruction, and 2) masked node feature reconstruction.
arXiv Detail & Related papers (2020-10-23T10:30:24Z) - ENT-DESC: Entity Description Generation by Exploring Knowledge Graph [53.03778194567752]
In practice, the input knowledge could be more than enough, since the output description may only cover the most significant knowledge.
We introduce a large-scale and challenging dataset to facilitate the study of such a practical scenario in KG-to-text.
We propose a multi-graph structure that is able to represent the original graph information more comprehensively.
arXiv Detail & Related papers (2020-04-30T14:16:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.