Knowledge Graph-Augmented Korean Generative Commonsense Reasoning
- URL: http://arxiv.org/abs/2306.14470v1
- Date: Mon, 26 Jun 2023 07:23:47 GMT
- Title: Knowledge Graph-Augmented Korean Generative Commonsense Reasoning
- Authors: Dahyun Jung, Jaehyung Seo, Jaewook Lee, Chanjun Park, Heuiseok Lim
- Abstract summary: We propose a method to utilize the Korean knowledge graph data for text generation.
Our experimental results show that the proposed method can enhance the efficiency of Korean commonsense inference.
- Score: 5.951529604050278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative commonsense reasoning refers to the task of generating acceptable
and logical assumptions about everyday situations based on commonsense
understanding. By utilizing an existing dataset such as Korean CommonGen,
language generation models can learn commonsense reasoning specific to the
Korean language. However, language models often fail to consider the
relationships between concepts and the deep knowledge inherent to concepts. To
address these limitations, we propose a method to utilize the Korean knowledge
graph data for text generation. Our experimental results show that the proposed
method can enhance the efficiency of Korean commonsense inference, thereby
underlining the significance of employing supplementary data.
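The core idea of the abstract can be sketched as a preprocessing step: each input concept is expanded with related triples from a knowledge graph, and the verbalized triples are appended to the generation model's input. The toy graph, the `<knowledge>` separator, and the verbalization format below are illustrative assumptions, not the paper's actual data or prompt format.

```python
# Toy knowledge graph: concept -> list of (relation, object) triples.
# In the paper's setting this would be a Korean knowledge graph; English
# placeholders are used here purely for illustration.
TOY_KG = {
    "river": [("UsedFor", "fishing"), ("HasProperty", "flowing")],
    "boat": [("AtLocation", "river"), ("UsedFor", "travel")],
}

def augment_concepts(concepts, kg, max_triples_per_concept=2):
    """Verbalize KG triples for each concept and append them to the input.

    The returned string would then be fed to a sequence-to-sequence
    generation model in place of the bare concept set.
    """
    facts = []
    for concept in concepts:
        for relation, obj in kg.get(concept, [])[:max_triples_per_concept]:
            facts.append(f"{concept} {relation} {obj}")
    return " ".join(concepts) + " <knowledge> " + " ; ".join(facts)

print(augment_concepts(["river", "boat"], TOY_KG))
# → river boat <knowledge> river UsedFor fishing ; river HasProperty flowing ; boat AtLocation river ; boat UsedFor travel
```

The generation model then sees both the concepts and explicit relational facts, which is one way the relationships between concepts mentioned in the abstract can be surfaced to the model.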
Related papers
- Does Incomplete Syntax Influence Korean Language Model? Focusing on Word Order and Case Markers [7.275938266030414]
Syntactic elements, such as word order and case markers, are fundamental in natural language processing.
This study explores whether Korean language models can accurately capture this flexibility.
arXiv Detail & Related papers (2024-07-12T11:33:41Z)
- HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models [0.0]
We introduce the HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth.
The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension.
arXiv Detail & Related papers (2023-09-06T04:38:16Z)
- Commonsense Knowledge Transfer for Pre-trained Language Models [83.01121484432801]
We introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model.
It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model.
It then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction.
arXiv Detail & Related papers (2023-06-04T15:44:51Z)
- Deriving dynamical systems for language based on the Tolerance Principle [91.3755431537592]
I derive explicit dynamical systems for language within an acquisition-driven framework.
I consider different theoretical parameters such as population size (finite vs. infinite) and the number of previous generations that provide learners with data.
arXiv Detail & Related papers (2022-09-09T11:49:55Z)
- Commonsense Knowledge-Augmented Pretrained Language Models for Causal Reasoning Classification [9.313899406300644]
We verbalize triples in ATOMIC2020, a wide-coverage commonsense reasoning knowledge graph, into natural language text.
We evaluate the resulting model on answering commonsense reasoning questions.
arXiv Detail & Related papers (2021-12-16T04:38:40Z)
- Automatic Knowledge Augmentation for Generative Commonsense Reasoning [1.1374578778690623]
Generative commonsense reasoning is the capability of a language model to generate a sentence with a given concept-set that is based on commonsense knowledge.
We propose a data-centric method that uses automatic knowledge augmentation to extend commonsense knowledge using a machine knowledge generator.
arXiv Detail & Related papers (2021-10-30T06:53:48Z)
- Generated Knowledge Prompting for Commonsense Reasoning [53.88983683513114]
We propose generating knowledge statements directly from a language model with a generic prompt format.
This approach improves performance of both off-the-shelf and finetuned language models on four commonsense reasoning tasks.
Notably, we find that a model's predictions can improve when using its own generated knowledge.
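The prompting loop this summary describes can be sketched as: sample knowledge statements from a language model with a generic prompt, answer the question once per statement, and keep the highest-confidence answer. The `StubLM` class below stands in for a real language model; its outputs and scores are fabricated for illustration only.

```python
def generated_knowledge_answer(question, lm, num_statements=3):
    """Answer a question via generated-knowledge prompting (sketch)."""
    # Step 1: elicit knowledge statements with a generic prompt.
    statements = [
        lm.generate(f"Generate some knowledge about: {question}")
        for _ in range(num_statements)
    ]
    # Step 2: answer once per statement; keep the best-scored answer.
    answer, _confidence = max(
        (lm.answer(f"{statement} {question}") for statement in statements),
        key=lambda pair: pair[1],  # pair is (answer, confidence)
    )
    return answer

class StubLM:
    """Stand-in for a real LM; returns canned strings and scores."""
    def generate(self, prompt):
        return "Birds can fly because they have wings."
    def answer(self, prompt):
        return ("yes", 0.9)

print(generated_knowledge_answer("Can birds fly?", StubLM()))
# → yes
```

With a real model, the knowledge statements differ per sample and the confidence scores come from the model's output probabilities, which is what allows the model's own generated knowledge to improve its predictions.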
arXiv Detail & Related papers (2021-10-15T21:58:03Z)
- GATE: Graph Attention Transformer Encoder for Cross-lingual Relation and Event Extraction [107.8262586956778]
We introduce graph convolutional networks (GCNs) with universal dependency parses to learn language-agnostic sentence representations.
GCNs struggle to model words with long-range dependencies or words that are not directly connected in the dependency tree.
We propose to utilize the self-attention mechanism to learn the dependencies between words with different syntactic distances.
arXiv Detail & Related papers (2020-10-06T20:30:35Z)
- KG-BART: Knowledge Graph-Augmented BART for Generative Commonsense Reasoning [78.81080813406177]
We propose KG-BART, a novel knowledge graph-augmented pre-trained language generation model.
KG-BART encompasses the complex relations of concepts through the knowledge graph and produces more logical and natural sentences as output.
arXiv Detail & Related papers (2020-09-26T19:57:49Z)
- Language Generation with Multi-Hop Reasoning on Commonsense Knowledge Graph [124.45799297285083]
We argue that exploiting both the structural and semantic information of the knowledge graph facilitates commonsense-aware text generation.
We propose Generation with Multi-Hop Reasoning Flow (GRF) that enables pre-trained models with dynamic multi-hop reasoning on multi-relational paths extracted from the external commonsense knowledge graph.
arXiv Detail & Related papers (2020-09-24T13:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.