MICO: A Multi-alternative Contrastive Learning Framework for Commonsense
Knowledge Representation
- URL: http://arxiv.org/abs/2210.07570v1
- Date: Fri, 14 Oct 2022 06:51:21 GMT
- Title: MICO: A Multi-alternative Contrastive Learning Framework for Commonsense
Knowledge Representation
- Authors: Ying Su, Zihao Wang, Tianqing Fang, Hongming Zhang, Yangqiu Song, Tong
Zhang
- Abstract summary: MICO is a multi-alternative contrastive learning framework on COmmonsense knowledge graphs.
It generates the commonsense knowledge representation by contextual interaction between entity nodes.
It can benefit the following two tasks by simply comparing the distance score between the representations.
- Score: 52.238466443561705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Commonsense reasoning tasks such as commonsense knowledge graph completion
and commonsense question answering require powerful representation learning. In
this paper, we propose to learn commonsense knowledge representation by MICO, a
Multi-alternative contrastive learning framework on COmmonsense knowledge graphs
(MICO). MICO generates the commonsense knowledge representation by contextual
interaction between entity nodes and relations with multi-alternative
contrastive learning. In MICO, the head and tail entities in an $(h,r,t)$
knowledge triple are converted to two relation-aware sequence pairs (a premise
and an alternative) in the form of natural language. Semantic representations
generated by MICO can benefit the following two tasks by simply comparing the
distance score between the representations: 1) zero-shot commonsense question
answering task; 2) inductive commonsense knowledge graph completion task.
Extensive experiments show the effectiveness of our method.
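The abstract's scoring idea, verbalizing an $(h,r,t)$ triple into a premise/alternative sentence pair and ranking candidates by the distance between their representations, can be sketched roughly as follows. The relation templates and the bag-of-words encoder below are illustrative placeholders, not MICO's actual prompts or model (MICO trains a contrastive sentence encoder; a real implementation would substitute one here):

```python
import math
from collections import Counter

# Hypothetical verbalization templates for two commonsense relations.
# MICO's actual relation prompts may differ -- these are assumptions.
RELATION_TEMPLATES = {
    "xWant": ("{h}.", "As a result, PersonX wants {t}."),
    "Causes": ("{h}.", "This causes {t}."),
}

def triple_to_pair(h, r, t):
    """Convert an (h, r, t) triple into a (premise, alternative) sentence pair."""
    prem_tpl, alt_tpl = RELATION_TEMPLATES[r]
    return prem_tpl.format(h=h), alt_tpl.format(h=h, t=t)

def embed(text):
    """Placeholder encoder: a sparse bag-of-words count vector.
    A trained contrastive sentence encoder would be used in practice."""
    return Counter(text.lower().split())

def cosine_distance(u, v):
    """1 - cosine similarity over sparse count vectors (lower = closer)."""
    dot = sum(u[w] * v[w] for w in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return 1.0 - dot / (nu * nv) if nu and nv else 1.0

# Rank candidate tails for a query triple by premise-alternative distance.
premise, alt_a = triple_to_pair("PersonX is hungry", "xWant", "to eat food")
_, alt_b = triple_to_pair("PersonX is hungry", "xWant", "to buy a car")

p_vec = embed(premise)
score_a = cosine_distance(p_vec, embed(alt_a))
score_b = cosine_distance(p_vec, embed(alt_b))
```

With a trained encoder, the same comparison supports both downstream tasks in the abstract: choosing among answer candidates in zero-shot QA, and scoring unseen triples in inductive KG completion.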
Related papers
- Visual Commonsense based Heterogeneous Graph Contrastive Learning [79.22206720896664]
We propose a heterogeneous graph contrastive learning method to better finish the visual reasoning task.
Our method is designed as a plug-and-play way, so that it can be quickly and easily combined with a wide range of representative methods.
arXiv Detail & Related papers (2023-11-11T12:01:18Z)
- Adversarial Transformer Language Models for Contextual Commonsense Inference [14.12019824666882]
Contextualized or discourse aware commonsense inference is the task of generating coherent commonsense assertions.
Some problems with the task are: lack of controllability for topics of the inferred facts; lack of commonsense knowledge during training.
We develop techniques to address the aforementioned problems in the task.
arXiv Detail & Related papers (2023-02-10T18:21:13Z)
- CIKQA: Learning Commonsense Inference with a Unified Knowledge-in-the-loop QA Paradigm [120.98789964518562]
We argue that due to the large scale of commonsense knowledge, it is infeasible to annotate a large enough training set for each task to cover all commonsense for learning.
We focus on investigating models' commonsense inference capabilities from two perspectives.
We name the benchmark as Commonsense Inference with Knowledge-in-the-loop Question Answering (CIKQA)
arXiv Detail & Related papers (2022-10-12T14:32:39Z)
- MuKEA: Multimodal Knowledge Extraction and Accumulation for Knowledge-based Visual Question Answering [23.628740943735167]
We propose MuKEA to represent multimodal knowledge by an explicit triplet to correlate visual objects and fact answers with implicit relations.
By adopting a pre-training and fine-tuning learning strategy, both basic and domain-specific multimodal knowledge are progressively accumulated for answer prediction.
arXiv Detail & Related papers (2022-03-17T07:42:14Z)
- One-shot Scene Graph Generation [130.57405850346836]
We propose Multiple Structured Knowledge (Relational Knowledge and Commonsense Knowledge) for the one-shot scene graph generation task.
Our method significantly outperforms existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-02-22T11:32:59Z)
- GreaseLM: Graph REASoning Enhanced Language Models for Question Answering [159.9645181522436]
GreaseLM is a new model that fuses encoded representations from pretrained LMs and graph neural networks over multiple layers of modality interaction operations.
We show that GreaseLM can more reliably answer questions that require reasoning over both situational constraints and structured knowledge, even outperforming models 8x larger.
arXiv Detail & Related papers (2022-01-21T19:00:05Z)
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Commonsense Knowledge in Word Associations and ConceptNet [37.751909219863585]
This paper presents an in-depth comparison of two large-scale resources of general knowledge: ConceptNet and SWOW.
We examine the structure, overlap and differences between the two graphs, as well as the extent to which they encode situational commonsense knowledge.
arXiv Detail & Related papers (2021-09-20T06:06:30Z)
- An Adversarial Transfer Network for Knowledge Representation Learning [11.013390624382257]
We propose an adversarial embedding transfer network ATransN, which transfers knowledge from one or more teacher knowledge graphs to a target one.
Specifically, we add soft constraints on aligned entity pairs and neighbours to the existing knowledge representation learning methods.
arXiv Detail & Related papers (2021-04-30T05:07:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences of its use.