KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation
Classification
- URL: http://arxiv.org/abs/2002.10903v2
- Date: Thu, 3 Dec 2020 02:34:19 GMT
- Title: KEML: A Knowledge-Enriched Meta-Learning Framework for Lexical Relation
Classification
- Authors: Chengyu Wang, Minghui Qiu, Jun Huang, Xiaofeng He
- Abstract summary: Lexical relations describe how concepts are semantically related, in the form of relation triples.
We propose the Knowledge-Enriched Meta-Learning framework to address the task of lexical relation classification.
- Score: 37.2106265998237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lexical relations describe how concepts are semantically related, in the form
of relation triples. The accurate prediction of lexical relations between
concepts is challenging, due to the sparsity of patterns indicating the
existence of such relations. We propose the Knowledge-Enriched Meta-Learning
(KEML) framework to address the task of lexical relation classification. In
KEML, the LKB-BERT (Lexical Knowledge Base-BERT) model is presented to learn
concept representations from massive text corpora, with rich lexical knowledge
injected by distant supervision. A probabilistic distribution of auxiliary
tasks is defined to increase the model's ability to recognize different types
of lexical relations. We further combine a meta-learning process over the
auxiliary task distribution and supervised learning to train the neural lexical
relation classifier. Experiments over multiple datasets show that KEML
outperforms state-of-the-art methods.
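The abstract above combines meta-learning over a distribution of auxiliary tasks with ordinary supervised training of the relation classifier. A minimal sketch of that training pattern is below, using a Reptile-style outer update. This is not the authors' implementation: the tiny linear classifier, the synthetic concept-pair features, and the two-relation episodes are illustrative assumptions standing in for KEML's LKB-BERT representations and its actual auxiliary task design.

```python
# Reptile-style meta-learning over sampled auxiliary tasks, followed by
# supervised training. All data here is synthetic; the linear softmax
# classifier is a stand-in for the neural lexical relation classifier.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_REL = 8, 4                       # feature dim, number of relation types

# Synthetic "concept-pair" features: one Gaussian cluster per relation type.
centers = rng.normal(0, 2, size=(N_REL, DIM))

def sample_pairs(n, rels=None):
    """Draw n labeled examples, optionally restricted to a subset of relations."""
    if rels is None:
        rels = np.arange(N_REL)
    y = rng.choice(rels, size=n)
    x = centers[y] + rng.normal(0, 0.5, size=(n, DIM))
    return x, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def grad_step(W, x, y, lr=0.1):
    """One SGD step on cross-entropy for a linear classifier W (DIM x N_REL)."""
    p = softmax(x @ W)
    p[np.arange(len(y)), y] -= 1.0      # softmax-CE gradient: (P - onehot)
    return W - lr * (x.T @ p) / len(y)

W = rng.normal(0, 0.1, size=(DIM, N_REL))

# Meta-learning phase: each auxiliary task is an episode restricted to a
# random pair of relations; adapt a copy of W on it, then move W toward
# the adapted weights (Reptile outer update).
for episode in range(200):
    rels = rng.choice(N_REL, size=2, replace=False)
    x, y = sample_pairs(32, rels)
    W_task = W.copy()
    for _ in range(3):                  # inner-loop adaptation
        W_task = grad_step(W_task, x, y)
    W += 0.5 * (W_task - W)             # outer (meta) update

# Supervised phase: ordinary training on the full relation classification task.
for _ in range(200):
    x, y = sample_pairs(64)
    W = grad_step(W, x, y)

x_test, y_test = sample_pairs(500)
acc = (softmax(x_test @ W).argmax(axis=1) == y_test).mean()
print(f"test accuracy: {acc:.2f}")
```

The key structural point is the two phases: episodes sampled from a task distribution drive the meta-update, and standard supervised steps then train the final classifier from the meta-learned initialization.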
Related papers
- CLLMRec: LLM-powered Cognitive-Aware Concept Recommendation via Semantic Alignment and Prerequisite Knowledge Distillation [3.200298153814017]
The growth of Massive Open Online Courses (MOOCs) presents significant challenges for personalized learning, where concept recommendation is crucial. Existing approaches typically rely on heterogeneous information networks or knowledge graphs to capture conceptual relationships, combined with knowledge tracing models to assess learners' cognitive states. This paper proposes CLLMRec, a novel framework that leverages Large Language Models to generate personalized concept recommendations.
arXiv Detail & Related papers (2025-11-21T08:37:39Z) - Contrastive Cross-Course Knowledge Tracing via Concept Graph Guided Knowledge Transfer [12.34590941832835]
We propose TransKT, a contrastive cross-course knowledge tracing method. It builds on concept graph guided knowledge transfer to model the relationships between learning behaviors across different courses. TransKT employs a contrastive objective that aligns single-course and cross-course knowledge states.
arXiv Detail & Related papers (2025-05-14T10:38:30Z) - A Zero-shot Learning Method Based on Large Language Models for Multi-modal Knowledge Graph Embedding [8.56384109338971]
Zero-shot learning (ZSL) is crucial for tasks involving unseen categories, such as natural language processing, image classification, and cross-lingual transfer.
We propose ZSLLM, a framework for zero-shot embedding learning of multi-modal knowledge graphs (MMKGs) using large language models (LLMs).
arXiv Detail & Related papers (2025-03-10T11:38:21Z) - RelationVLM: Making Large Vision-Language Models Understand Visual Relations [66.70252936043688]
We present RelationVLM, a large vision-language model capable of comprehending various levels and types of relations whether across multiple images or within a video.
Specifically, we devise a multi-stage relation-aware training scheme and a series of corresponding data configuration strategies to bestow RelationVLM with the capabilities of understanding semantic relations.
arXiv Detail & Related papers (2024-03-19T15:01:19Z) - Identifying Semantic Induction Heads to Understand In-Context Learning [103.00463655766066]
We investigate whether attention heads encode two types of relationships between tokens present in natural languages.
We find that certain attention heads exhibit a pattern where, when attending to head tokens, they recall tail tokens and increase the output logits of those tail tokens.
arXiv Detail & Related papers (2024-02-20T14:43:39Z) - Modeling Balanced Explicit and Implicit Relations with Contrastive
Learning for Knowledge Concept Recommendation in MOOCs [1.0377683220196874]
Existing methods rely on the explicit relations between users and knowledge concepts for recommendation.
There are numerous implicit relations generated within the users' learning activities on the MOOC platforms.
We propose a novel framework based on contrastive learning, which can represent and balance the explicit and implicit relations.
arXiv Detail & Related papers (2024-02-13T07:12:44Z) - Prompt-based Logical Semantics Enhancement for Implicit Discourse
Relation Recognition [4.7938839332508945]
We propose a Prompt-based Logical Semantics Enhancement (PLSE) method for Implicit Discourse Relation Recognition (IDRR).
Our method seamlessly injects knowledge relevant to discourse relation into pre-trained language models through prompt-based connective prediction.
Experimental results on PDTB 2.0 and CoNLL16 datasets demonstrate that our method achieves outstanding and consistent performance against the current state-of-the-art models.
arXiv Detail & Related papers (2023-11-01T08:38:08Z) - Link-Context Learning for Multimodal LLMs [40.923816691928536]
Link-context learning (LCL) emphasizes "reasoning from cause and effect" to augment the learning capabilities of MLLMs.
LCL guides the model to discern not only the analogy but also the underlying causal associations between data points.
To facilitate the evaluation of this novel approach, we introduce the ISEKAI dataset.
arXiv Detail & Related papers (2023-08-15T17:33:24Z) - Knowledge-Enhanced Hierarchical Information Correlation Learning for
Multi-Modal Rumor Detection [82.94413676131545]
We propose a novel knowledge-enhanced hierarchical information correlation learning approach (KhiCL) for multi-modal rumor detection.
KhiCL exploits cross-modal joint dictionary to transfer the heterogeneous unimodality features into the common feature space.
It extracts visual and textual entities from images and text, and designs a knowledge relevance reasoning strategy.
arXiv Detail & Related papers (2023-06-28T06:08:20Z) - Imposing Relation Structure in Language-Model Embeddings Using
Contrastive Learning [30.00047118880045]
We propose a novel contrastive learning framework that trains sentence embeddings to encode the relations in a graph structure.
The resulting relation-aware sentence embeddings achieve state-of-the-art results on the relation extraction task.
arXiv Detail & Related papers (2021-09-02T10:58:27Z) - PPKE: Knowledge Representation Learning by Path-based Pre-training [43.41597219004598]
We propose a Path-based Pre-training model to learn Knowledge Embeddings, called PPKE.
Our model achieves state-of-the-art results on several benchmark datasets for link prediction and relation prediction tasks.
arXiv Detail & Related papers (2020-12-07T10:29:30Z) - Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z) - Inferential Text Generation with Multiple Knowledge Sources and
Meta-Learning [117.23425857240679]
We study the problem of generating inferential texts of events for a variety of commonsense relations, such as if-else relations.
Existing approaches typically use limited evidence from training examples and learn for each relation individually.
In this work, we use multiple knowledge sources as fuels for the model.
arXiv Detail & Related papers (2020-04-07T01:49:18Z) - Generative Adversarial Zero-Shot Relational Learning for Knowledge
Graphs [96.73259297063619]
We consider a novel formulation, zero-shot learning, to free this cumbersome curation.
For newly-added relations, we attempt to learn their semantic features from their text descriptions.
We leverage Generative Adversarial Networks (GANs) to establish the connection between the text and knowledge graph domains.
arXiv Detail & Related papers (2020-01-08T01:19:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.