LearnDA: Learnable Knowledge-Guided Data Augmentation for Event
Causality Identification
- URL: http://arxiv.org/abs/2106.01649v1
- Date: Thu, 3 Jun 2021 07:42:20 GMT
- Title: LearnDA: Learnable Knowledge-Guided Data Augmentation for Event
Causality Identification
- Authors: Xinyu Zuo, Pengfei Cao, Yubo Chen, Kang Liu, Jun Zhao, Weihua Peng and
Yuguang Chen
- Abstract summary: We introduce a new approach to augmenting training data for event causality identification.
On the one hand, our approach is knowledge-guided: it leverages existing knowledge bases to generate well-formed new sentences.
On the other hand, it employs a dual mechanism, a learnable augmentation framework that interactively adjusts the generation process to produce task-related sentences.
- Score: 17.77752074834281
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Modern models for event causality identification (ECI) are mainly based on
supervised learning, which makes them prone to data scarcity.
Unfortunately, existing NLP augmentation methods cannot directly
produce the data required for this task. To address this data scarcity,
we introduce a new approach to augmenting training data for event
causality identification, which iteratively generates new examples and
classifies event causality in a dual learning framework. On the one hand, our
approach is knowledge-guided: it leverages existing knowledge bases to
generate well-formed new sentences. On the other hand, it employs a
dual mechanism, a learnable augmentation framework that can
interactively adjust the generation process to produce task-related sentences.
Experimental results on two benchmarks, EventStoryLine and Causal-TimeBank, show
that 1) our method can augment suitable task-related training data for ECI, and 2)
it outperforms previous methods on both benchmarks
(+2.5 and +2.1 F1 points, respectively).
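To make the dual augmentation idea concrete, here is a minimal, self-contained sketch of the generate-then-classify loop; the toy knowledge base, templates, and the generate/classify stand-ins are illustrative assumptions, not the authors' implementation:

    import random

    # Toy causal knowledge base of (cause_event, effect_event) pairs; LearnDA
    # draws such pairs from existing knowledge bases (entries are illustrative).
    CAUSAL_KB = [("earthquake", "tsunami"), ("rain", "flood"), ("strike", "delay")]

    # Simple templates standing in for the learned sentence generator.
    TEMPLATES = ["The {c} caused the {e}.", "After the {c}, the {e} followed."]

    def generate():
        """Knowledge-guided generation: fill a template with a KB event pair."""
        c, e = random.choice(CAUSAL_KB)
        return random.choice(TEMPLATES).format(c=c, e=e), (c, e)

    def classify(sentence, pair):
        """Stand-in ECI classifier returning P(causal) for the event pair;
        a real system would be a trained neural model."""
        return 0.9 if "caused" in sentence else 0.6

    # Dual loop: classifier confidence acts as feedback that filters (and, in
    # the full framework, would also update) the generator, while the kept
    # sentences become new labeled training data for the classifier.
    augmented = []
    for _ in range(100):
        sentence, pair = generate()
        if classify(sentence, pair) > 0.7:
            augmented.append((sentence, pair, 1))
    print(f"kept {len(augmented)} augmented examples")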
Related papers
- Joint Input and Output Coordination for Class-Incremental Learning [84.36763449830812]
We propose a joint input and output coordination (JIOC) mechanism to address the challenges of class-incremental learning.
This mechanism assigns different weights to different categories of data according to the gradient of the output score.
It can be incorporated into different incremental learning approaches that use memory storage.
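Read literally, the output-side weighting could look like the following sketch, where each example is weighted by the magnitude of the cross-entropy gradient with respect to its output scores (for softmax this is |p - y|); the exact weighting rule here is an assumption, not the paper's formula:

    import torch
    import torch.nn.functional as F

    def gradient_weighted_loss(logits, targets):
        # For softmax cross-entropy, dL/dlogits = p - y, so its magnitude is a
        # cheap proxy for how hard each example pushes on the output scores.
        probs = logits.softmax(dim=-1)
        onehot = F.one_hot(targets, logits.size(-1)).float()
        grad_mag = (probs - onehot).abs().sum(dim=-1)
        weights = (grad_mag / grad_mag.mean().clamp_min(1e-8)).detach()
        per_sample = F.cross_entropy(logits, targets, reduction="none")
        return (weights * per_sample).mean()

    logits = torch.randn(8, 10, requires_grad=True)
    targets = torch.randint(0, 10, (8,))
    gradient_weighted_loss(logits, targets).backward()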
arXiv Detail & Related papers (2024-09-09T13:55:07Z)
- A Unified Framework for Continual Learning and Machine Unlearning [9.538733681436836]
Continual learning and machine unlearning are crucial challenges in machine learning, typically addressed separately.
We introduce a novel framework that jointly tackles both tasks by leveraging controlled knowledge distillation.
Our approach enables efficient learning with minimal forgetting and effective targeted unlearning.
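A plausible minimal form of such controlled distillation, assuming a retain/forget split of the data and a uniform target for examples to unlearn (both assumptions, not the paper's exact objective):

    import torch
    import torch.nn.functional as F

    def controlled_distillation_loss(student_logits, teacher_logits, retain_mask, T=2.0):
        # Match the teacher on examples to retain; push toward a uniform
        # distribution on examples to unlearn.
        s = F.log_softmax(student_logits / T, dim=-1)
        t = F.softmax(teacher_logits / T, dim=-1)
        uniform = torch.full_like(t, 1.0 / t.size(-1))
        target = torch.where(retain_mask.unsqueeze(-1), t, uniform)
        return F.kl_div(s, target, reduction="batchmean") * T * T

    student = torch.randn(4, 5, requires_grad=True)
    teacher = torch.randn(4, 5)
    retain = torch.tensor([True, True, False, True])  # False = unlearn this example
    controlled_distillation_loss(student, teacher, retain).backward()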
arXiv Detail & Related papers (2024-08-21T06:49:59Z)
- Investigating the Pre-Training Dynamics of In-Context Learning: Task Recognition vs. Task Learning [99.05401042153214]
In-context learning (ICL) is potentially attributed to two major abilities: task recognition (TR) and task learning (TL).
We take the first step by examining the pre-training dynamics of the emergence of ICL.
We propose a simple yet effective method to better integrate these two abilities for ICL at inference time.
arXiv Detail & Related papers (2024-06-20T06:37:47Z)
- Adaptive Retention & Correction for Continual Learning [114.5656325514408]
A common problem in continual learning is the classification layer's bias towards the most recent task.
We name our approach Adaptive Retention & Correction (ARC).
ARC achieves average performance increases of 2.7% and 2.6% on the CIFAR-100 and ImageNet-R datasets, respectively.
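One standard way to correct that recency bias, shown here as a generic sketch rather than ARC's actual procedure, is to rescale the classifier rows of the newest task so their average weight norm matches the older ones:

    import torch

    def align_classifier_weights(fc_weight, old_class_ids, new_class_ids):
        # New-task rows tend to grow larger norms, inflating their logits.
        old_norm = fc_weight[old_class_ids].norm(dim=1).mean()
        new_norm = fc_weight[new_class_ids].norm(dim=1).mean()
        gamma = (old_norm / new_norm).clamp(max=1.0)  # only shrink, never inflate
        fc_weight = fc_weight.clone()
        fc_weight[new_class_ids] *= gamma
        return fc_weight

    W = torch.randn(100, 512)                    # 100 classes, 512-d features
    old, new = torch.arange(0, 90), torch.arange(90, 100)
    W_aligned = align_classifier_weights(W, old, new)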
arXiv Detail & Related papers (2024-05-23T08:43:09Z)
- Class-Incremental Few-Shot Event Detection [68.66116956283575]
This paper proposes a new task, called class-incremental few-shot event detection.
This task faces two problems, i.e., forgetting of old knowledge and overfitting to new classes.
To solve these problems, this paper presents a novel knowledge distillation and prompt learning based method, called Prompt-KD.
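The distillation half of such a method typically combines cross-entropy on the new classes with a KD term over the old classes; a minimal sketch (the prompt-learning half is omitted, and alpha/T are illustrative):

    import torch
    import torch.nn.functional as F

    def kd_plus_ce_loss(student_logits, teacher_logits, targets,
                        num_old_classes, alpha=0.5, T=2.0):
        ce = F.cross_entropy(student_logits, targets)   # fit new classes
        kd = F.kl_div(                                  # remember old classes
            F.log_softmax(student_logits[:, :num_old_classes] / T, dim=-1),
            F.softmax(teacher_logits[:, :num_old_classes] / T, dim=-1),
            reduction="batchmean",
        ) * T * T
        return alpha * kd + (1 - alpha) * ce

    student = torch.randn(4, 12, requires_grad=True)    # 10 old + 2 new classes
    teacher = torch.randn(4, 10)                        # old model: old classes only
    targets = torch.randint(0, 12, (4,))
    kd_plus_ce_loss(student, teacher, targets, num_old_classes=10).backward()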
arXiv Detail & Related papers (2024-04-02T09:31:14Z)
- Informed Meta-Learning [55.2480439325792]
Meta-learning and informed ML stand out as two approaches for incorporating prior knowledge into ML pipelines.
We formalise a hybrid paradigm, informed meta-learning, facilitating the incorporation of priors from unstructured knowledge representations.
We demonstrate the potential benefits of informed meta-learning in improving data efficiency, robustness to observational noise and task distribution shifts.
arXiv Detail & Related papers (2024-02-25T15:08:37Z)
- Information Association for Language Model Updating by Mitigating LM-Logical Discrepancy [68.31760483418901]
Large Language Models (LLMs) struggle to provide current information because of their outdated pre-training data.
Existing methods for updating LLMs, such as knowledge editing and continual fine-tuning, have significant drawbacks in the generalizability of new information.
We identify the core challenge behind these drawbacks: the LM-logical discrepancy featuring the difference between language modeling probabilities and logical probabilities.
arXiv Detail & Related papers (2023-05-29T19:48:37Z)
- Continual Few-shot Relation Learning via Embedding Space Regularization and Data Augmentation [4.111899441919165]
In this setting, the model must learn novel relational patterns from very few labeled examples while avoiding catastrophic forgetting of previous task knowledge.
We propose a novel method based on embedding space regularization and data augmentation.
Our method generalizes to new few-shot tasks and avoids catastrophic forgetting of previous tasks by enforcing extra constraints on the relational embeddings and by adding extra relevant data in a self-supervised manner.
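A minimal sketch of the embedding-space regularization side, assuming stored per-relation prototypes and an L2 pull-back term (the penalty form and lam are assumptions):

    import torch
    import torch.nn.functional as F

    def embedding_regularized_loss(embeddings, logits, targets, prototypes, lam=0.1):
        task_loss = F.cross_entropy(logits, targets)
        # Keep current embeddings close to the prototypes stored for their
        # relation labels, limiting drift on previously learned relations.
        reg = (embeddings - prototypes[targets]).pow(2).sum(dim=1).mean()
        return task_loss + lam * reg

    emb = torch.randn(16, 128, requires_grad=True)
    logits = torch.randn(16, 20, requires_grad=True)
    targets = torch.randint(0, 20, (16,))
    protos = torch.randn(20, 128)        # prototypes saved from earlier tasks
    embedding_regularized_loss(emb, logits, targets, protos).backward()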
arXiv Detail & Related papers (2022-03-04T05:19:09Z)
- Improving Event Causality Identification via Self-Supervised Representation Learning on External Causal Statement [17.77752074834281]
We propose CauSeRL, which leverages external causal statements for event causality identification.
First of all, we design a self-supervised framework to learn context-specific causal patterns from external causal statements.
We adopt a contrastive transfer strategy to incorporate the learned context-specific causal patterns into the target ECI model.
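The contrastive side of such self-supervised learning is usually an InfoNCE-style objective; a generic sketch in which two encodings of the same causal statement are positives and the rest of the batch are negatives (the pairing scheme is an assumption):

    import torch
    import torch.nn.functional as F

    def info_nce(anchors, positives, temperature=0.1):
        a = F.normalize(anchors, dim=-1)
        p = F.normalize(positives, dim=-1)
        logits = a @ p.t() / temperature       # (B, B) similarity matrix
        labels = torch.arange(a.size(0))       # i-th anchor matches i-th positive
        return F.cross_entropy(logits, labels)

    z1 = torch.randn(32, 256, requires_grad=True)  # encoding of a causal statement
    z2 = torch.randn(32, 256)                      # encoding of an augmented view
    info_nce(z1, z2).backward()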
arXiv Detail & Related papers (2021-06-03T07:50:50Z)
- KnowDis: Knowledge Enhanced Data Augmentation for Event Causality Detection via Distant Supervision [23.533310981207446]
We investigate a data augmentation framework for event causality detection (ECD), dubbed Knowledge Enhanced Distant Data Augmentation (KnowDis).
KnowDis augments the available training data for ECD with lexical and causal commonsense knowledge via distant supervision.
Assisted by the automatically labeled training data, our method outperforms previous methods by a large margin.
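In the spirit of distant supervision, a toy labeler might mark any sentence whose event pair appears in a causal lexicon as a positive example; the lexicon and matching rule below are illustrative only:

    # Toy causal lexicon of event pairs; a real system would use lexical and
    # commonsense knowledge resources.
    CAUSAL_PAIRS = {("virus", "outbreak"), ("drought", "famine")}

    def distant_label(sentence, event1, event2):
        """Auto-label a sentence as causal if its event pair is in the lexicon."""
        label = "causal" if (event1, event2) in CAUSAL_PAIRS else "non-causal"
        return (sentence, event1, event2, label)

    corpus = [
        ("The virus spread quickly and an outbreak was declared.", "virus", "outbreak"),
        ("The meeting ended before the vote.", "meeting", "vote"),
    ]
    augmented = [distant_label(s, e1, e2) for s, e1, e2 in corpus]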
arXiv Detail & Related papers (2020-10-21T08:44:54Z)
- Incremental Learning for End-to-End Automatic Speech Recognition [41.297106772785206]
We propose an incremental learning method for end-to-end Automatic Speech Recognition (ASR).
We design a novel explainability-based knowledge distillation for ASR models, combined with response-based knowledge distillation to maintain both the original model's predictions and the "reason" for those predictions.
Results on a multi-stage sequential training task show that our method outperforms existing ones in mitigating forgetting.
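A simplified sketch of combining response-based distillation with an explanation-matching term, using input-gradient saliency as one simple notion of the "reason" for a prediction (the saliency choice and beta are assumptions, not the paper's formulation):

    import torch
    import torch.nn.functional as F

    def explainability_kd_loss(student, teacher, x, beta=1.0, T=2.0):
        x_s = x.clone().requires_grad_(True)
        x_t = x.clone().requires_grad_(True)
        s_logits, t_logits = student(x_s), teacher(x_t)

        # Response-based KD: match the teacher's output distribution.
        response = F.kl_div(F.log_softmax(s_logits / T, dim=-1),
                            F.softmax(t_logits.detach() / T, dim=-1),
                            reduction="batchmean") * T * T

        # Explainability-based KD: match input-gradient saliency maps.
        s_sal = torch.autograd.grad(s_logits.max(dim=-1).values.sum(), x_s,
                                    create_graph=True)[0].abs()
        t_sal = torch.autograd.grad(t_logits.max(dim=-1).values.sum(), x_t)[0].abs()
        explanation = F.mse_loss(s_sal, t_sal.detach())

        return response + beta * explanation

    student = torch.nn.Linear(40, 30)    # stand-ins for ASR acoustic models
    teacher = torch.nn.Linear(40, 30)
    x = torch.randn(8, 40)               # e.g., a frame of filterbank features
    explainability_kd_loss(student, teacher, x).backward()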
arXiv Detail & Related papers (2020-05-11T08:18:08Z)