Enhancing In-Context Learning with Semantic Representations for Relation Extraction
- URL: http://arxiv.org/abs/2406.10432v1
- Date: Fri, 14 Jun 2024 22:36:08 GMT
- Title: Enhancing In-Context Learning with Semantic Representations for Relation Extraction
- Authors: Peitao Han, Lis Kanashiro Pereira, Fei Cheng, Wan Jou She, Eiji Aramaki
- Abstract summary: We employ two AMR-enhanced semantic representations for in-context learning (ICL) on relation extraction (RE).
In both cases, we demonstrate that all settings benefit from AMR's fine-grained semantic structure.
We evaluate our model on four RE datasets.
- Score: 9.12646853282321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we employ two AMR-enhanced semantic representations for ICL on RE: one that explores the AMR structure generated for a sentence at the subgraph level (shortest AMR path), and another that explores the full AMR structure generated for a sentence. In both cases, we demonstrate that all settings benefit from AMR's fine-grained semantic structure. We evaluate our model on four RE datasets. Our results show that our model outperforms the GPT-based baselines, achieving SOTA performance on two of the datasets and competitive performance on the other two.
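As a rough illustration of the subgraph-level representation the abstract mentions, the sketch below finds the shortest path between two entity-aligned nodes in an AMR graph via breadth-first search. The toy triples, node names, and the undirected-path assumption are illustrative only and are not taken from the paper.

```python
from collections import deque

def shortest_amr_path(triples, src, dst):
    """BFS over an AMR graph treated as undirected; returns the
    node sequence from src to dst, or None if unreachable."""
    adj = {}
    for head, _role, tail in triples:
        adj.setdefault(head, []).append(tail)
        adj.setdefault(tail, []).append(head)
    prev, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                queue.append(nxt)
    return None

# Toy AMR triples for "John works for Google" (hypothetical node names):
triples = [
    ("work-01", ":ARG0", "person_John"),
    ("work-01", ":ARG1", "company_Google"),
]
path = shortest_amr_path(triples, "person_John", "company_Google")
# path traverses the shared predicate node between the two entities
```

The resulting path (here, entity → predicate → entity) can then be linearized and appended to the in-context prompt as a compact semantic signal.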
Related papers
- Bilateral Reference for High-Resolution Dichotomous Image Segmentation [109.35828258964557]
We introduce a novel bilateral reference framework (BiRefNet) for high-resolution dichotomous image segmentation (DIS).
It comprises two essential components: the localization module (LM) and the reconstruction module (RM) with our proposed bilateral reference (BiRef).
Within the RM, we utilize BiRef for the reconstruction process, where hierarchical patches of images provide the source reference and gradient maps serve as the target reference.
arXiv Detail & Related papers (2024-01-07T07:56:47Z) - AMR Parsing with Causal Hierarchical Attention and Pointers [54.382865897298046]
We introduce new target forms of AMR parsing and a novel model, CHAP, which is equipped with causal hierarchical attention and the pointer mechanism.
Experiments show that our model outperforms baseline models on four out of five benchmarks in the setting of no additional data.
arXiv Detail & Related papers (2023-10-18T13:44:26Z) - An AMR-based Link Prediction Approach for Document-level Event Argument Extraction [51.77733454436013]
Recent works have introduced Abstract Meaning Representation (AMR) for Document-level Event Argument Extraction (Doc-level EAE).
This work reformulates EAE as a link prediction problem on AMR graphs.
We propose a novel graph structure, Tailored AMR Graph (TAG), which compresses less informative subgraphs and edge types, integrates span information, and highlights surrounding events in the same document.
arXiv Detail & Related papers (2023-05-30T16:07:48Z) - Structure-CLIP: Towards Scene Graph Knowledge to Enhance Multi-modal Structured Representations [70.41385310930846]
We present an end-to-end framework Structure-CLIP to enhance multi-modal structured representations.
We use scene graphs to guide the construction of semantic negative examples, which results in an increased emphasis on learning structured representations.
A Knowledge-Enhanced Encoder (KEE) is proposed to leverage scene graph knowledge (SGK) as input to further enhance structured representations.
arXiv Detail & Related papers (2023-05-06T03:57:05Z) - Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation [70.58243648754507]
We introduce a new method to improve existing multilingual sentence embeddings with Abstract Meaning Representation (AMR).
Compared with the original textual input, AMR is a structured semantic representation that presents the core concepts and relations in a sentence explicitly and unambiguously.
Experiment results show that retrofitting multilingual sentence embeddings with AMR leads to better state-of-the-art performance on both semantic similarity and transfer tasks.
arXiv Detail & Related papers (2022-10-18T11:37:36Z) - ATP: AMRize Then Parse! Enhancing AMR Parsing with PseudoAMRs [34.55175412186001]
Auxiliary tasks that are semantically or formally related can better enhance AMR parsing.
From an empirical perspective, we propose a principled method to involve auxiliary tasks to boost AMR parsing.
arXiv Detail & Related papers (2022-04-19T13:15:59Z) - Consistent Training and Decoding For End-to-end Speech Recognition Using Lattice-free MMI [67.13999010060057]
We propose a novel approach to integrate LF-MMI criterion into E2E ASR frameworks in both training and decoding stages.
Experiments suggest that the introduction of the LF-MMI criterion consistently leads to significant performance improvements.
arXiv Detail & Related papers (2021-12-05T07:30:17Z) - Hierarchical Curriculum Learning for AMR Parsing [29.356258263403646]
Flat sentence-to-AMR training impedes the representation learning of concepts and relations in the deeper AMR sub-graph.
We propose hierarchical curriculum learning (HCL), which consists of a structure-level curriculum (SC) and an instance-level curriculum (IC).
arXiv Detail & Related papers (2021-10-15T04:45:15Z) - Probabilistic, Structure-Aware Algorithms for Improved Variety, Accuracy, and Coverage of AMR Alignments [9.74672460306765]
We present algorithms for aligning components of Abstract Meaning Representation (AMR) spans in English sentences.
We leverage unsupervised learning in combination with graphs, combining the strengths of previous AMR alignment approaches.
Our approach covers a wider variety of AMR substructures than previously considered, achieves higher coverage of nodes and edges, and does so with higher accuracy.
arXiv Detail & Related papers (2021-06-10T18:46:32Z) - Pushing the Limits of AMR Parsing with Self-Learning [24.998016423211375]
We show how trained models can be applied to improve AMR parsing performance.
We show that without any additional human annotations, these techniques improve an already performant parser and achieve state-of-the-art results.
arXiv Detail & Related papers (2020-10-20T23:45:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.