Dynamic Multi-View Fusion Mechanism For Chinese Relation Extraction
- URL: http://arxiv.org/abs/2303.05082v1
- Date: Thu, 9 Mar 2023 07:35:31 GMT
- Title: Dynamic Multi-View Fusion Mechanism For Chinese Relation Extraction
- Authors: Jing Yang, Bin Ji, Shasha Li, Jun Ma, Long Peng, and Jie Yu
- Abstract summary: We propose a mixture-of-view-experts framework (MoVE) to dynamically learn multi-view features for Chinese relation extraction.
By leveraging both the internal and external knowledge of Chinese characters, our framework can better capture their semantic information.
- Score: 12.818297160055584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, many studies have incorporated external knowledge into
character-level feature-based models to improve the performance of Chinese
relation extraction. However, these methods tend to ignore the internal
information of Chinese characters and cannot filter out the noisy information
in external knowledge. To address these issues, we propose a
mixture-of-view-experts framework (MoVE) to dynamically learn multi-view
features for Chinese relation extraction. By leveraging both the internal and
external knowledge of Chinese characters, our framework can better capture
their semantic information. To demonstrate the effectiveness of the proposed
framework, we conduct extensive experiments on three real-world datasets from
distinct domains. Experimental results show the consistent and significant
superiority and robustness of our framework. Our code and dataset will be
released at:
https://gitee.com/tmg-nudt/multi-view-of-expert-for-chineserelation-extraction
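The listing gives only a high-level description of MoVE, so as a rough illustration of the mixture-of-experts idea it names, here is a minimal sketch that gates between several per-view feature vectors (say, a character-level view, an internal glyph/radical view, and an external lexicon view) with softmax weights. All function names, dimensions, and the gating form are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_views(views, gate_w, gate_b):
    """Fuse per-view features with softmax gating weights.

    views:  (num_views, dim) array, one feature vector per view
            (hypothetical: character, glyph/radical, lexicon).
    gate_w: (num_views * dim, num_views) gating projection.
    gate_b: (num_views,) gating bias.
    """
    concat = views.reshape(-1)                    # all views, concatenated
    weights = softmax(concat @ gate_w + gate_b)   # one weight per view
    fused = (weights[:, None] * views).sum(axis=0)
    return fused, weights

# Toy usage: three 8-dim views with a randomly initialised gate.
rng = np.random.default_rng(0)
views = rng.normal(size=(3, 8))
gate_w, gate_b = rng.normal(size=(24, 3)), np.zeros(3)
fused, weights = fuse_views(views, gate_w, gate_b)
print(weights.round(3), weights.sum())  # mixture weights, summing to 1.0
```

In a trained model the gate would be learned jointly with the view encoders, which is what would let the network down-weight noisy external knowledge on a per-instance basis.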
Related papers
- Towards Retrieval-Augmented Architectures for Image Captioning [81.11529834508424]
This work presents a novel approach to developing image captioning models that utilize an external kNN memory to improve the generation process.
Specifically, we propose two model variants that incorporate a knowledge retriever component that is based on visual similarities.
We experimentally validate our approach on COCO and nocaps datasets and demonstrate that incorporating an explicit external memory can significantly enhance the quality of captions.
arXiv Detail & Related papers (2024-05-21T18:02:07Z)
- 3AM: An Ambiguity-Aware Multi-Modal Machine Translation Dataset [90.95948101052073]
We introduce 3AM, an ambiguity-aware MMT dataset comprising 26,000 parallel sentence pairs in English and Chinese.
Our dataset is specifically designed to include more ambiguity and a greater variety of both captions and images than other MMT datasets.
Experimental results show that MMT models trained on our dataset exhibit a greater ability to exploit visual information than those trained on other MMT datasets.
arXiv Detail & Related papers (2024-04-29T04:01:30Z)
- YAYI-UIE: A Chat-Enhanced Instruction Tuning Framework for Universal Information Extraction [20.32778991187863]
We propose an end-to-end chat-enhanced instruction tuning framework for universal information extraction (YAYI-UIE).
Specifically, we utilize dialogue data and information extraction data to jointly enhance information extraction performance.
arXiv Detail & Related papers (2023-12-24T21:33:03Z)
- Multi-Grained Multimodal Interaction Network for Entity Linking [65.30260033700338]
The multimodal entity linking (MEL) task aims at resolving ambiguous mentions to a multimodal knowledge graph.
We propose a novel Multi-GraIned Multimodal InteraCtion Network (MIMIC) framework for solving the MEL task.
arXiv Detail & Related papers (2023-07-19T02:11:19Z)
- Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling [96.75821232222201]
Existing research on multimodal relation extraction (MRE) faces two co-existing challenges: internal-information over-utilization and external-information under-exploitation.
We propose a novel framework that simultaneously implements internal-information screening and external-information exploitation.
arXiv Detail & Related papers (2023-05-19T14:56:57Z)
- Improving Chinese Named Entity Recognition by Search Engine Augmentation [2.971423962840551]
We propose a neural-based approach that performs semantic augmentation for Chinese NER using external knowledge from a search engine.
In particular, a multi-channel semantic fusion model is adopted to generate the augmented input representations by aggregating related external texts retrieved from the search engine.
arXiv Detail & Related papers (2022-10-23T08:42:05Z)
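The summary above leaves the fusion model unspecified; as a loose sketch of the aggregation step, the snippet below attention-pools embeddings of retrieved texts against a token embedding and concatenates the result. The function name, dot-product scoring, and concatenation are hypothetical choices, not the paper's method.

```python
import numpy as np

def augment_with_retrieval(token_vec, retrieved_vecs):
    """Attention-pool retrieved-text embeddings and append the result.

    token_vec:      (dim,) embedding of an input token/character.
    retrieved_vecs: (k, dim) embeddings of k retrieved snippets.
    """
    d = token_vec.shape[0]
    scores = retrieved_vecs @ token_vec / np.sqrt(d)  # relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                          # softmax
    pooled = weights @ retrieved_vecs                 # weighted sum of snippets
    return np.concatenate([token_vec, pooled])        # augmented representation

rng = np.random.default_rng(1)
aug = augment_with_retrieval(rng.normal(size=16), rng.normal(size=(5, 16)))
print(aug.shape)  # (32,): original features plus pooled external context
```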
- Retrieval-Augmented Transformer for Image Captioning [51.79146669195357]
We develop an image captioning approach with a kNN memory, with which knowledge can be retrieved from an external corpus to aid the generation process.
Our architecture combines a knowledge retriever based on visual similarities, a differentiable encoder, and a kNN-augmented attention layer to predict tokens.
Experiments conducted on the COCO dataset demonstrate that employing an explicit external memory can aid the generation process and increase caption quality.
arXiv Detail & Related papers (2022-07-26T19:35:49Z)
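Neither captioning entry in this list includes code, so the sketch below covers only the retrieval half of such a system: a cosine-similarity kNN lookup of captions from an external memory keyed by visual features. The memory layout and names are invented; the paper's kNN-augmented attention layer is not reproduced here.

```python
import numpy as np

def knn_retrieve(query_feat, memory_feats, memory_captions, k=3):
    """Return the k captions whose images are most similar to the query.

    query_feat:      (dim,) visual feature of the input image.
    memory_feats:    (n, dim) visual features of the memory images.
    memory_captions: list of n caption strings.
    """
    # Cosine similarity between the query and every memory entry.
    q = query_feat / np.linalg.norm(query_feat)
    m = memory_feats / np.linalg.norm(memory_feats, axis=1, keepdims=True)
    sims = m @ q
    top = np.argsort(-sims)[:k]                  # indices of best matches
    return [(memory_captions[i], float(sims[i])) for i in top]

rng = np.random.default_rng(2)
feats = rng.normal(size=(4, 8))
caps = ["a dog runs", "a cat sleeps", "two dogs play", "a bird flies"]
print(knn_retrieve(rng.normal(size=8), feats, caps, k=2))
```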
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge together with explicit syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
- Syntactic-GCN Bert based Chinese Event Extraction [2.3104000011280403]
We propose an integrated framework to perform Chinese event extraction.
The proposed approach is a neural framework with multiple input channels that integrates semantic and syntactic features.
Experimental results show that the proposed method significantly outperforms the benchmark approaches.
arXiv Detail & Related papers (2021-12-18T14:07:54Z)
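The blurb above mentions syntactic features via a GCN but gives no equations; below is a generic single graph-convolution layer over a dependency-parse adjacency matrix, with self-loops and row normalization. This is the common GCN formulation, not necessarily the exact variant used in the paper.

```python
import numpy as np

def gcn_layer(node_feats, adj, weight):
    """One graph-convolution layer over a dependency parse.

    node_feats: (n, d_in) token features, e.g. BERT outputs.
    adj:        (n, n) adjacency matrix of the dependency tree.
    weight:     (d_in, d_out) layer parameters.
    """
    # Add self-loops and row-normalise so each token averages over
    # itself and its syntactic neighbours, then project and apply ReLU.
    a = adj + np.eye(adj.shape[0])
    a = a / a.sum(axis=1, keepdims=True)
    return np.maximum(a @ node_feats @ weight, 0.0)

rng = np.random.default_rng(3)
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # toy 3-token parse
h = gcn_layer(rng.normal(size=(3, 16)), adj, rng.normal(size=(16, 16)))
print(h.shape)  # (3, 16): one updated vector per token
```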
- MECT: Multi-Metadata Embedding based Cross-Transformer for Chinese Named Entity Recognition [21.190288516462704]
This paper presents a novel Multi-metadata Embedding based Cross-Transformer (MECT) to improve the performance of Chinese NER.
Specifically, we use multi-metadata embedding in a two-stream Transformer to integrate Chinese character features with radical-level embeddings.
By exploiting the structural characteristics of Chinese characters, MECT can better capture their semantic information for NER.
arXiv Detail & Related papers (2021-07-12T13:39:06Z)
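MECT's two-stream cross-Transformer is too large for a short example, but the radical-level idea it builds on can be sketched compactly: decompose each character into radicals and combine character-level and radical-level embeddings. The decomposition table and embedding scheme below are toy stand-ins, not the paper's components.

```python
import numpy as np

# Toy decomposition table; real systems derive this from a
# character-decomposition resource (e.g. IDS data).
RADICALS = {"河": ["氵", "可"], "湖": ["氵", "胡"], "想": ["相", "心"]}

rng = np.random.default_rng(4)

def embed(table, key, dim):
    """Look up (or lazily create) an embedding for a symbol."""
    if key not in table:
        table[key] = rng.normal(size=dim)
    return table[key]

def char_with_radicals(ch, char_tab, rad_tab, dim=8):
    """Concatenate a character embedding with the mean of its
    radical embeddings, giving a structure-aware representation."""
    c = embed(char_tab, ch, dim)
    parts = RADICALS.get(ch, [ch])      # fall back to the character itself
    r = np.mean([embed(rad_tab, p, dim) for p in parts], axis=0)
    return np.concatenate([c, r])

char_tab, rad_tab = {}, {}
print(char_with_radicals("河", char_tab, rad_tab).shape)  # (16,)
```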
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.