Cross-Domain Contract Element Extraction with a Bi-directional Feedback
Clause-Element Relation Network
- URL: http://arxiv.org/abs/2105.06083v1
- Date: Thu, 13 May 2021 05:14:36 GMT
- Title: Cross-Domain Contract Element Extraction with a Bi-directional Feedback
Clause-Element Relation Network
- Authors: Zihan Wang, Hongye Song, Zhaochun Ren, Pengjie Ren, Zhumin Chen,
Xiaozhong Liu, Hongsong Li, Maarten de Rijke
- Abstract summary: Bi-directional Feedback cLause-Element relaTion network (Bi-FLEET) is proposed for the cross-domain contract element extraction task.
Bi-FLEET has three main components: (1) a context encoder, (2) a clause-element relation encoder, and (3) an inference layer.
The experimental results over both cross-domain NER and CEE tasks show that Bi-FLEET significantly outperforms state-of-the-art baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Contract element extraction (CEE) is the novel task of automatically
identifying and extracting legally relevant elements such as contract dates,
payments, and legislation references from contracts. Automatic methods for this
task view it as a sequence labeling problem and dramatically reduce human
labor. However, as contract genres and element types may vary widely, a
significant challenge for this sequence labeling task is how to transfer
knowledge from one domain to another, i.e., cross-domain CEE. Cross-domain CEE
differs from cross-domain named entity recognition (NER) in two important ways.
First, contract elements are far more fine-grained than named entities, which
hinders the transfer of extractors. Second, the extraction zones for
cross-domain CEE are much larger than for cross-domain NER. As a result, the
contexts of elements from different domains can be more diverse. We propose a
framework, the Bi-directional Feedback cLause-Element relaTion network
(Bi-FLEET), for the cross-domain CEE task that addresses the above challenges.
Bi-FLEET has three main components: (1) a context encoder, (2) a clause-element
relation encoder, and (3) an inference layer. To incorporate invariant
knowledge about element and clause types, a clause-element graph is constructed
across domains and a hierarchical graph neural network is adopted in the
clause-element relation encoder. To reduce the influence of context variations,
a multi-task framework with a bi-directional feedback scheme is designed in the
inference layer, conducting both clause classification and element extraction.
The experimental results over both cross-domain NER and CEE tasks show that
Bi-FLEET significantly outperforms state-of-the-art baselines.
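The bi-directional feedback scheme described above can be illustrated with a toy sketch. All names below (`CLAUSE_ELEMENT_GRAPH`, the clause and element types, the count-based scoring) are hypothetical illustrations, not the paper's actual model: Bi-FLEET uses learned neural encoders and a hierarchical graph neural network, whereas this sketch only shows the *idea* of the two feedback directions, i.e. extracted elements informing clause classification, and the predicted clause type constraining element extraction.

```python
# Hypothetical clause-element graph: which element types each clause
# type may contain (invariant knowledge shared across domains).
CLAUSE_ELEMENT_GRAPH = {
    "termination_clause": {"contract_date", "notice_period"},
    "payment_clause": {"amount", "payment_date"},
}

def classify_clause(raw_scores, element_tags):
    """Element -> clause feedback: boost clause types whose allowed
    element set covers the elements extracted so far."""
    adjusted = {}
    for clause_type, score in raw_scores.items():
        allowed = CLAUSE_ELEMENT_GRAPH[clause_type]
        overlap = sum(1 for tag in element_tags if tag in allowed)
        adjusted[clause_type] = score + overlap
    return max(adjusted, key=adjusted.get)

def extract_elements(token_scores, clause_type):
    """Clause -> element feedback: keep only element tags that the
    predicted clause type can contain; otherwise emit 'O'."""
    allowed = CLAUSE_ELEMENT_GRAPH[clause_type]
    tags = []
    for scores in token_scores:  # one dict of tag -> score per token
        best = max(scores, key=scores.get)
        tags.append(best if best in allowed else "O")
    return tags

# Example: an extracted "amount" element tips clause classification
# toward payment_clause, which in turn filters the token-level tags.
clause = classify_clause(
    {"termination_clause": 0.5, "payment_clause": 0.4}, ["amount"]
)
elements = extract_elements(
    [{"amount": 0.9, "contract_date": 0.1},
     {"payment_date": 0.8, "notice_period": 0.2}],
    clause,
)
```

In this sketch each direction runs once; the paper's multi-task inference layer instead trains both tasks jointly so the feedback is applied end-to-end rather than as a hard post-hoc filter.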
Related papers
- Label Alignment and Reassignment with Generalist Large Language Model for Enhanced Cross-Domain Named Entity Recognition
Cross-domain named entity recognition still poses a challenge for most NER methods.
We introduce a label alignment and reassignment approach, namely LAR, to address this issue.
We conduct an extensive range of experiments on NER datasets involving both supervised and zero-shot scenarios.
arXiv Detail & Related papers (2024-07-24T15:13:12Z)
- Learning with Alignments: Tackling the Inter- and Intra-domain Shifts for Cross-multidomain Facial Expression Recognition
We propose a novel Learning with Alignments CMFER framework, named LA-CMFER, to handle both inter- and intra-domain shifts.
Based on this, LA-CMFER presents a dual-level inter-domain alignment method to force the model to prioritize hard-to-align samples in knowledge transfer.
To address the intra-domain shifts, LA-CMFER introduces a multi-view intra-domain alignment method with a multi-view consistency constraint.
arXiv Detail & Related papers (2024-07-08T07:43:06Z)
- Joint Identifiability of Cross-Domain Recommendation via Hierarchical Subspace Disentanglement
Cross-Domain Recommendation (CDR) seeks to enable effective knowledge transfer across domains.
While CDR methods describe user representations as a joint distribution over two domains, they fail to account for its joint identifiability.
We propose a hierarchical subspace disentanglement approach to explore the joint identifiability of the cross-domain joint distribution.
arXiv Detail & Related papers (2024-04-06T03:11:31Z)
- Semantic Connectivity-Driven Pseudo-labeling for Cross-domain Segmentation
Self-training is a prevailing approach in cross-domain semantic segmentation.
We propose a novel approach called Semantic Connectivity-driven pseudo-labeling.
This approach formulates pseudo-labels at the connectivity level and thus can facilitate learning structured and low-noise semantics.
arXiv Detail & Related papers (2023-12-11T12:29:51Z)
- Object Segmentation by Mining Cross-Modal Semantics
We propose a novel approach that mines cross-modal semantics to guide the fusion and decoding of multimodal features.
Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) a coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision.
arXiv Detail & Related papers (2023-05-17T14:30:11Z)
- Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis
Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various fine-grained sentiment analysis tasks on a target domain by transferring knowledge from a source domain.
We propose a unified bidirectional generative framework to tackle various cross-domain ABSA tasks.
Our framework trains a generative model in both text-to-label and label-to-text directions.
arXiv Detail & Related papers (2023-05-16T15:02:23Z)
- Subsidiary Prototype Alignment for Universal Domain Adaptation
A major problem in Universal Domain Adaptation (UniDA) is the misalignment of "known" and "unknown" classes.
We propose a novel word-histogram-related pretext task to enable closed-set SPA, operating in conjunction with the goal-task UniDA.
We demonstrate the efficacy of our approach on top of existing UniDA techniques, yielding state-of-the-art performance across three standard UniDA and Open-Set DA object recognition benchmarks.
arXiv Detail & Related papers (2022-10-28T05:32:14Z)
- DecoupleNet: Decoupled Network for Domain Adaptive Semantic Segmentation
Unsupervised domain adaptation in semantic segmentation has been proposed to alleviate the reliance on expensive pixel-wise annotations.
We propose DecoupleNet, which alleviates source-domain overfitting and enables the final model to focus more on the segmentation task.
We also put forward Self-Discrimination (SD) and introduce an auxiliary classifier to learn more discriminative target-domain features with pseudo labels.
arXiv Detail & Related papers (2022-07-20T15:47:34Z)
- CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain.
One fundamental problem for category-level UDA is the production of pseudo labels for samples in the target domain.
We design a two-way center-aware labeling algorithm to produce pseudo labels for target samples.
Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment.
arXiv Detail & Related papers (2021-09-13T17:59:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.