CoRec: An Easy Approach for Coordination Recognition
- URL: http://arxiv.org/abs/2311.18712v1
- Date: Thu, 30 Nov 2023 17:11:27 GMT
- Title: CoRec: An Easy Approach for Coordination Recognition
- Authors: Qing Wang, Haojie Jia, Wenfei Song, Qi Li
- Abstract summary: We propose a pipeline model COordination RECognizer (CoRec)
It consists of two components: a coordinator identifier and a conjunct boundary detector.
Experiments show that CoRec positively impacts downstream tasks, improving the yield of state-of-the-art Open IE models.
- Score: 8.618336635685859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we observe and address the challenges of the coordination
recognition task. Most existing methods rely on syntactic parsers to identify
the coordinators in a sentence and detect the coordination boundaries. However,
state-of-the-art syntactic parsers are slow and suffer from errors, especially
for long and complicated sentences. To better solve the problems, we propose a
pipeline model COordination RECognizer (CoRec). It consists of two components:
coordinator identifier and conjunct boundary detector. The experimental results
on datasets from various domains demonstrate the effectiveness and efficiency
of the proposed method. Further experiments show that CoRec positively impacts
downstream tasks, improving the yield of state-of-the-art Open IE models.
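The two-stage pipeline described in the abstract can be illustrated with a minimal sketch. This is a hypothetical, rule-based stand-in for illustration only: the paper implements both stages as trained neural sequence labelers, and the function names, the coordinator word list, and the one-token-per-conjunct heuristic are all assumptions, not the authors' method.

```python
# Sketch of a CoRec-style two-stage pipeline (hypothetical interfaces).
# Stage 1 finds coordinator positions; stage 2 predicts the span of each
# conjunct around a given coordinator.

COORDINATORS = {"and", "or", "but", "nor"}  # toy lexicon, not from the paper

def identify_coordinators(tokens):
    """Stage 1: return indices of candidate coordinators in the sentence."""
    return [i for i, tok in enumerate(tokens) if tok.lower() in COORDINATORS]

def detect_conjunct_boundaries(tokens, coord_idx):
    """Stage 2 (toy heuristic): take the single token on each side of the
    coordinator as the two conjuncts. A real detector would predict
    begin/end boundary labels with a trained model."""
    left = (max(coord_idx - 1, 0), coord_idx - 1)
    right = (coord_idx + 1, min(coord_idx + 1, len(tokens) - 1))
    return left, right

def recognize_coordination(sentence):
    """Run both stages and collect (coordinator, left span, right span)."""
    tokens = sentence.split()
    results = []
    for i in identify_coordinators(tokens):
        left, right = detect_conjunct_boundaries(tokens, i)
        results.append({"coordinator": tokens[i],
                        "left": left, "right": right})
    return results

print(recognize_coordination("She bought apples and oranges"))
```

The point of the sketch is the decomposition itself: by splitting recognition into coordinator identification followed by boundary detection, the pipeline avoids running a full syntactic parser, which the abstract identifies as the slow and error-prone step in prior work.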
Related papers
- CoSD: Collaborative Stance Detection with Contrastive Heterogeneous Topic Graph Learning [18.75039816544345]
We present a novel collaborative stance detection framework called CoSD.
CoSD learns topic-aware semantics and collaborative signals among texts, topics, and stance labels.
Experiments on two benchmark datasets demonstrate the state-of-the-art detection performance of CoSD.
arXiv Detail & Related papers (2024-04-26T02:04:05Z)
- Incentivized Collaboration in Active Learning [17.972077492741928]
In collaborative active learning, where multiple agents try to learn labels from a common hypothesis, we introduce an innovative framework for incentivized collaboration.
We focus on designing (strict) individually rational (IR) collaboration protocols, ensuring that agents cannot reduce their expected label complexity by acting individually.
arXiv Detail & Related papers (2023-11-01T03:17:39Z)
- PSDiff: Diffusion Model for Person Search with Iterative and Collaborative Refinement [59.6260680005195]
We present a novel Person Search framework based on the Diffusion model, PSDiff.
PSDiff formulates the person search as a dual denoising process from noisy boxes and ReID embeddings to ground truths.
Following the new paradigm, we further design a new Collaborative Denoising Layer (CDL) to optimize detection and ReID sub-tasks in an iterative and collaborative way.
arXiv Detail & Related papers (2023-09-20T08:16:39Z)
- Re-mine, Learn and Reason: Exploring the Cross-modal Semantic Correlations for Language-guided HOI detection [57.13665112065285]
Human-Object Interaction (HOI) detection is a challenging computer vision task.
We present a framework that enhances HOI detection by incorporating structured text knowledge.
arXiv Detail & Related papers (2023-07-25T14:20:52Z)
- ECO-TR: Efficient Correspondences Finding Via Coarse-to-Fine Refinement [80.94378602238432]
We propose an efficient structure named Correspondence Efficient Transformer (ECO-TR) by finding correspondences in a coarse-to-fine manner.
To achieve this, multiple transformer blocks are stage-wisely connected to gradually refine the predicted coordinates.
Experiments on various sparse and dense matching tasks demonstrate the superiority of our method in both efficiency and effectiveness against existing state-of-the-arts.
arXiv Detail & Related papers (2022-09-25T13:05:33Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- RACA: Relation-Aware Credit Assignment for Ad-Hoc Cooperation in Multi-Agent Deep Reinforcement Learning [55.55009081609396]
We propose a novel method, called Relation-Aware Credit Assignment (RACA), which achieves zero-shot generalization in ad-hoc cooperation scenarios.
RACA takes advantage of a graph-based relation encoder to encode the topological structure between agents.
Our method outperforms baseline methods on the StarCraft II micromanagement benchmark and in ad-hoc cooperation scenarios.
arXiv Detail & Related papers (2022-06-02T03:39:27Z)
- Context-Aware Sparse Deep Coordination Graphs [20.582393720212547]
Learning sparse coordination graphs adaptive to the coordination dynamics among agents is a long-standing problem in cooperative multi-agent learning.
This paper proposes several value-based and observation-based schemes for learning dynamic topologies and evaluates them on a new Multi-Agent COordination (MACO) benchmark.
By analyzing the individual advantages of each learning scheme on each type of problem and their overall performance, we propose a novel method using the variance of utility difference functions to learn context-aware sparse coordination topologies.
arXiv Detail & Related papers (2021-06-05T12:59:03Z)
- Cascaded Human-Object Interaction Recognition [175.60439054047043]
We introduce a cascade architecture for a multi-stage, coarse-to-fine HOI understanding.
At each stage, an instance localization network progressively refines HOI proposals and feeds them into an interaction recognition network.
With our carefully-designed human-centric relation features, these two modules work collaboratively towards effective interaction understanding.
arXiv Detail & Related papers (2020-03-09T17:05:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.