A Structured Span Selector
- URL: http://arxiv.org/abs/2205.03977v3
- Date: Wed, 23 Aug 2023 05:18:04 GMT
- Title: A Structured Span Selector
- Authors: Tianyu Liu, Yuchen Eleanor Jiang, Ryan Cotterell, Mrinmaya Sachan
- Abstract summary: We propose a novel grammar-based structured span selection model.
We evaluate our model on two popular span prediction tasks: coreference resolution and semantic role labeling.
- Score: 100.0808682810258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many natural language processing tasks, e.g., coreference resolution and
semantic role labeling, require selecting text spans and making decisions about
them. A typical approach to such tasks is to score all possible spans and
greedily select spans for task-specific downstream processing. This approach,
however, does not incorporate any inductive bias about what sort of spans ought
to be selected, e.g., that selected spans tend to be syntactic constituents. In
this paper, we propose a novel grammar-based structured span selection model
which learns to make use of the partial span-level annotation provided for such
problems. Compared to previous approaches, our approach gets rid of the
heuristic greedy span selection scheme, allowing us to model the downstream
task on an optimal set of spans. We evaluate our model on two popular span
prediction tasks: coreference resolution and semantic role labeling. We show
empirical improvements on both.
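The greedy baseline that the abstract contrasts against (score all possible spans, then greedily keep the best ones) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function names and the idea of enforcing non-overlap during selection are assumptions for the sketch.

```python
# Hypothetical sketch of the greedy span-selection baseline described in
# the abstract: enumerate all O(n^2) spans of a length-n sentence, score
# them, and greedily keep the top-k non-overlapping spans.
def enumerate_spans(n, max_width=None):
    """All (start, end) spans over a length-n sentence, end exclusive."""
    max_width = max_width or n
    return [(i, j) for i in range(n)
            for j in range(i + 1, min(n, i + max_width) + 1)]

def greedy_select(spans, scores, k):
    """Keep the k highest-scoring spans that pairwise do not overlap."""
    selected = []
    for span in sorted(spans, key=lambda s: scores[s], reverse=True):
        if len(selected) == k:
            break
        # Two spans are disjoint iff one ends before the other starts.
        if all(span[1] <= s[0] or s[1] <= span[0] for s in selected):
            selected.append(span)
    return selected
```

The paper's structured model replaces exactly this heuristic step with a grammar-based selection over spans, which is why the greedy loop above is the relevant point of comparison.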
Related papers
- Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval [109.62363167257664]
We propose a generative model for learning multilingual text embeddings.
Our model operates on parallel data in $N$ languages.
We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval.
arXiv Detail & Related papers (2022-12-21T02:41:40Z)
- InstructionNER: A Multi-Task Instruction-Based Generative Framework for Few-shot NER [31.32381919473188]
We propose a multi-task instruction-based generative framework, named InstructionNER, for low-resource named entity recognition.
Specifically, we reformulate the NER task as a generation problem, enriching source sentences with task-specific instructions and answer options, and then infer the entities and types in natural language.
Experimental results show that our method consistently outperforms other baselines on five datasets in few-shot settings.
arXiv Detail & Related papers (2022-03-08T07:56:36Z)
- Retrieve-and-Fill for Scenario-based Task-Oriented Semantic Parsing [110.4684789199555]
We introduce scenario-based semantic parsing: a variant of the original task that first requires disambiguating an utterance's "scenario".
This formulation enables us to isolate coarse-grained and fine-grained aspects of the task, each of which we solve with off-the-shelf neural modules.
Our model is modular, differentiable, interpretable, and allows us to garner extra supervision from scenarios.
arXiv Detail & Related papers (2022-02-02T08:00:21Z)
- Relation-aware Video Reading Comprehension for Temporal Language Grounding [67.5613853693704]
Temporal language grounding in videos aims to localize the temporal span relevant to the given query sentence.
This paper formulates temporal language grounding as video reading comprehension and proposes a Relation-aware Network (RaNet) to address it.
arXiv Detail & Related papers (2021-10-12T03:10:21Z)
- Using Optimal Transport as Alignment Objective for fine-tuning Multilingual Contextualized Embeddings [7.026476782041066]
We propose using Optimal Transport (OT) as an alignment objective during fine-tuning to improve multilingual contextualized representations.
This approach does not require word-alignment pairs prior to fine-tuning and instead learns the word alignments within context in an unsupervised manner.
arXiv Detail & Related papers (2021-10-06T16:13:45Z)
- Few-shot Intent Classification and Slot Filling with Retrieved Examples [30.45269507626138]
We propose a span-level retrieval method that learns similar contextualized representations for spans with the same label via a novel batch-softmax objective.
Our method outperforms previous systems in various few-shot settings on the CLINC and SNIPS benchmarks.
arXiv Detail & Related papers (2021-04-12T18:50:34Z)
- Dynamic Context Selection for Document-level Neural Machine Translation via Reinforcement Learning [55.18886832219127]
We propose an effective approach to select dynamic context for document-level translation.
A novel reward is proposed to encourage the selection and utilization of dynamic context sentences.
Experiments demonstrate that our approach can select adaptive context sentences for different source sentences.
arXiv Detail & Related papers (2020-10-09T01:05:32Z)
- A Cross-Task Analysis of Text Span Representations [52.28565379517174]
We find that the optimal span representation varies by task, and can also vary within different facets of individual tasks.
We also find that the choice of span representation has a bigger impact with a fixed pretrained encoder than with a fine-tuned encoder.
arXiv Detail & Related papers (2020-06-06T13:37:51Z)
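Two of the span representations most commonly compared in work like the entry above are endpoint concatenation and mean pooling over token vectors. A minimal sketch, assuming token vectors are plain Python lists; the function names are hypothetical and this is not code from the paper:

```python
# Illustrative sketch of two standard ways to build a span representation
# from per-token vectors. Spans are half-open intervals [i, j).
def endpoint_concat(tokens, i, j):
    """Concatenate the first and last token vectors of span [i, j)."""
    return tokens[i] + tokens[j - 1]

def mean_pool(tokens, i, j):
    """Element-wise mean of the token vectors in span [i, j)."""
    width = j - i
    return [sum(t[d] for t in tokens[i:j]) / width
            for d in range(len(tokens[0]))]
```

The finding that the best choice varies by task suggests treating the span representation as a tunable design decision rather than a fixed default.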
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.