Cross-TOP: Zero-Shot Cross-Schema Task-Oriented Parsing
- URL: http://arxiv.org/abs/2206.05352v1
- Date: Fri, 10 Jun 2022 20:50:08 GMT
- Title: Cross-TOP: Zero-Shot Cross-Schema Task-Oriented Parsing
- Authors: Melanie Rubino, Nicolas Guenon des Mesnards, Uday Shah, Nanjiang
Jiang, Weiqi Sun, Konstantine Arkoudas
- Abstract summary: Cross-TOP is a zero-shot method for complex semantic parsing in a given vertical.
We show that Cross-TOP can achieve high accuracy on a previously unseen task without requiring any additional training data.
- Score: 5.5947246682911205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning methods have enabled task-oriented semantic parsing of
increasingly complex utterances. However, a single model is still typically
trained and deployed for each task separately, requiring labeled training data
for each, which makes it challenging to support new tasks, even within a single
business vertical (e.g., food-ordering or travel booking). In this paper we
describe Cross-TOP (Cross-Schema Task-Oriented Parsing), a zero-shot method for
complex semantic parsing in a given vertical. By leveraging the fact that user
requests from the same vertical share lexical and semantic similarities, a
single cross-schema parser is trained to service an arbitrary number of tasks,
seen or unseen, within a vertical. We show that Cross-TOP can achieve high
accuracy on a previously unseen task without requiring any additional training
data, thereby providing a scalable way to bootstrap semantic parsers for new
tasks. As part of this work we release the FoodOrdering dataset, a
task-oriented parsing dataset in the food-ordering vertical, with utterances
and annotations derived from five schemas, each from a different restaurant
menu.
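The cross-schema idea can be sketched in a few lines: serialize the target task's schema (here, a restaurant menu) and feed it to the parser together with the utterance, so one trained model can service menus it has never seen. This is a minimal illustration assuming a generic seq2seq interface; the helper names and the schema serialization format are hypothetical, not the paper's exact design.
```python
# Hypothetical sketch: one parser serves any menu in the vertical by
# reading a serialized schema alongside the utterance. The schema
# format and helper names are illustrative, not Cross-TOP's exact design.

def serialize_schema(menu: dict) -> str:
    """Flatten a menu schema into a textual prompt."""
    return " ; ".join(
        f"{item} ( {' | '.join(slots)} )" for item, slots in menu.items()
    )

def build_parser_input(utterance: str, menu: dict) -> str:
    # The schema acts as a dynamic vocabulary for the decoder, so the
    # same trained parser can be pointed at an unseen restaurant menu.
    return f"schema: {serialize_schema(menu)} utterance: {utterance}"

menu = {
    "PIZZA": ["SIZE", "TOPPING", "STYLE"],
    "DRINK": ["SIZE", "DRINK_TYPE"],
}
print(build_parser_input("large pepperoni pizza and a small coke", menu))
```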
Related papers
- The Power of Summary-Source Alignments [62.76959473193149]
Multi-document summarization (MDS) is a challenging task, often decomposed into subtasks of salience and redundancy detection.
Alignment of corresponding sentences between a reference summary and its source documents has been leveraged to generate training data.
This paper proposes extending the summary-source alignment framework by applying it at the more fine-grained proposition span level.
arXiv Detail & Related papers (2024-06-02T19:35:19Z)
- Joint-Task Regularization for Partially Labeled Multi-Task Learning [30.823282043129552]
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
We propose Joint-Task Regularization (JTR), an intuitive technique which leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space.
arXiv Detail & Related papers (2024-04-02T14:16:59Z)
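As a toy reading of the JTR entry above, the sketch below places each task's encoding in one shared latent space and computes a mean pairwise-distance penalty over tasks; the vectors and the L2 penalty are stand-in assumptions, not the paper's actual regularizer.
```python
import math

# Toy sketch of regularizing tasks in a single joint latent space
# (JTR entry above). The pairwise-L2 penalty is an assumed stand-in
# for the paper's actual loss.

def l2(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def joint_task_penalty(task_latents):
    """Mean pairwise distance between per-task embeddings."""
    vecs = list(task_latents.values())
    pairs = [(i, j) for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return sum(l2(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)

latents = {
    "depth": [0.2, 0.9],
    "segmentation": [0.1, 1.0],
    "normals": [0.4, 0.7],
}
print(round(joint_task_penalty(latents), 4))
```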
- Task2Box: Box Embeddings for Modeling Asymmetric Task Relationships [19.02802837808466]
We propose Task2Box, an approach to represent tasks using box embeddings.
We show that Task2Box accurately predicts unseen hierarchical relationships between nodes in ImageNet and iNaturalist datasets.
We also show that box embeddings estimated from task representations can be used to predict relationships more accurately than classifiers trained on the same representations.
arXiv Detail & Related papers (2024-03-25T20:39:58Z)
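A toy rendering of the box-embedding idea in the Task2Box entry above: representing each task as an axis-aligned box makes relationship scores asymmetric, since how much of A fits inside B differs from how much of B fits inside A. The 2-D boxes and the containment score below are illustrative assumptions.
```python
# Toy box embeddings in the spirit of the Task2Box entry above; the
# 2-D boxes and containment score are illustrative assumptions.

def volume(box):
    lo, hi = box
    v = 1.0
    for l, h in zip(lo, hi):
        v *= max(h - l, 0.0)
    return v

def intersection(a, b):
    lo = [max(x, y) for x, y in zip(a[0], b[0])]
    hi = [min(x, y) for x, y in zip(a[1], b[1])]
    return (lo, hi)

def containment(a, b):
    """Fraction of box a's volume inside box b: an asymmetric score."""
    va = volume(a)
    return volume(intersection(a, b)) / va if va > 0 else 0.0

birds = ([0.2, 0.2], [0.4, 0.4])    # narrow task box
animals = ([0.0, 0.0], [1.0, 1.0])  # broad task box
print(containment(birds, animals))  # 1.0: birds nest inside animals
print(containment(animals, birds))  # ~0.04: the reverse does not hold
```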
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little, or even non-overlapping, annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
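To add one concrete step to the distribution-matching entry above: a task head with no label for a given sample can still receive a training signal by pulling its predicted class distribution toward a reference distribution. The symmetric-KL loss below is an assumed example of such a matching term, not the paper's exact formulation.
```python
import math

# Assumed symmetric-KL matching loss in the spirit of the entry above;
# not the paper's exact formulation.

def kl(p, q, eps=1e-9):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def matching_loss(pred, ref):
    """Pull one head's class distribution toward a reference one."""
    return 0.5 * (kl(pred, ref) + kl(ref, pred))

# A sample labeled only for task A: task B's head matches A-derived
# reference probabilities instead of a missing hard label.
print(round(matching_loss([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]), 4))
```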
- tasksource: A Dataset Harmonization Framework for Streamlined NLP Multi-Task Learning and Evaluation [2.869669835645836]
We release a dataset annotation framework and dataset annotations for more than 500 English tasks.
These annotations include metadata, such as the names of columns to be used as input or labels for all datasets.
We fine-tune a multi-task text encoder on all tasksource tasks, outperforming every publicly available text encoder of comparable size in an external evaluation.
arXiv Detail & Related papers (2023-01-14T16:38:04Z)
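As a rough illustration of the tasksource entry above, the snippet below guesses at the kind of column-mapping metadata it describes: declaring which dataset columns serve as inputs and which as labels, so heterogeneous datasets load through one interface. The field names and dataset keys are assumptions, not the framework's actual API.
```python
# Hypothetical column-mapping annotations in the spirit of the
# tasksource entry above; field names and dataset keys are assumed,
# not the framework's actual schema.

TASK_ANNOTATIONS = {
    "glue/rte": {"inputs": ["sentence1", "sentence2"], "label": "label"},
    "imdb": {"inputs": ["text"], "label": "label"},
}

def to_example(task: str, row: dict) -> dict:
    """Map a raw dataset row to a uniform (text, label) example."""
    spec = TASK_ANNOTATIONS[task]
    text = " [SEP] ".join(str(row[c]) for c in spec["inputs"])
    return {"text": text, "label": row[spec["label"]]}

print(to_example("imdb", {"text": "great movie", "label": 1}))
```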
- Improving Cross-task Generalization of Unified Table-to-text Models with Compositional Task Configurations [63.04466647849211]
Methods typically encode task information with a simple dataset name as a prefix to the encoder.
We propose compositional task configurations, a set of prompts prepended to the encoder to improve cross-task generalization.
We show this not only allows the model to better learn shared knowledge across different tasks at training, but also allows us to control the model by composing new configurations.
arXiv Detail & Related papers (2022-12-17T02:20:14Z)
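To make the compositional-configuration idea above concrete, the sketch below composes an encoder prompt from reusable task, input, and output components rather than a bare dataset name. The component strings are invented for illustration and are not the paper's actual prompt inventory.
```python
# Hypothetical composition of a task configuration prompt from
# reusable pieces; component names are illustrative, not the paper's
# exact prompt set.

COMPONENTS = {
    "task": {"summarization": "[TASK] table summarization",
             "qa": "[TASK] table question answering"},
    "input": {"table": "[INPUT] linearized table"},
    "output": {"text": "[OUTPUT] fluent sentence"},
}

def compose(task: str, inp: str, out: str) -> str:
    """Build the prompt prepended to the encoder input."""
    return " ".join([COMPONENTS["task"][task],
                     COMPONENTS["input"][inp],
                     COMPONENTS["output"][out]])

# New configurations can be composed at inference time to control the model.
print(compose("qa", "table", "text"))
```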
- PIZZA: A new benchmark for complex end-to-end task-oriented parsing [3.5106870325869886]
This paper introduces a new dataset for parsing pizza and drink orders, whose semantics cannot be captured by flat slots and intents.
We perform an evaluation of deep-learning techniques for task-oriented parsing on this dataset, including different flavors of seq2seq models.
arXiv Detail & Related papers (2022-12-01T04:20:07Z)
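The kind of nested target motivating the PIZZA benchmark above can be shown with a small tree linearizer producing TOP-style bracket notation; the node labels are chosen for illustration and may not match the dataset's exact annotation scheme.
```python
# Nested orders exceed flat intents/slots; a tree target makes the
# structure explicit. Labels here are illustrative, not necessarily
# the PIZZA dataset's exact annotation scheme.

def linearize(node):
    """Render a (label, children) tree in TOP-style bracket notation."""
    label, children = node
    inner = " ".join(c if isinstance(c, str) else linearize(c) for c in children)
    return f"({label} {inner} )"

order = ("ORDER", [
    ("PIZZAORDER", [("SIZE", ["large"]), ("TOPPING", ["pepperoni"])]),
    ("DRINKORDER", [("SIZE", ["small"]), ("DRINKTYPE", ["coke"])]),
])
print(linearize(order))
# (ORDER (PIZZAORDER (SIZE large ) (TOPPING pepperoni ) ) (DRINKORDER ... ) )
```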
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Improving Multi-task Generalization Ability for Neural Text Matching via Prompt Learning [54.66399120084227]
Recent state-of-the-art neural text matching models based on pre-trained language models (PLMs) struggle to generalize to different tasks.
We adopt a specialization-generalization training strategy and refer to it as Match-Prompt.
In the specialization stage, descriptions of different matching tasks are mapped to only a few prompt tokens.
In the generalization stage, the text matching model explores the essential matching signals by being trained on diverse matching tasks.
arXiv Detail & Related papers (2022-04-06T11:01:08Z)
- X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing [51.81533991497547]
Task-oriented compositional semantic parsing (TCSP) handles complex nested user queries.
We present X2Parser, a transferable Cross-lingual and Cross-domain Parser for TCSP.
We propose to predict flattened intent and slot representations separately and cast both prediction tasks into sequence labeling problems.
arXiv Detail & Related papers (2021-06-07T16:40:05Z)
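To illustrate the sequence-labeling cast in the X2Parser entry above, the toy snippet below decodes a flattened slot prediction from BIO tags; the tokens, tags, and label names are illustrative assumptions, not the paper's label inventory.
```python
# Toy decoding of a flattened slot prediction expressed as BIO tags,
# in the spirit of the X2Parser entry above; labels are assumed.

tokens = ["wake", "me", "up", "at", "9", "am"]
slot_tags = ["O", "O", "O", "O", "B-DATETIME", "I-DATETIME"]

def extract_slots(tokens, tags):
    """Collect (label, span text) pairs from a BIO tag sequence."""
    slots, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            current = [tag[2:], [tok]]
            slots.append(current)
        elif tag.startswith("I-") and current:
            current[1].append(tok)
        else:
            current = None
    return [(label, " ".join(words)) for label, words in slots]

print(extract_slots(tokens, slot_tags))  # [('DATETIME', '9 am')]
```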
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.