Assessing Data Efficiency in Task-Oriented Semantic Parsing
- URL: http://arxiv.org/abs/2107.04736v1
- Date: Sat, 10 Jul 2021 02:43:16 GMT
- Title: Assessing Data Efficiency in Task-Oriented Semantic Parsing
- Authors: Shrey Desai, Akshat Shrivastava, Justin Rill, Brian Moran, Safiyyah
Saleem, Alexander Zotov, Ahmed Aly
- Abstract summary: We introduce a four-stage protocol which gives an approximate measure of how much in-domain "target" data a parser requires to achieve a certain quality bar.
We apply our protocol in two real-world case studies illustrating its flexibility and applicability to practitioners in task-oriented semantic parsing.
- Score: 54.87705549021248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data efficiency, despite being an attractive characteristic, is often
challenging to measure and optimize for in task-oriented semantic parsing;
unlike exact match, it can require both model- and domain-specific setups,
which have, historically, varied widely across experiments. In our work, as a
step towards providing a unified solution to data-efficiency-related questions,
we introduce a four-stage protocol which gives an approximate measure of how
much in-domain, "target" data a parser requires to achieve a certain quality
bar. Specifically, our protocol consists of (1) sampling target subsets of
different cardinalities, (2) fine-tuning parsers on each subset, (3) obtaining
a smooth curve relating target subset (%) vs. exact match (%), and (4)
referencing the curve to mine ad-hoc (target subset, exact match) points. We
apply our protocol in two real-world case studies -- model generalizability and
intent complexity -- illustrating its flexibility and applicability to
practitioners in task-oriented semantic parsing.
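Stages (3) and (4) of the protocol can be sketched in code. This is a hypothetical illustration, not the authors' implementation: given the (target subset %, exact match %) measurements produced by stages (1)-(2), it builds a piecewise-linear curve and queries it in both directions; the measurement values below are made up for illustration.

```python
def interpolate(points, x):
    """Piecewise-linear interpolation over sorted (x, y) points."""
    points = sorted(points)
    if x <= points[0][0]:
        return points[0][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return points[-1][1]

def data_needed(points, quality_bar):
    """Invert the curve: subset % whose predicted exact match meets the bar."""
    inverted = [(em, pct) for pct, em in points]
    return interpolate(inverted, quality_bar)

# Illustrative (subset %, exact match %) measurements from stages (1)-(2).
curve = [(1, 40.0), (5, 62.0), (10, 71.0), (25, 80.0), (50, 85.0), (100, 88.0)]

print(interpolate(curve, 17.5))   # stage (4): predicted EM at an ad-hoc subset size
print(data_needed(curve, 80.0))   # stage (4): subset % needed to reach an EM bar
```

In practice the paper fits a smooth curve rather than a piecewise-linear one, but the mining step (reading off ad-hoc (target subset, exact match) points) works the same way.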
Related papers
- Interpretable Target-Feature Aggregation for Multi-Task Learning based on Bias-Variance Analysis [53.38518232934096]
Multi-task learning (MTL) is a powerful machine learning paradigm designed to leverage shared knowledge across tasks to improve generalization and performance.
We propose an MTL approach at the intersection between task clustering and feature transformation based on a two-phase iterative aggregation of targets and features.
In both phases, a key aspect is to preserve the interpretability of the reduced targets and features through the aggregation with the mean, which is motivated by applications to Earth science.
arXiv Detail & Related papers (2024-06-12T08:30:16Z)
- Multi-task Bias-Variance Trade-off Through Functional Constraints [102.64082402388192]
Multi-task learning aims to acquire a set of functions that perform well for diverse tasks.
In this paper we draw intuition from the two extreme learning scenarios -- a single function for all tasks, and a task-specific function that ignores the other tasks.
We introduce a constrained learning formulation that enforces domain-specific solutions to stay close to a central function.
arXiv Detail & Related papers (2022-10-27T16:06:47Z)
- CFNet: Learning Correlation Functions for One-Stage Panoptic Segmentation [46.252118473248316]
We propose to first predict semantic-level and instance-level correlations among different locations that are utilized to enhance the backbone features.
We then feed the improved discriminative features into the corresponding segmentation heads, respectively.
We achieve state-of-the-art performance with 45.1% PQ on MS-COCO and 32.6% PQ on ADE20k.
arXiv Detail & Related papers (2022-01-13T05:31:14Z)
- RETRONLU: Retrieval Augmented Task-Oriented Semantic Parsing [11.157958012672202]
We are applying retrieval-based modeling ideas to the problem of multi-domain task-oriented semantic parsing.
Our approach, RetroNLU, extends a sequence-to-sequence model architecture with a retrieval component.
We analyze the nearest neighbor retrieval component's quality, model sensitivity and break down the performance for semantic parses of different utterance complexity.
arXiv Detail & Related papers (2021-09-21T19:30:30Z)
- Active Learning by Acquiring Contrastive Examples [8.266097781813656]
We propose an acquisition function that opts for selecting contrastive examples, i.e. data points that are similar in the model feature space yet receive maximally different predictive likelihoods from the model.
We compare our approach with a diverse set of acquisition functions in four natural language understanding tasks and seven datasets.
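A minimal sketch of this kind of contrastive acquisition step, under simplifying assumptions (pool-based scoring with KL divergence and toy features; not the paper's actual implementation): each candidate is scored by how strongly the model's predictions on its nearest feature-space neighbors diverge from its own prediction.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_scores(features, probs, k=2):
    """Score each point by mean KL divergence to its k nearest neighbors."""
    scores = []
    for i, (fi, pi) in enumerate(zip(features, probs)):
        neighbors = sorted(
            (j for j in range(len(features)) if j != i),
            key=lambda j: euclidean(fi, features[j]),
        )[:k]
        scores.append(sum(kl(probs[j], pi) for j in neighbors) / k)
    return scores

# Toy pool: three clustered points plus one outlier; illustrative values only.
feats = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
probs = [(0.9, 0.1), (0.1, 0.9), (0.85, 0.15), (0.5, 0.5)]
scores = contrastive_scores(feats, probs)

# The point whose neighbors disagree most with it gets the highest score.
print(max(range(len(scores)), key=scores.__getitem__))
```

The highest-scoring point here is the one embedded among near-identical neighbors that nevertheless receives an opposite prediction, which is the intuition behind "contrastive" examples.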
arXiv Detail & Related papers (2021-09-08T16:40:18Z)
- X2Parser: Cross-Lingual and Cross-Domain Framework for Task-Oriented Compositional Semantic Parsing [51.81533991497547]
Task-oriented compositional semantic parsing (TCSP) handles complex nested user queries.
We present X2Parser, a transferable Cross-lingual and Cross-domain Parser for TCSP.
We propose to predict flattened intent and slot representations separately and cast both prediction tasks into sequence labeling problems.
arXiv Detail & Related papers (2021-06-07T16:40:05Z)
- Exploring Relational Context for Multi-Task Dense Prediction [76.86090370115]
We consider a multi-task environment for dense prediction tasks, represented by a common backbone and independent task-specific heads.
We explore various attention-based contexts, such as global and local, in the multi-task setting.
We propose an Adaptive Task-Relational Context module, which samples the pool of all available contexts for each task pair.
arXiv Detail & Related papers (2021-04-28T16:45:56Z)
- Latent Space Regularization for Unsupervised Domain Adaptation in Semantic Segmentation [14.050836886292869]
We introduce feature-level space-shaping regularization strategies to reduce the domain discrepancy in semantic segmentation.
We verify the effectiveness of such methods in the autonomous driving setting.
arXiv Detail & Related papers (2021-04-06T16:07:22Z)
- The Advantage of Conditional Meta-Learning for Biased Regularization and Fine-Tuning [50.21341246243422]
Biased regularization and fine-tuning are two recent meta-learning approaches.
We propose conditional meta-learning, inferring a conditioning function mapping a task's side information into a meta-parameter vector.
We then propose a convex meta-algorithm providing a comparable advantage also in practice.
arXiv Detail & Related papers (2020-08-25T07:32:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.