A Joint and Domain-Adaptive Approach to Spoken Language Understanding
- URL: http://arxiv.org/abs/2107.11768v1
- Date: Sun, 25 Jul 2021 09:38:42 GMT
- Title: A Joint and Domain-Adaptive Approach to Spoken Language Understanding
- Authors: Linhao Zhang, Yu Shi, Linjun Shou, Ming Gong, Houfeng Wang, Michael Zeng
- Abstract summary: Spoken Language Understanding (SLU) is composed of two subtasks: intent detection (ID) and slot filling (SF).
One line of research jointly tackles the two subtasks to improve their prediction accuracy, and the other focuses on the domain-adaptation ability of one of the subtasks.
In this paper, we propose a joint and domain-adaptive approach to SLU.
- Score: 30.164751046395573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spoken Language Understanding (SLU) is composed of two subtasks: intent
detection (ID) and slot filling (SF). There are two lines of research on SLU.
One jointly tackles these two subtasks to improve their prediction accuracy,
and the other focuses on the domain-adaptation ability of one of the subtasks.
In this paper, we attempt to bridge these two lines of research and propose a
joint and domain-adaptive approach to SLU. We formulate SLU as a constrained
generation task and utilize a dynamic vocabulary based on domain-specific
ontology. We conduct experiments on the ASMixed and MTOD datasets and achieve
competitive performance with previous state-of-the-art joint models. In addition,
the results show that our joint model can be effectively adapted to a new domain.
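The abstract does not detail how the constrained generation or the dynamic vocabulary is implemented, so the sketch below is only an illustration of the general idea under assumed details: the toy ontology, the intent and slot names, and the helpers `build_dynamic_vocab` and `constrained_step` are hypothetical and are not the authors' code. At each decoding step, the output vocabulary is restricted to tokens licensed by the domain ontology, so the same decoder can in principle be pointed at a new domain simply by swapping in that domain's ontology.

```python
# Minimal sketch (hypothetical, not the authors' code): decoding an intent and
# a slot-value pair as a single constrained generation task, where the tokens
# allowed at each step come from a dynamic, ontology-derived vocabulary.
import numpy as np

# A toy domain ontology: the intents and the values each slot may take.
ONTOLOGY = {
    "intents": ["play_music", "set_alarm"],
    "slots": {"artist": ["queen", "adele"], "time": ["7am", "9pm"]},
}

def build_dynamic_vocab(ontology):
    """Collect every token the decoder is allowed to emit for this domain."""
    vocab = set(ontology["intents"])
    for slot, values in ontology["slots"].items():
        vocab.add(slot)
        vocab.update(values)
    vocab.add("<eos>")
    return sorted(vocab)

def constrained_step(logits, vocab, allowed):
    """Mask the logits so that only tokens in the allowed set can be chosen."""
    masked = np.full_like(logits, -np.inf)
    for i, tok in enumerate(vocab):
        if tok in allowed:
            masked[i] = logits[i]
    return vocab[int(np.argmax(masked))]

vocab = build_dynamic_vocab(ONTOLOGY)
rng = np.random.default_rng(0)  # stand-in for a decoder producing logits

# Step 1: the first generated token is constrained to be an intent label.
intent = constrained_step(rng.normal(size=len(vocab)), vocab,
                          allowed=set(ONTOLOGY["intents"]))
# Step 2: the next token is constrained to be a slot name from the ontology.
slot = constrained_step(rng.normal(size=len(vocab)), vocab,
                        allowed=set(ONTOLOGY["slots"]))
# Step 3: the slot value is constrained to that slot's legal values.
value = constrained_step(rng.normal(size=len(vocab)), vocab,
                         allowed=set(ONTOLOGY["slots"][slot]))
print(intent, slot, value)
```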
Related papers
- Towards Spoken Language Understanding via Multi-level Multi-grained Contrastive Learning [50.1035273069458]
Spoken language understanding (SLU) is a core task in task-oriented dialogue systems.
We propose a multi-level multi-grained contrastive learning (MMCL) framework that applies contrastive learning at three levels: utterance level, slot level, and word level.
Our framework achieves new state-of-the-art results on two public multi-intent SLU datasets.
arXiv Detail & Related papers (2024-05-31T14:34:23Z) - HIT-SCIR at MMNLU-22: Consistency Regularization for Multilingual Spoken Language Understanding [56.756090143062536]
We propose to use consistency regularization based on a hybrid data augmentation strategy.
We conduct experiments on the MASSIVE dataset under both full-dataset and zero-shot settings.
Our proposed method improves the performance on both intent detection and slot filling tasks.
arXiv Detail & Related papers (2023-01-05T11:21:15Z) - Tackling Long-Tailed Category Distribution Under Domain Shifts [50.21255304847395]
Existing approaches cannot handle the scenario where both long-tailed category distribution and domain shift are present.
We designed three novel core functional blocks including Distribution Calibrated Classification Loss, Visual-Semantic Mapping and Semantic-Similarity Guided Augmentation.
Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS.
arXiv Detail & Related papers (2022-07-20T19:07:46Z) - Source-Free Open Compound Domain Adaptation in Semantic Segmentation [99.82890571842603]
In SF-OCDA, only the source pre-trained model and the target data are available to learn the target model.
We propose the Cross-Patch Style Swap (CPSS) to diversify samples with various patch styles in the feature-level.
Our method produces state-of-the-art results on the C-Driving dataset.
arXiv Detail & Related papers (2021-06-07T08:38:41Z) - A Survey on Spoken Language Understanding: Recent Advances and New Frontiers [35.59678070422133]
Spoken Language Understanding (SLU) aims to extract the semantic frame of user queries.
With the rise of deep neural networks and the evolution of pre-trained language models, research on SLU has achieved significant breakthroughs.
arXiv Detail & Related papers (2021-03-04T15:22:00Z) - Meta learning to classify intent and slot labels with noisy few shot examples [11.835266162072486]
Spoken language understanding (SLU) models are notorious for being data-hungry.
We propose a new SLU benchmarking task: few-shot robust SLU, where SLU comprises two core problems, intent classification (IC) and slot labeling (SL).
We show the model consistently outperforms the conventional fine-tuning baseline and another popular meta-learning method, Model-Agnostic Meta-Learning (MAML), in terms of achieving better IC accuracy and SL F1.
arXiv Detail & Related papers (2020-11-30T18:53:30Z) - A Co-Interactive Transformer for Joint Slot Filling and Intent Detection [61.109486326954205]
Intent detection and slot filling are two main tasks for building a spoken language understanding (SLU) system.
Previous studies either model the two tasks separately or only consider the single information flow from intent to slot.
We propose a Co-Interactive Transformer to consider the cross-impact between the two tasks simultaneously; a generic sketch of such joint modeling appears after this list.
arXiv Detail & Related papers (2020-10-08T10:16:52Z) - SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding [61.02342238771685]
Spoken language understanding requires a model to analyze the input acoustic signal to understand its linguistic content and make predictions.
Various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text.
We propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules.
arXiv Detail & Related papers (2020-10-05T19:29:49Z) - Dual Learning for Semi-Supervised Natural Language Understanding [29.692288627633374]
Natural language understanding (NLU) converts sentences into structured semantic forms.
We introduce a dual task of NLU, semantic-to-sentence generation (SSG).
We propose a new framework for semi-supervised NLU with the corresponding dual model.
arXiv Detail & Related papers (2020-04-26T07:17:48Z)
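Several of the papers above (e.g., the Co-Interactive Transformer) model intent detection and slot filling jointly over a shared encoder. The sketch below illustrates only the generic form of that setup; the BiLSTM encoder, the layer sizes, and the mean-pooling choice are assumptions made for illustration and do not reproduce any specific architecture listed above.

```python
# Hypothetical sketch of joint intent detection and slot filling over a shared
# encoder; not the architecture of any specific paper listed above.
import torch
import torch.nn as nn

class JointSLU(nn.Module):
    def __init__(self, vocab_size, hidden, n_intents, n_slot_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden, n_intents)    # utterance level
        self.slot_head = nn.Linear(2 * hidden, n_slot_labels)  # token level

    def forward(self, token_ids):
        states, _ = self.encoder(self.embed(token_ids))
        intent_logits = self.intent_head(states.mean(dim=1))   # pooled utterance
        slot_logits = self.slot_head(states)                   # one label per token
        return intent_logits, slot_logits

model = JointSLU(vocab_size=1000, hidden=64, n_intents=5, n_slot_labels=9)
tokens = torch.randint(0, 1000, (2, 12))  # a toy batch of token ids
intent_logits, slot_logits = model(tokens)
# Joint training typically minimizes the sum of the utterance-level intent
# cross-entropy and the token-level slot cross-entropy, so the two tasks
# share and shape the same encoder representations.
```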