Joint Intent Detection And Slot Filling Based on Continual Learning Model
- URL: http://arxiv.org/abs/2102.10905v1
- Date: Mon, 22 Feb 2021 11:10:35 GMT
- Title: Joint Intent Detection And Slot Filling Based on Continual Learning Model
- Authors: Yanfei Hui, Jianzong Wang, Ning Cheng, Fengying Yu, Tianbo Wu, Jing Xiao
- Abstract summary: A Continual Learning Interrelated Model (CLIM) is proposed to consider semantic information with different characteristics.
The experimental results show that CLIM achieves state-of-the-art performance on slot filling and intent detection on ATIS and Snips.
- Score: 18.961950574045648
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Slot filling and intent detection have become a significant theme in the
field of natural language understanding. Although slot filling is closely related to
intent detection, the characteristics of the information required by the two tasks
differ, and most existing approaches do not fully account for this difference. In
addition, effectively balancing the accuracy of the two tasks is an unavoidable
problem for joint learning models. In this paper, a Continual Learning Interrelated
Model (CLIM) is proposed to consider semantic information with different
characteristics and to balance the accuracy between intent detection and slot
filling effectively. The experimental results show that CLIM achieves
state-of-the-art performance on slot filling and intent detection on ATIS and Snips.
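The abstract above does not spell out the model architecture. As a point of reference, a joint intent detection and slot filling model is commonly built as a shared encoder with an utterance-level intent head and a token-level slot-tagging head, trained with a weighted sum of the two losses. The sketch below is a minimal, hypothetical baseline of that kind; it is not the CLIM architecture, and every module name, dimension, and the loss weight alpha are assumptions.

```python
import torch
import torch.nn as nn


class JointIntentSlotModel(nn.Module):
    """Minimal joint model: shared encoder, intent head, slot-tagging head.

    A generic baseline sketch, not the CLIM architecture from the paper.
    """

    def __init__(self, vocab_size, num_intents, num_slot_labels,
                 emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Shared BiLSTM encoder over the utterance tokens.
        self.encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                               bidirectional=True)
        # Utterance-level intent classifier (uses a pooled representation).
        self.intent_head = nn.Linear(2 * hidden_dim, num_intents)
        # Token-level slot tagger (one label per token).
        self.slot_head = nn.Linear(2 * hidden_dim, num_slot_labels)

    def forward(self, token_ids):
        emb = self.embedding(token_ids)            # (B, T, E)
        hidden, _ = self.encoder(emb)              # (B, T, 2H)
        pooled = hidden.mean(dim=1)                # simple mean pooling
        intent_logits = self.intent_head(pooled)   # (B, num_intents)
        slot_logits = self.slot_head(hidden)       # (B, T, num_slot_labels)
        return intent_logits, slot_logits


def joint_loss(intent_logits, slot_logits, intent_gold, slot_gold, alpha=0.5):
    """Weighted sum of the two task losses; alpha trades off the two tasks."""
    ce = nn.CrossEntropyLoss()
    intent_loss = ce(intent_logits, intent_gold)
    slot_loss = ce(slot_logits.reshape(-1, slot_logits.size(-1)),
                   slot_gold.reshape(-1))
    return alpha * intent_loss + (1.0 - alpha) * slot_loss
```

In such a setup, the weight alpha is the simplest knob for trading off intent accuracy against slot filling accuracy, which is the balancing problem the abstract refers to.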
Related papers
- Gaussian Mixture Models for Affordance Learning using Bayesian Networks [50.18477618198277]
Affordances are fundamental descriptors of relationships between actions, objects and effects.
This paper approaches the problem of an embodied agent exploring the world and learning these affordances autonomously from its sensory experiences.
arXiv Detail & Related papers (2024-02-08T22:05:45Z)
- MISCA: A Joint Model for Multiple Intent Detection and Slot Filling with Intent-Slot Co-Attention [9.414164374919029]
Recent advanced approaches, which are joint models based on graphs, might still face two potential issues.
We propose a joint model named MISCA.
Our MISCA introduces an intent-slot co-attention mechanism and, as an underlying layer, a label attention mechanism (see the illustrative sketch after this entry).
arXiv Detail & Related papers (2023-12-10T03:38:41Z)
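The MISCA entry above names an intent-slot co-attention mechanism without describing it here. The sketch below is a minimal, hypothetical co-attention layer between token states and learned intent label embeddings, written only to illustrate the general idea; the class name, shapes, residual updates, and scaling are assumptions and are not taken from the MISCA paper.

```python
import torch
import torch.nn as nn


class IntentSlotCoAttention(nn.Module):
    """Co-attention between token (slot) states and learned intent label embeddings.

    A hypothetical sketch of the general idea only, not the MISCA implementation.
    """

    def __init__(self, hidden_dim, num_intents):
        super().__init__()
        self.intent_emb = nn.Parameter(torch.randn(num_intents, hidden_dim))
        self.scale = hidden_dim ** 0.5

    def forward(self, token_states):
        # token_states: (B, T, H) contextual token representations.
        batch = token_states.size(0)
        intents = self.intent_emb.unsqueeze(0).expand(batch, -1, -1)       # (B, I, H)

        # Affinity between every token and every intent label.
        affinity = torch.bmm(token_states, intents.transpose(1, 2)) / self.scale  # (B, T, I)

        # Tokens attend over intent labels; intent labels attend over tokens.
        token_to_intent = torch.softmax(affinity, dim=-1)                   # (B, T, I)
        intent_to_token = torch.softmax(affinity.transpose(1, 2), dim=-1)   # (B, I, T)

        # Residual updates in both directions.
        intent_aware_tokens = token_states + torch.bmm(token_to_intent, intents)  # (B, T, H)
        token_aware_intents = intents + torch.bmm(intent_to_token, token_states)  # (B, I, H)
        return intent_aware_tokens, token_aware_intents
```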
- Joint Multiple Intent Detection and Slot Filling with Supervised Contrastive Learning and Self-Distillation [4.123763595394021]
Multiple intent detection and slot filling are fundamental and crucial tasks in spoken language understanding.
Joint models that can detect intents and extract slots simultaneously are preferred.
We present a method for multiple intent detection and slot filling by addressing these challenges.
arXiv Detail & Related papers (2023-08-28T15:36:33Z)
- Slot Induction via Pre-trained Language Model Probing and Multi-level Contrastive Learning [62.839109775887025]
The Slot Induction (SI) task aims to induce slot boundaries without explicit knowledge of token-level slot annotations.
We propose leveraging Unsupervised Pre-trained Language Model (PLM) Probing and a Contrastive Learning mechanism to exploit unsupervised semantic knowledge extracted from PLMs.
Our approach is shown to be effective on the SI task and capable of bridging the gap with token-level supervised models on two NLU benchmark datasets (see the illustrative sketch after this entry).
arXiv Detail & Related papers (2023-08-09T05:08:57Z)
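The Slot Induction entry above mentions a Contrastive Learning mechanism without giving its form. A common realization is an InfoNCE-style loss that pulls two views of the same span representation together while treating the other spans in the batch as negatives; the sketch below is a generic, hypothetical version (the function name, shapes, and temperature are assumptions, not details from that paper).

```python
import torch
import torch.nn.functional as F


def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss over paired representations.

    anchors, positives: (N, D) tensors where row i of `positives` is the
    positive for row i of `anchors`; every other row serves as a negative.
    A generic sketch of contrastive learning, not the paper's exact objective.
    """
    anchors = F.normalize(anchors, dim=-1)
    positives = F.normalize(positives, dim=-1)

    # Cosine-similarity logits between every anchor and every candidate.
    logits = anchors @ positives.t() / temperature      # (N, N)

    # The matching index on the diagonal is the positive pair.
    targets = torch.arange(anchors.size(0), device=anchors.device)
    return F.cross_entropy(logits, targets)


# Usage sketch: pull two views of the same slot-span representation together.
span_view_a = torch.randn(32, 768)   # e.g. PLM representations of candidate spans
span_view_b = torch.randn(32, 768)   # a second (augmented) view of the same spans
loss = info_nce_loss(span_view_a, span_view_b)
```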
- Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning [47.253370009231645]
We introduce an uncertainty-aware learning pipeline to explore the contradictory information of salient object detection (SOD) and camouflaged object detection (COD).
Our solution leads to both state-of-the-art performance and informative uncertainty estimation.
arXiv Detail & Related papers (2023-07-10T15:49:37Z)
- Semantics-Depth-Symbiosis: Deeply Coupled Semi-Supervised Learning of Semantics and Depth [83.94528876742096]
We tackle the multi-task learning (MTL) problem of two dense tasks, i.e., semantic segmentation and depth estimation, and present a novel attention module called the Cross-Channel Attention Module (CCAM).
In a true symbiotic spirit, we then formulate a novel data augmentation for the semantic segmentation task using predicted depth, called AffineMix, and a simple depth augmentation using predicted semantics, called ColorAug.
Finally, we validate the performance gain of the proposed method on the Cityscapes dataset, which helps us achieve state-of-the-art results for a semi-supervised joint model based on depth and semantics.
arXiv Detail & Related papers (2022-06-21T17:40:55Z)
- Bi-directional Joint Neural Networks for Intent Classification and Slot Filling [5.3361357265365035]
We propose a bi-directional joint model for intent classification and slot filling.
Our model achieves state-of-the-art results on intent classification accuracy and slot filling F1, and significantly improves sentence-level semantic frame accuracy.
arXiv Detail & Related papers (2022-02-26T06:35:21Z)
- Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and Contrastive Meta-Learning [51.03781020616402]
Fine-grained action recognition is attracting increasing attention due to the emerging demand for specific action understanding in real-world applications.
We propose a few-shot fine-grained action recognition problem, aiming to recognize novel fine-grained actions with only a few samples given for each class.
Although progress has been made in coarse-grained actions, existing few-shot recognition methods encounter two issues when handling fine-grained actions.
arXiv Detail & Related papers (2021-08-15T02:21:01Z)
- Generalized Zero-shot Intent Detection via Commonsense Knowledge [5.398580049917152]
We propose RIDE: an intent detection model that leverages commonsense knowledge in an unsupervised fashion to overcome the issue of training data scarcity.
RIDE computes robust and generalizable relationship meta-features that capture deep semantic relationships between utterances and intent labels.
Our extensive experimental analysis on three widely-used intent detection benchmarks shows that relationship meta-features significantly increase the accuracy of detecting both seen and unseen intents.
arXiv Detail & Related papers (2021-02-04T23:36:41Z)
- Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
arXiv Detail & Related papers (2020-06-07T09:24:33Z)