Example-Driven Intent Prediction with Observers
- URL: http://arxiv.org/abs/2010.08684v2
- Date: Tue, 25 May 2021 02:09:16 GMT
- Title: Example-Driven Intent Prediction with Observers
- Authors: Shikib Mehri and Mihail Eric
- Abstract summary: We focus on the intent classification problem, which aims to identify user intents given utterances addressed to the dialog system.
We propose two approaches for improving the generalizability of utterance classification models: (1) observers and (2) example-driven training.
- Score: 15.615065041164629
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key challenge of dialog systems research is to effectively and efficiently
adapt to new domains. A scalable paradigm for adaptation necessitates the
development of generalizable models that perform well in few-shot settings. In
this paper, we focus on the intent classification problem, which aims to
identify user intents given utterances addressed to the dialog system. We
propose two approaches for improving the generalizability of utterance
classification models: (1) observers and (2) example-driven training. Prior
work has shown that BERT-like models tend to attribute a significant amount of
attention to the [CLS] token, which we hypothesize results in diluted
representations. Observers are tokens that are not attended to, and are an
alternative to the [CLS] token as a semantic representation of utterances.
Example-driven training learns to classify utterances by comparing to examples,
thereby using the underlying encoder as a sentence similarity model. These
methods are complementary; improving the representation through observers
allows the example-driven model to better measure sentence similarities. When
combined, the proposed methods attain state-of-the-art results on three intent
prediction datasets (banking77, clinc150, hwu64) in
both the full data and few-shot (10 examples per intent) settings. Furthermore,
we demonstrate that the proposed approach can transfer to new intents and
across datasets without any additional training.
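To make the two ideas concrete, below is a minimal PyTorch sketch, not the authors' released code: observer_attention_mask builds an attention mask in which appended observer tokens can attend to the utterance but are never attended to, and example_driven_logits scores intents by softmax-normalized similarity between a query representation and representations of labelled example utterances. The function names, the mask convention (True = blocked, as in torch.nn.MultiheadAttention), the self-attention allowance for observers, and the dot-product similarity are all illustrative assumptions rather than details fixed by the paper.

```python
# Minimal sketch of observers + example-driven training; illustrative only.
import torch
import torch.nn.functional as F

def observer_attention_mask(seq_len: int, num_observers: int) -> torch.Tensor:
    """Boolean attention mask (True = attention blocked), in the convention
    of torch.nn.MultiheadAttention's attn_mask argument.

    Observer tokens occupy the last `num_observers` positions: they attend
    to the whole utterance, but no utterance token attends to them, so they
    aggregate context without influencing the other token representations.
    """
    total = seq_len + num_observers
    mask = torch.zeros(total, total, dtype=torch.bool)
    mask[:, seq_len:] = True                        # no one attends TO observers
    obs = torch.arange(seq_len, total)
    mask[obs, obs] = False                          # except each observer to itself
    return mask

def example_driven_logits(query_vec: torch.Tensor,
                          example_vecs: torch.Tensor,
                          example_intents: torch.Tensor,
                          num_intents: int) -> torch.Tensor:
    """Score intents by comparing a query utterance to labelled examples.

    query_vec:       (d,)   pooled observer representation of the query
    example_vecs:    (k, d) pooled representations of k example utterances
    example_intents: (k,)   intent id (0..num_intents-1) of each example
    """
    sims = example_vecs @ query_vec                 # dot-product similarity, (k,)
    probs = F.softmax(sims, dim=0)                  # attention over examples
    logits = torch.zeros(num_intents)
    logits.scatter_add_(0, example_intents, probs)  # sum probability mass per intent
    return logits

# Toy usage with random vectors standing in for encoder outputs:
d, k, n_intents = 8, 5, 3
logits = example_driven_logits(torch.randn(d), torch.randn(k, d),
                               torch.tensor([0, 0, 1, 2, 2]), n_intents)
print(logits.argmax().item())  # predicted intent id
```

Because the query and the examples would pass through the same encoder, the softmax over examples trains the encoder as a sentence-similarity model, which is why the abstract argues that better observer representations directly benefit the example-driven objective.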
Related papers
- Prefer to Classify: Improving Text Classifiers via Auxiliary Preference Learning [76.43827771613127]
In this paper, we investigate task-specific preferences between pairs of input texts as a new alternative form of auxiliary data annotation.
We propose a novel multi-task learning framework, called prefer-to-classify (P2C), which can enjoy the cooperative effect of learning both the given classification task and the auxiliary preferences.
arXiv Detail & Related papers (2023-06-08T04:04:47Z)
- Generalization Properties of Retrieval-based Models [50.35325326050263]
Retrieval-based machine learning methods have enjoyed success on a wide range of problems.
Despite growing literature showcasing the promise of these models, the theoretical underpinning for such models remains underexplored.
We present a formal treatment of retrieval-based models to characterize their generalization ability.
arXiv Detail & Related papers (2022-10-06T00:33:01Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Distant finetuning with discourse relations for stance classification [55.131676584455306]
We propose a new method to extract data with silver labels from raw text to finetune a model for stance classification.
We also propose a 3-stage training framework where the noisy level in the data used for finetuning decreases over different stages.
Our approach ranks 1st among 26 competing teams in the stance classification track of the NLPCC 2021 shared task Argumentative Text Understanding for AI Debater.
arXiv Detail & Related papers (2022-04-27T04:24:35Z)
- Layer-wise Analysis of a Self-supervised Speech Representation Model [26.727775920272205]
Self-supervised learning approaches have been successful for pre-training speech representation models.
However, little is known about the type or extent of information encoded in the pre-trained representations themselves.
arXiv Detail & Related papers (2021-07-10T02:13:25Z)
- Adaptive Prototypical Networks with Label Words and Joint Representation Learning for Few-Shot Relation Classification [17.237331828747006]
This work focuses on few-shot relation classification (FSRC).
We propose an adaptive mixture mechanism to add label words to the representation of the class prototype.
Experiments have been conducted on FewRel under different few-shot (FS) settings.
arXiv Detail & Related papers (2021-01-10T11:25:42Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.