ARTA: Collection and Classification of Ambiguous Requests and Thoughtful
Actions
- URL: http://arxiv.org/abs/2106.07999v1
- Date: Tue, 15 Jun 2021 09:28:39 GMT
- Title: ARTA: Collection and Classification of Ambiguous Requests and Thoughtful
Actions
- Authors: Shohei Tanaka, Koichiro Yoshino, Katsuhito Sudoh, Satoshi Nakamura
- Abstract summary: Human-assisting systems must take thoughtful, appropriate actions for ambiguous user requests.
We develop a model that classifies ambiguous user requests into corresponding system actions.
Experiments show that the PU learning method achieved better performance than the general positive/negative learning method.
- Score: 35.557857101679296
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-assisting systems such as dialogue systems must take thoughtful,
appropriate actions not only for clear and unambiguous user requests, but also
for ambiguous user requests, even if the users themselves are not aware of
their potential requirements. To construct such a dialogue agent, we collected
a corpus and developed a model that classifies ambiguous user requests into
corresponding system actions. In order to collect a high-quality corpus, we
asked workers to input antecedent user requests whose pre-defined actions could
be regarded as thoughtful. Although multiple actions could be identified as
thoughtful for a single user request, annotating all combinations of user
requests and system actions is impractical. For this reason, we fully annotated
only the test data and left the annotation of the training data incomplete. In
order to train the classification model on such training data, we applied the
positive/unlabeled (PU) learning method, which assumes that only a part of the
data is labeled with positive examples. The experimental results show that the
PU learning method achieved better performance than the general
positive/negative (PN) learning method at classifying thoughtful actions given
an ambiguous user request.
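As an illustration of the training setup described above, the sketch below shows one common PU learning formulation, the non-negative PU (nnPU) risk estimator, applied to scoring (request, action) pairs. The abstract does not specify which PU estimator, model architecture, or class prior the authors used, so everything here (the ActionScorer model, the nnpu_risk function, the prior value, and the toy data) is an assumed, hypothetical setup rather than the paper's actual method.

```python
# Minimal, illustrative sketch of positive/unlabeled (PU) learning with the
# non-negative PU risk estimator. The ARTA abstract states only that a PU
# learning method is applied to incompletely labeled request->action training
# data; the estimator, model, and class prior below are assumptions made for
# illustration, not the authors' exact setup.
import torch
import torch.nn as nn


class ActionScorer(nn.Module):
    """Scores a (request, action) pair encoded as a single feature vector."""

    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)


def nnpu_risk(scores_p, scores_u, prior, loss=nn.functional.softplus):
    """Non-negative PU risk: treats unlabeled pairs as a mixture of positives
    and negatives with positive class prior `prior` (an assumed hyperparameter)."""
    risk_p_pos = loss(-scores_p).mean()         # positive pairs scored as positive
    risk_p_neg = loss(scores_p).mean()          # positive pairs scored as negative
    risk_u_neg = loss(scores_u).mean()          # unlabeled pairs scored as negative
    neg_risk = risk_u_neg - prior * risk_p_neg  # corrected negative-class risk
    # Clamp the corrected term at zero (the "non-negative" fix) so the
    # estimated risk cannot go negative when the model overfits the positives.
    return prior * risk_p_pos + torch.clamp(neg_risk, min=0.0)


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, prior = 32, 0.3                        # feature size and assumed class prior
    model = ActionScorer(dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy stand-ins for encoded (request, action) pairs: x_p are annotated
    # thoughtful pairs, x_u are unannotated request/action combinations.
    x_p, x_u = torch.randn(64, dim) + 0.5, torch.randn(256, dim)
    for step in range(200):
        opt.zero_grad()
        risk = nnpu_risk(model(x_p), model(x_u), prior)
        risk.backward()
        opt.step()
```

The key point this sketch tries to capture is that the unannotated request/action combinations are not treated as outright negatives (as a PN objective would), but as a mixture whose negative-class risk is corrected by the assumed class prior.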
Related papers
- The Art of Saying No: Contextual Noncompliance in Language Models [123.383993700586]
We introduce a comprehensive taxonomy of contextual noncompliance describing when and how models should not comply with user requests.
Our taxonomy spans a wide range of categories including incomplete, unsupported, indeterminate, and humanizing requests.
To test noncompliance capabilities of language models, we use this taxonomy to develop a new evaluation suite of 1000 noncompliance prompts.
arXiv Detail & Related papers (2024-07-02T07:12:51Z)
- What is wrong with you?: Leveraging User Sentiment for Automatic Dialog Evaluation [73.03318027164605]
We propose to use information that can be automatically extracted from the next user utterance as a proxy to measure the quality of the previous system response.
Our model generalizes across both spoken and written open-domain dialog corpora collected from real and paid users.
arXiv Detail & Related papers (2022-03-25T22:09:52Z)
- UserIdentifier: Implicit User Representations for Simple and Effective Personalized Sentiment Analysis [36.162520010250056]
We propose UserIdentifier, a novel scheme for training a single shared model for all users.
Our approach produces personalized responses by adding fixed, non-trainable user identifiers to the input data.
arXiv Detail & Related papers (2021-10-01T00:21:33Z)
- Out-of-Scope Intent Detection with Self-Supervision and Discriminative Training [20.242645823965145]
Out-of-scope intent detection is of practical importance in task-oriented dialogue systems.
We propose a method to train an out-of-scope intent classifier in a fully end-to-end manner by simulating the test scenario in training.
We evaluate our method extensively on four benchmark dialogue datasets and observe significant improvements over state-of-the-art approaches.
arXiv Detail & Related papers (2021-06-16T08:17:18Z)
- Pre-training for low resource speech-to-intent applications [26.093156590824076]
We discuss a user-taught speech-to-intent (S2I) system in this paper.
The user-taught system learns from scratch from the users' spoken input paired with action demonstrations.
In this paper we combine the encoder of an end-to-end ASR system with the prior NMF/capsule network-based user-taught decoder.
arXiv Detail & Related papers (2021-03-30T20:44:29Z)
- Federated Learning of User Authentication Models [69.93965074814292]
We propose Federated User Authentication (FedUA), a framework for privacy-preserving training of machine learning models.
FedUA adopts a federated learning framework to enable a group of users to jointly train a model without sharing the raw inputs.
We show our method is privacy-preserving, scalable with the number of users, and allows new users to be added to training without changing the output layer.
arXiv Detail & Related papers (2020-07-09T08:04:38Z)
- Large-scale Hybrid Approach for Predicting User Satisfaction with Conversational Agents [28.668681892786264]
Measuring user satisfaction level is a challenging task, and a critical component in developing large-scale conversational agent systems.
Human annotation based approaches are easier to control, but hard to scale.
A novel alternative approach is to collect users' direct feedback via a feedback elicitation system embedded in the conversational agent system.
arXiv Detail & Related papers (2020-05-29T16:29:09Z)
- Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users [111.28351584726092]
We consider conversational recommendation for cold-start users, where a system can both ask for attributes from and recommend items to a user interactively.
Our Conversational Thompson Sampling (ConTS) model holistically solves all questions in conversational recommendation by choosing the arm with the maximal reward to play.
arXiv Detail & Related papers (2020-05-23T08:56:37Z)
- Empowering Active Learning to Jointly Optimize System and User Demands [70.66168547821019]
We propose a new active learning approach that jointly optimizes the active learning system (training efficiently) and the user (receiving useful instances).
We study our approach in an educational application, which particularly benefits from this technique as the system needs to rapidly learn to predict the appropriateness of an exercise to a particular user.
We evaluate multiple learning strategies and user types with data from real users and find that our joint approach better satisfies both objectives when alternative methods lead to many unsuitable exercises for end users.
arXiv Detail & Related papers (2020-05-09T16:02:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.