Speaker and Time-aware Joint Contextual Learning for Dialogue-act
Classification in Counselling Conversations
- URL: http://arxiv.org/abs/2111.06647v1
- Date: Fri, 12 Nov 2021 10:30:30 GMT
- Title: Speaker and Time-aware Joint Contextual Learning for Dialogue-act
Classification in Counselling Conversations
- Authors: Ganeshan Malhotra, Abdul Waheed, Aseem Srivastava, Md Shad Akhtar,
Tanmoy Chakraborty
- Abstract summary: We develop a novel dataset, named HOPE, to provide a platform for the dialogue-act classification in counselling conversations.
We collect 12.9K utterances from publicly-available counselling session videos on YouTube, extract their transcripts, clean, and annotate them with DAC labels.
We propose SPARTA, a transformer-based architecture with a novel speaker- and time-aware contextual learning for the dialogue-act classification.
- Score: 15.230185998553159
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The onset of the COVID-19 pandemic has put people's mental health
at risk. Social counselling has gained remarkable significance in this
environment. Unlike general goal-oriented dialogues, a conversation between a
patient and a therapist is considerably implicit, though the objective of the
conversation is quite apparent. In such a case, understanding the intent of the
patient is imperative in providing effective counselling in therapy sessions,
and the same applies to a dialogue system as well. In this work, we take a
small but important step toward the development of an automated
dialogue system for mental-health counselling. We develop a novel dataset,
named HOPE, to provide a platform for the dialogue-act classification in
counselling conversations. We identify the requirements of such conversations and
propose twelve domain-specific dialogue-act (DAC) labels. We collect 12.9K
utterances from publicly-available counselling session videos on YouTube,
extract their transcripts, then clean and annotate them with DAC labels. Further,
we propose SPARTA, a transformer-based architecture with a novel speaker- and
time-aware contextual learning for the dialogue-act classification. Our
evaluation shows convincing performance over several baselines, achieving
state-of-the-art on HOPE. We also supplement our experiments with extensive
empirical and qualitative analyses of SPARTA.
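The abstract describes SPARTA only at a high level, so the following toy sketch illustrates one plausible reading of "speaker- and time-aware contextual learning" for dialogue-act classification. Everything here is an assumption for illustration, not the paper's actual design: the speaker roles, the exponential recency decay, the additive fusion, and the untrained random classifier head are all hypothetical.

```python
import math
import random

random.seed(0)

DIM = 8        # toy embedding size (assumption)
NUM_ACTS = 12  # HOPE defines twelve domain-specific dialogue-act labels

def rand_vec(dim=DIM):
    """Stand-in for a learned embedding: a random vector."""
    return [random.uniform(-1.0, 1.0) for _ in range(dim)]

# Hypothetical speaker embeddings for the two roles in a counselling session.
SPEAKER_EMB = {"therapist": rand_vec(), "patient": rand_vec()}

def time_aware_context(history, decay=0.5):
    """Sum past utterance vectors weighted by recency: newer turns count more."""
    ctx = [0.0] * DIM
    for age, vec in enumerate(reversed(history)):
        w = math.exp(-decay * age)  # age 0 = most recent turn, weight 1.0
        ctx = [c + w * v for c, v in zip(ctx, vec)]
    return ctx

def encode(utt_vec, speaker, history):
    """Fuse the utterance, its speaker's identity, and time-decayed context."""
    spk = SPEAKER_EMB[speaker]
    ctx = time_aware_context(history)
    return [u + s + c for u, s, c in zip(utt_vec, spk, ctx)]

# Untrained classifier head: one random weight vector per dialogue act.
W = [rand_vec() for _ in range(NUM_ACTS)]

def classify(fused):
    """Score each dialogue-act label by dot product and return the argmax."""
    scores = [sum(w_i * f_i for w_i, f_i in zip(w, fused)) for w in W]
    return scores.index(max(scores))
```

In a real model the embeddings and classifier would be learned by a transformer encoder rather than drawn at random; the sketch only shows how speaker identity and turn recency can be injected into each utterance representation before classification.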
Related papers
- Data Augmentation of Multi-turn Psychological Dialogue via Knowledge-driven Progressive Thought Prompting [46.919537239016734]
Large language models (LLMs) have simplified the implementation of multi-turn dialogues.
It remains challenging to deliver satisfactory performance in low-resource domains, such as psychological dialogue.
We propose a knowledge-driven progressive thought prompting method to guide LLM to generate psychology-related dialogue.
arXiv Detail & Related papers (2024-06-24T12:02:56Z)
- Context Does Matter: Implications for Crowdsourced Evaluation Labels in Task-Oriented Dialogue Systems [57.16442740983528]
Crowdsourced labels play a crucial role in evaluating task-oriented dialogue systems.
Previous studies suggest using only a portion of the dialogue context in the annotation process.
This study investigates the influence of dialogue context on annotation quality.
arXiv Detail & Related papers (2024-04-15T17:56:39Z)
- Can Large Language Models be Used to Provide Psychological Counselling? An Analysis of GPT-4-Generated Responses Using Role-play Dialogues [0.0]
Mental health care poses an increasingly serious challenge to modern societies.
This study collected counseling dialogue data via role-playing scenarios involving expert counselors.
Third-party counselors evaluated the appropriateness of responses from human counselors and those generated by GPT-4 in identical contexts.
arXiv Detail & Related papers (2024-02-20T06:05:36Z)
- SMILE: Single-turn to Multi-turn Inclusive Language Expansion via ChatGPT for Mental Health Support [28.370263099251638]
We introduce SMILE, a single-turn to multi-turn inclusive language expansion technique that prompts ChatGPT to rewrite public single-turn dialogues into multi-turn ones.
We generate a large-scale, diverse, and high-quality dialogue dataset named SmileChat comprising 55,165 dialogues in total with an average of 10.4 turns per dialogue.
To better assess the overall quality of SmileChat, we collect a real-life chat dataset comprising 82 counseling dialogues for model evaluation.
arXiv Detail & Related papers (2023-04-30T11:26:10Z)
- Response-act Guided Reinforced Dialogue Generation for Mental Health Counseling [25.524804770124145]
We present READER, a dialogue-act guided response generator for mental health counseling conversations.
READER is built on a transformer backbone to jointly predict a potential dialogue-act d(t+1) for the next utterance (aka the response-act) and to generate an appropriate response u(t+1).
We evaluate READER on HOPE, a benchmark counseling conversation dataset.
arXiv Detail & Related papers (2023-01-30T08:53:35Z)
- "How Robust r u?": Evaluating Task-Oriented Dialogue Systems on Spoken Conversations [87.95711406978157]
This work presents a new benchmark on spoken task-oriented conversations.
We study multi-domain dialogue state tracking and knowledge-grounded dialogue modeling.
Our data set enables speech-based benchmarking of task-oriented dialogue systems.
arXiv Detail & Related papers (2021-09-28T04:51:04Z)
- Is this Dialogue Coherent? Learning from Dialogue Acts and Entities [82.44143808977209]
We create the Switchboard Coherence (SWBD-Coh) corpus, a dataset of human-human spoken dialogues annotated with turn coherence ratings.
Our statistical analysis of the corpus indicates how turn coherence perception is affected by patterns of distribution of entities.
We find that models combining both DA and entity information yield the best performances both for response selection and turn coherence rating.
arXiv Detail & Related papers (2020-06-17T21:02:40Z)
- Contextual Dialogue Act Classification for Open-Domain Conversational Agents [10.576497782941697]
Classifying the general intent of the user utterance in a conversation, also known as Dialogue Act (DA), is a key step in Natural Language Understanding (NLU) for conversational agents.
We propose CDAC (Contextual Dialogue Act), a simple yet effective deep learning approach for contextual dialogue act classification.
We use transfer learning to adapt models trained on human-human conversations to predict dialogue acts in human-machine dialogues.
arXiv Detail & Related papers (2020-05-28T06:48:10Z)
- You Impress Me: Dialogue Generation via Mutual Persona Perception [62.89449096369027]
The research in cognitive science suggests that understanding is an essential signal for a high-quality chit-chat conversation.
Motivated by this, we propose P2 Bot, a transmitter-receiver based framework with the aim of explicitly modeling understanding.
arXiv Detail & Related papers (2020-04-11T12:51:07Z)
- Attention over Parameters for Dialogue Systems [69.48852519856331]
We present a dialogue system that independently parameterizes different dialogue skills and learns to select and combine them through Attention over Parameters (AoP).
The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat.
arXiv Detail & Related papers (2020-01-07T03:10:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.