Analysis of Individual Conversational Volatility in Tandem
Telecollaboration for Second Language Learning
- URL: http://arxiv.org/abs/2206.13965v1
- Date: Tue, 28 Jun 2022 12:34:00 GMT
- Title: Analysis of Individual Conversational Volatility in Tandem
Telecollaboration for Second Language Learning
- Authors: Alan F. Smeaton, Aparajita Dey-Plissonneau, Hyowon Lee, Mingming Liu,
Michael Scriney
- Abstract summary: Students are grouped into video conference calls while learning the native language of other student(s) on the calls.
This places students in an online environment where the more outgoing can actively contribute and engage in dialogue.
We have built and deployed the L2L system which records timings of conversational utterances from all participants in a call.
We present an analysis of conversational volatility measures for a sample of 19 individual English-speaking students from our University who are learning French, in each of 86 tandem telecollaboration calls over one teaching semester.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Second language learning can be enabled by tandem collaboration where
students are grouped into video conference calls while learning the native
language of other student(s) on the calls. This places students in an online
environment where the more outgoing can actively contribute and engage in
dialogue while those more shy and unsure of their second language skills can
sit back and coast through the calls. We have built and deployed the L2L system
which records timings of conversational utterances from all participants in a
call. We generate visualisations including participation rates and timelines
for each student in each call and present these on a dashboard. We have
recently developed a measure called personal conversational volatility for how
dynamic has been each student's contribution to the dialogue in each call. We
present an analysis of conversational volatility measures for a sample of 19
individual English-speaking students from our University who are learning
French, in each of 86 tandem telecollaboration calls over one teaching
semester. Our analysis shows there is a need to look into the nature of the
interactions and see if the choices of discussion topics assigned to them were
too difficult for some students and that may have influenced their engagement
in some way.
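The abstract does not reproduce the formula behind "personal conversational volatility", only that it is derived from the recorded timings of each participant's utterances. As a hedged illustration of one plausible proxy, not the authors' actual measure, the sketch below treats volatility as the spread of a speaker's per-window speaking share across fixed-length windows of the call; all function names and the window-based definition are assumptions.

```python
import math
from statistics import pstdev

def speaking_share_per_window(utterances, window_s=60.0, total_s=None):
    """Fraction of each fixed-length window one speaker spends talking.

    `utterances` is a list of (start, end) times in seconds for a single
    speaker; `total_s` is the call duration (defaults to the last end time).
    """
    if total_s is None:
        total_s = max(end for _, end in utterances)
    n_windows = max(1, math.ceil(total_s / window_s))
    talk = [0.0] * n_windows
    for start, end in utterances:
        w = int(start // window_s)
        # Accumulate the overlap of this utterance with each window it spans.
        while w < n_windows and w * window_s < end:
            overlap = min(end, (w + 1) * window_s) - max(start, w * window_s)
            talk[w] += overlap
            w += 1
    return [t / window_s for t in talk]

def conversational_volatility(utterances, window_s=60.0, total_s=None):
    """Hypothetical proxy: std. deviation of speaking share across windows."""
    return pstdev(speaking_share_per_window(utterances, window_s, total_s))
```

Under this proxy, a student who speaks at a steady rate throughout the call scores near zero, while one who dominates some windows and goes silent in others scores higher, matching the abstract's notion of how "dynamic" a student's contribution was.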
Related papers
- WavChat: A Survey of Spoken Dialogue Models [66.82775211793547]
Recent advancements in spoken dialogue models, exemplified by systems like GPT-4o, have captured significant attention in the speech domain.
These advanced spoken dialogue models not only comprehend audio, music, and other speech-related features, but also capture stylistic and timbral characteristics in speech.
Despite the progress in spoken dialogue systems, there is a lack of comprehensive surveys that systematically organize and analyze these systems.
arXiv Detail & Related papers (2024-11-15T04:16:45Z)
- OmniFlatten: An End-to-end GPT Model for Seamless Voice Conversation [24.68804661538364]
Full spoken dialogue systems closely mirror human-human interactions, but achieving low latency and natural interaction is a significant challenge.
End-to-end spoken dialogue systems are a promising direction for developing efficient and natural systems.
Audio samples of dialogues generated by OmniFlatten can be found at this web site.
arXiv Detail & Related papers (2024-10-23T11:58:58Z)
- Self-Directed Turing Test for Large Language Models [56.64615470513102]
The Turing test examines whether AIs can exhibit human-like behaviour in natural language conversations.
Traditional Turing tests adopt a rigid dialogue format in which each participant sends only one message per turn.
This paper proposes the Self-Directed Turing Test, which extends the original test with a burst dialogue format.
arXiv Detail & Related papers (2024-08-19T09:57:28Z)
- Conversations as a Source for Teaching Scientific Concepts at Different Education Levels [22.315652391541285]
This paper presents a novel source for facilitating conversational teaching of scientific concepts at various difficulty levels.
We analyse this data source in various ways to show that it offers a diverse array of examples that can be used to generate contextually appropriate responses.
arXiv Detail & Related papers (2024-04-16T11:33:36Z)
- Large Language Model based Situational Dialogues for Second Language Learning [7.450328495455734]
In second language learning, scenario-based conversation practice is important for language learners to achieve fluency in speaking.
To bridge this gap, we propose situational dialogue models for students to engage in conversational practice.
Our situational dialogue models are fine-tuned on large language models (LLMs), with the aim of combining the engaging nature of an open-ended conversation with the focused practice of scenario-based tasks.
arXiv Detail & Related papers (2024-03-29T06:43:55Z)
- Interactive Conversational Head Generation [68.76774230274076]
We introduce a new conversational head generation benchmark for synthesizing the behaviours of a single interlocutor in a face-to-face conversation.
The capability to automatically synthesize interlocutors that can participate in long, multi-turn conversations is vital and offers benefits for various applications.
arXiv Detail & Related papers (2023-07-05T08:06:26Z)
- Multi-Party Chat: Conversational Agents in Group Settings with Humans and Models [39.80729604768669]
We evaluate the ability of language models to act as one or more characters in multi-party conversations.
We find that our new dataset, MultiLIGHT, can help bring significant improvements in the group setting.
arXiv Detail & Related papers (2023-04-26T21:41:17Z)
- ERNIE-SAT: Speech and Text Joint Pretraining for Cross-Lingual Multi-Speaker Text-to-Speech [58.93395189153713]
We extend the pretraining method for cross-lingual multi-speaker speech synthesis tasks.
We propose a speech-text joint pretraining framework, where we randomly mask the spectrogram and the phonemes.
Our model shows great improvements over speaker-embedding-based multi-speaker TTS methods.
arXiv Detail & Related papers (2022-11-07T13:35:16Z)
- Facilitating reflection in teletandem through automatically generated conversation metrics and playback video [4.014717876643502]
This pilot study focuses on a tool that allows second language (L2) learners to visualise and analyse their Zoom interactions with native speakers.
L2L uses the Zoom transcript to automatically generate conversation metrics, and its timestamped playback feature allows students to replay any chosen portion of the conversation for post-session reflection and self-review.
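Zoom exports its transcripts in WebVTT format, where each cue carries a timestamp range followed by a "Speaker: text" line. As a minimal sketch of deriving a per-speaker talk-time metric from such a transcript (not the authors' L2L code, and the exact Zoom field layout here is an assumption):

```python
import re
from collections import defaultdict

# Matches WebVTT cue timings like "00:00:01.000 --> 00:00:04.000".
CUE_RE = re.compile(
    r"(\d{2}):(\d{2}):(\d{2})\.(\d{3}) --> (\d{2}):(\d{2}):(\d{2})\.(\d{3})"
)

def to_seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def talk_time_per_speaker(vtt_text):
    """Total speaking time in seconds per speaker from a VTT transcript.

    Assumes each cue's first text line is formatted "Speaker: utterance",
    as in Zoom-style transcripts.
    """
    totals = defaultdict(float)
    lines = vtt_text.splitlines()
    for i, line in enumerate(lines):
        m = CUE_RE.search(line)
        if m and i + 1 < len(lines) and ":" in lines[i + 1]:
            speaker = lines[i + 1].split(":", 1)[0].strip()
            start = to_seconds(*m.groups()[:4])
            end = to_seconds(*m.groups()[4:])
            totals[speaker] += end - start
    return dict(totals)
```

Dividing each speaker's total by the call duration gives the kind of participation-rate figure the L2L dashboard visualises.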
arXiv Detail & Related papers (2021-11-16T21:33:07Z)
- Video-Grounded Dialogues with Pretrained Generation Language Models [88.15419265622748]
We leverage the power of pre-trained language models for improving video-grounded dialogue.
We propose a framework that formulates video-grounded dialogue as a sequence-to-sequence task.
Our framework allows fine-tuning language models to capture dependencies across multiple modalities.
arXiv Detail & Related papers (2020-06-27T08:24:26Z)
- Attention over Parameters for Dialogue Systems [69.48852519856331]
We learn a dialogue system that independently parameterizes different dialogue skills, and learns to select and combine each of them through Attention over Parameters (AoP).
The experimental results show that this approach achieves competitive performance on a combined dataset of MultiWOZ, In-Car Assistant, and Persona-Chat.
arXiv Detail & Related papers (2020-01-07T03:10:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.