Facilitating reflection in teletandem through automatically generated
conversation metrics and playback video
- URL: http://arxiv.org/abs/2111.08788v2
- Date: Thu, 18 Nov 2021 13:49:01 GMT
- Title: Facilitating reflection in teletandem through automatically generated
conversation metrics and playback video
- Authors: Aparajita Dey-Plissonneau, Hyowon Lee, Michael Scriney, Alan F.
Smeaton, Vincent Pradier, Hamza Riaz
- Score: 4.014717876643502
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This pilot study focuses on a tool called L2L that allows second language
(L2) learners to visualise and analyse their Zoom interactions with native
speakers. L2L uses the Zoom transcript to automatically generate conversation
metrics and its playback feature with timestamps allows students to replay any
chosen portion of the conversation for post-session reflection and self-review.
This exploratory study investigates a seven-week teletandem project in which
undergraduate students at an Irish university learning French (B2) interacted
over Zoom with their peers at a French university learning English (B2+). The
data collected from a survey (N=43) and semi-structured interviews (N=35) show
that the quantitative conversation metrics and the qualitative review of the
synchronous content helped raise students' confidence when engaging with
native speakers. Furthermore, the tool allowed them to set tangible goals for
improving their participation and made them more aware of what, why, and how
they were learning.
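The pipeline described in the abstract (a Zoom transcript in, per-speaker conversation metrics out) can be sketched roughly as follows. This is a minimal illustration, not the authors' L2L implementation; it assumes the transcript is a Zoom-style WebVTT file in which each cue's payload begins with a "Speaker Name:" label.

```python
import re
from collections import defaultdict

# Minimal sketch (not the L2L system itself): derive simple conversation
# metrics from a Zoom-style WebVTT transcript, assuming each cue payload
# starts with a "Speaker Name:" label.

CUE_TIME = re.compile(
    r"(\d{2}):(\d{2}):(\d{2})\.(\d{3}) --> (\d{2}):(\d{2}):(\d{2})\.(\d{3})"
)

def _seconds(h, m, s, ms):
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def conversation_metrics(vtt_text):
    """Return per-speaker talk time (in seconds) and turn counts."""
    talk_time = defaultdict(float)
    turns = defaultdict(int)
    lines = vtt_text.splitlines()
    for i, line in enumerate(lines):
        m = CUE_TIME.search(line)
        if not m or i + 1 >= len(lines):
            continue
        start = _seconds(*m.groups()[:4])
        end = _seconds(*m.groups()[4:])
        # The cue payload follows the timing line.
        speaker, sep, _ = lines[i + 1].partition(":")
        if sep:  # only count cues that name a speaker
            talk_time[speaker.strip()] += end - start
            turns[speaker.strip()] += 1
    return dict(talk_time), dict(turns)

sample = """WEBVTT

1
00:00:01.000 --> 00:00:04.000
Alice: Bonjour, comment vas-tu ?

2
00:00:04.500 --> 00:00:06.000
Bob: Very well, thanks!
"""

times, turns = conversation_metrics(sample)
print(times)  # {'Alice': 3.0, 'Bob': 1.5}
print(turns)  # {'Alice': 1, 'Bob': 1}
```

Per-speaker talk time and turn counts are the kind of raw quantities from which participation ratios, and measures such as the conversational volatility analysed in the related work listed below, could be derived.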
Related papers
- Multilingual Needle in a Haystack: Investigating Long-Context Behavior of Multilingual Large Language Models [22.859955360764275]
We introduce the MultiLingual Needle-in-a-Haystack (MLNeedle) test to assess a model's ability to retrieve relevant information.
We evaluate four state-of-the-art large language models on MLNeedle.
arXiv Detail & Related papers (2024-08-19T17:02:06Z)
- A Tale of Two Languages: Large-Vocabulary Continuous Sign Language Recognition from Spoken Language Supervision [74.972172804514]
We introduce a multi-task Transformer model, CSLR2, that is able to ingest a signing sequence and output in a joint embedding space between signed language and spoken language text.
New dataset annotations provide continuous sign-level annotations for six hours of test videos, and will be made publicly available.
Our model significantly outperforms the previous state of the art on both tasks.
arXiv Detail & Related papers (2024-05-16T17:19:06Z)
- TouchStone: Evaluating Vision-Language Models by Language Models [91.69776377214814]
We propose an evaluation method that uses strong large language models as judges to comprehensively evaluate the various abilities of LVLMs.
We construct a comprehensive visual dialogue dataset TouchStone, consisting of open-world images and questions, covering five major categories of abilities and 27 subtasks.
We demonstrate that powerful LVLMs, such as GPT-4, can effectively score dialogue quality by leveraging their textual capabilities alone.
arXiv Detail & Related papers (2023-08-31T17:52:04Z)
- AutoConv: Automatically Generating Information-seeking Conversations with Large Language Models [74.10293412011455]
We propose AutoConv for synthetic conversation generation.
Specifically, we formulate the conversation generation problem as a language modeling task.
We finetune an LLM with a few human conversations to capture the characteristics of the information-seeking process.
arXiv Detail & Related papers (2023-08-12T08:52:40Z)
- SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with BERT [0.0]
Cross-linguistic transfer is the influence of linguistic structure of a speaker's native language on the successful acquisition of a foreign language.
We find that NLP literature has not given enough attention to the phenomenon of negative transfer.
Our findings call for further research using our novel Transformer-based SLA models.
arXiv Detail & Related papers (2023-05-31T06:22:07Z)
- Cue-CoT: Chain-of-thought Prompting for Responding to In-depth Dialogue Questions with LLMs [59.74002011562726]
We propose a novel linguistic cue-based chain-of-thoughts (Cue-CoT) to provide a more personalized and engaging response.
We build a benchmark with in-depth dialogue questions, consisting of 6 datasets in both Chinese and English.
Empirical results demonstrate our proposed Cue-CoT method outperforms standard prompting methods in terms of both helpfulness and acceptability on all datasets.
arXiv Detail & Related papers (2023-05-19T16:27:43Z)
- Efficiently Aligned Cross-Lingual Transfer Learning for Conversational Tasks using Prompt-Tuning [98.60739735409243]
Cross-lingual transfer of language models trained on high-resource languages like English has been widely studied for many NLP tasks.
We introduce XSGD for cross-lingual alignment pretraining, a parallel and large-scale multilingual conversation dataset.
To facilitate aligned cross-lingual representations, we develop an efficient prompt-tuning-based method for learning alignment prompts.
arXiv Detail & Related papers (2023-04-03T18:46:01Z)
- Analysis of Individual Conversational Volatility in Tandem Telecollaboration for Second Language Learning [7.767962338247332]
Students are grouped into video conference calls while learning the native language of other student(s) on the calls.
This places students in an online environment where the more outgoing can actively contribute and engage in dialogue.
We have built and deployed the L2L system which records timings of conversational utterances from all participants in a call.
We present an analysis of conversational volatility measures for a sample of 19 individual English-speaking students from our university who are learning French, in each of 86 tandem telecollaboration calls over one teaching semester.
arXiv Detail & Related papers (2022-06-28T12:34:00Z)
- Watch and Learn: Mapping Language and Noisy Real-world Videos with Self-supervision [54.73758942064708]
We teach machines to understand visuals and natural language by learning the mapping between sentences and noisy video snippets without explicit annotations.
For training and evaluation, we contribute a new dataset, ApartmenTour, that contains a large number of online videos and subtitles.
arXiv Detail & Related papers (2020-11-19T03:43:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.