Seeking Soulmate via Voice: Understanding Promises and Challenges of
Online Synchronized Voice-Based Mobile Dating
- URL: http://arxiv.org/abs/2402.19328v1
- Date: Thu, 29 Feb 2024 16:30:07 GMT
- Title: Seeking Soulmate via Voice: Understanding Promises and Challenges of
Online Synchronized Voice-Based Mobile Dating
- Authors: Chenxinran Shen, Yan Xu, Ray LC, Zhicong Lu
- Abstract summary: We explore a non-traditional voice-based dating app called "Soul".
Unlike traditional platforms that rely heavily on profile information, Soul facilitates user interactions through voice-based communication.
Our findings indicate that voice, acting as a moderator, influences impression management and shapes perceptions between the sender and the receiver of the voice.
- Score: 25.30209978159759
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Online dating has become a popular way for individuals to connect with
potential romantic partners. Many dating apps use personal profiles that
include a headshot and self-description, allowing users to present themselves
and search for compatible matches. However, this traditional model often has
limitations. In this study, we explore a non-traditional voice-based dating app
called "Soul". Unlike traditional platforms that rely heavily on profile
information, Soul facilitates user interactions through voice-based
communication. We conducted semi-structured interviews with 18 dedicated Soul
users to investigate how they engage with the platform and perceive themselves
and others in this unique dating environment. Our findings indicate that the
role of voice as a moderator influences impression management and shapes
perceptions between the sender and the receiver of the voice. Additionally, the
synchronous voice-based and community-based dating model offers benefits to
users in the Chinese cultural context. Our study contributes to understanding
the affordances introduced by voice-based interactions in online dating in
China.
Related papers
- Towards Investigating Biases in Spoken Conversational Search [10.120634413661929]
We review how biases and user attitude changes have been studied in screen-based web search.
We propose an experimental setup with variables, data, and instruments to explore biases in a voice-based setting like Spoken Conversational Search.
arXiv Detail & Related papers (2024-09-02T01:54:33Z)
- Cross-Cultural Validation of Partner Models for Voice User Interfaces [30.810951137239716]
We translate, localize, and evaluate the Partner Modelling Questionnaire (PMQ) for non-English speaking Western (German) and East Asian cohorts.
We find that the scale produces equivalent levels of goodness-of-fit for both our German and Japanese translations, confirming its cross-cultural validity.
We discuss how our translations can open up critical research on cultural similarities and differences in partner model use and design.
arXiv Detail & Related papers (2024-05-15T00:00:36Z)
- Affective Faces for Goal-Driven Dyadic Communication [16.72177738101024]
We introduce a video framework for modeling the association between verbal and non-verbal communication during dyadic conversation.
Our approach retrieves a video of a listener, who has facial expressions that would be socially appropriate given the context.
arXiv Detail & Related papers (2023-01-26T05:00:09Z)
- Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Partner Matters! An Empirical Study on Fusing Personas for Personalized Response Selection in Retrieval-Based Chatbots [51.091235903442715]
This paper makes an attempt to explore the impact of utilizing personas that describe either self or partner speakers on the task of response selection.
Four persona fusion strategies are designed, which assume personas interact with contexts or responses in different ways.
Empirical studies on the Persona-Chat dataset show that the partner personas can improve the accuracy of response selection.
arXiv Detail & Related papers (2021-05-19T10:32:30Z)
- Predicting Relationship Labels and Individual Personality Traits from Telecommunication History in Social Networks using Hawkes Processes [5.668126716715423]
Mobile phones contain a wealth of private information, so we try to keep them secure.
We provide large-scale evidence that the psychological profiles of individuals and their relations with their peers can be predicted from seemingly anonymous communication traces.
arXiv Detail & Related papers (2020-09-04T07:24:49Z)
- Vyaktitv: A Multimodal Peer-to-Peer Hindi Conversations based Dataset for Personality Assessment [50.15466026089435]
We present a novel peer-to-peer Hindi conversation dataset- Vyaktitv.
It consists of high-quality audio and video recordings of the participants, with Hinglish textual transcriptions for each conversation.
The dataset also contains a rich set of socio-demographic features, like income, cultural orientation, amongst several others, for all the participants.
arXiv Detail & Related papers (2020-08-31T17:44:28Z)
- asya: Mindful verbal communication using deep learning [0.0]
asya is a mobile application that consists of deep learning models which analyze spectra of a human voice.
Models can be applied for a variety of areas like customer service improvement, sales effective conversations, and couples therapy.
arXiv Detail & Related papers (2020-08-20T13:37:49Z)
- I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents [69.68400056148336]
We train a goal-oriented model with reinforcement learning against an imitation-learned "chit-chat" model.
We show that both models outperform an inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.
arXiv Detail & Related papers (2020-02-07T16:22:36Z) - VoiceCoach: Interactive Evidence-based Training for Voice Modulation
Skills in Public Speaking [55.366941476863644]
The modulation of voice properties, such as pitch, volume, and speed, is crucial for delivering a successful public speech.
We present VoiceCoach, an interactive evidence-based approach to facilitate the effective training of voice modulation skills.
arXiv Detail & Related papers (2020-01-22T04:52:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.