Designing for Engaging Communication Between Parents and Young Adult Children Through Shared Music Experiences
- URL: http://arxiv.org/abs/2508.10907v1
- Date: Wed, 30 Jul 2025 16:34:44 GMT
- Title: Designing for Engaging Communication Between Parents and Young Adult Children Through Shared Music Experiences
- Authors: Euihyeok Lee, Souneil Park, Jin Yu, Seungchul Lee, Seungwoo Kang
- Abstract summary: We develop DJ-Fam, a mobile application that enables parents and children to listen to their favorite songs and use them as conversation starters. From our deployment study with seven families over four weeks in South Korea, we show the potential of DJ-Fam to influence parent-child interaction positively.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper aims to foster social interaction between parents and young adult children living apart via music. Our approach transforms their music-listening moment into an opportunity to listen to the other's favorite songs and enrich interaction in their daily lives. To this end, we explore the current practice and needs of parent-child communication and the experience and perception of music-mediated interaction. Based on the findings, we developed DJ-Fam, a mobile application that enables parents and children to listen to their favorite songs and use them as conversation starters to foster parent-child interaction. From our deployment study with seven families over four weeks in South Korea, we show the potential of DJ-Fam to influence parent-child interaction and their mutual understanding and relationship positively. Specifically, DJ-Fam considerably increases the frequency of communication and diversifies the communication channels and topics, all of which are satisfactory to the participants.
Related papers
- TAVID: Text-Driven Audio-Visual Interactive Dialogue Generation [72.46711449668814]
We introduce TAVID, a unified framework that generates both interactive faces and conversational speech in a synchronized manner.
We evaluate our system across four dimensions: talking face realism, listening head responsiveness, dyadic interaction, and speech quality.
arXiv Detail & Related papers (2025-12-23T12:04:23Z)
- Autiverse: Eliciting Autistic Adolescents' Daily Narratives through AI-guided Multimodal Journaling [21.638838467146467]
We present Autiverse, an AI-guided multimodal journaling app for tablets.
Autiverse elicits key details through a stepwise dialogue with peer-like, customizable AI.
It composes them into an editable four-panel comic strip.
arXiv Detail & Related papers (2025-09-22T08:02:09Z)
- AACessTalk: Fostering Communication between Minimally Verbal Autistic Children and Parents with Contextual Guidance and Card Recommendation [17.30104178658932]
We present AACessTalk, a tablet-based, AI-mediated communication system.
It facilitates meaningful exchanges between an MVA child and a parent.
arXiv Detail & Related papers (2024-09-15T07:23:07Z)
- A Survey of Foundation Models for Music Understanding [60.83532699497597]
This work is one of the early reviews of the intersection of AI techniques and music understanding.
We investigated, analyzed, and tested recent large-scale music foundation models in respect of their music comprehension abilities.
arXiv Detail & Related papers (2024-09-15T03:34:14Z)
- Exploring Parent's Needs for Children-Centered AI to Support Preschoolers' Interactive Storytelling and Reading Activities [52.828843153565984]
AI-based storytelling and reading technologies are becoming increasingly ubiquitous in preschoolers' lives.
This paper investigates how they function in practical storytelling and reading scenarios, and how parents, the most critical stakeholders, experience and perceive them.
Our findings suggest that even though AI-based storytelling and reading technologies provide more immersive and engaging interaction, they still cannot meet parents' expectations due to a series of interactive and algorithmic challenges.
arXiv Detail & Related papers (2024-01-24T20:55:40Z)
- ChaCha: Leveraging Large Language Models to Prompt Children to Share Their Emotions about Personal Events [6.486346903896692]
ChaCha encourages and guides children to share personal events and associated emotions.
ChaCha combines a state machine and large language models (LLMs) to keep the dialogue on track.
arXiv Detail & Related papers (2023-09-21T16:43:17Z)
- Affective Idiosyncratic Responses to Music [63.969810774018775]
We develop methods to measure affective responses to music from over 403M listener comments on a Chinese social music platform.
We test for musical, lyrical, contextual, demographic, and mental health effects that drive listener affective responses.
arXiv Detail & Related papers (2022-10-17T19:57:46Z)
- StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child Interactive Storytelling with Flexible Parental Involvement [61.47157418485633]
We developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences.
A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.
arXiv Detail & Related papers (2022-02-13T04:53:28Z)
- Responsive Listening Head Generation: A Benchmark Dataset and Baseline [58.168958284290156]
We define the responsive listening head generation task as the synthesis of a non-verbal head with motions and expressions reacting to the multiple inputs.
Unlike speech-driven gesture or talking head generation, we introduce more modals in this task, hoping to benefit several research fields.
arXiv Detail & Related papers (2021-12-27T07:18:50Z)
- Expressive Communication: A Common Framework for Evaluating Developments in Generative Models and Steering Interfaces [1.2891210250935146]
This study investigates how developments in both models and user interfaces are important for empowering co-creation.
In an evaluation study with 26 composers creating 100+ pieces of music and listeners providing 1000+ head-to-head comparisons, we find that more expressive models and more steerable interfaces are important.
arXiv Detail & Related papers (2021-11-29T20:57:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.