Love in Action: Gamifying Public Video Cameras for Fostering Social Relationships in Real World
- URL: http://arxiv.org/abs/2411.10449v1
- Date: Thu, 31 Oct 2024 02:38:40 GMT
- Authors: Zhang Zhang, Da Li, Geng Wu, Yaoning Li, Xiaobing Sun, Liang Wang
- Abstract summary: We create "Love in Action" (LIA), a body language-based social game utilizing video cameras installed in public spaces.
A two-week field study involving 27 participants shows significant improvements in their social friendships.
User experiences are investigated to highlight the potential of public video cameras as a novel communication medium for socializing in public spaces.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we create "Love in Action" (LIA), a body language-based social game utilizing video cameras installed in public spaces to enhance social relationships in the real world. In the game, participants assume dual roles: requesters, who issue social requests, and performers, who respond to social requests by performing specified body language. To mediate the communication between participants, we build an AI-enhanced video analysis system incorporating multiple visual analysis modules, including person detection, attribute recognition, and action recognition, to assess the quality of the performer's body language. A two-week field study involving 27 participants shows significant improvements in their social friendships, as indicated by self-reported questionnaires. Moreover, user experiences are investigated to highlight the potential of public video cameras as a novel communication medium for socializing in public spaces.
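The abstract describes a pipeline that chains several visual analysis modules to turn a performer's recorded body language into a quality score. A minimal sketch of such a pipeline is shown below; the module internals here are stand-in stubs (the paper's actual models, thresholds, and scoring formula are not specified in the abstract), and all names such as `score_performance` and `facing_camera` are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    bbox: tuple        # (x, y, w, h) of the detected person
    confidence: float  # detector confidence in [0, 1]

def detect_person(frame) -> List[Detection]:
    # Stub: a real system would run a person detector on the frame.
    return [Detection(bbox=(0, 0, 100, 200), confidence=0.9)]

def recognize_attributes(frame, det: Detection) -> dict:
    # Stub: a real system would run an attribute-recognition model.
    return {"facing_camera": True}

def recognize_action(frames) -> str:
    # Stub: a real system would classify the action from a frame sequence.
    return "wave"

def score_performance(requested_action: str, frames) -> float:
    """Combine module outputs into a body-language quality score in [0, 1]."""
    detections = detect_person(frames[0])
    if not detections:
        return 0.0  # no performer visible
    best = max(detections, key=lambda d: d.confidence)
    attrs = recognize_attributes(frames[0], best)
    action = recognize_action(frames)
    # Hypothetical scoring: discount partial visibility, require action match.
    action_match = 1.0 if action == requested_action else 0.0
    visibility = 1.0 if attrs.get("facing_camera") else 0.5
    return best.confidence * visibility * action_match

print(score_performance("wave", ["frame0", "frame1"]))  # stubs yield 0.9
```

The multiplicative combination is only one plausible design choice; the paper may weight or gate the module outputs differently.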
Related papers
- The Rise of AI Agent Communities: Large-Scale Analysis of Discourse and Interaction on Moltbook [62.2627874717318]
Moltbook is a Reddit-like social platform where AI agents create posts and interact with other agents through comments and replies. Using a public API snapshot collected about five days after launch, we address three research questions: what AI agents discuss, how they post, and how they interact. We show that agents' writing is predominantly neutral, with positivity appearing in community engagement and assistance-oriented content.
arXiv Detail & Related papers (2026-02-13T05:28:31Z) - CompanionCast: A Multi-Agent Conversational AI Framework with Spatial Audio for Social Co-Viewing Experiences [10.985715950187519]
Social presence is central to the enjoyment of watching content together, yet modern media consumption is increasingly solitary. We investigate whether multi-agent conversational AI systems can recreate the dynamics of shared viewing experiences across diverse content types. We present CompanionCast, a general framework for orchestrating multiple role-specialized AI agents that respond to video content.
arXiv Detail & Related papers (2025-12-11T18:44:44Z) - LIFELONG SOTOPIA: Evaluating Social Intelligence of Language Agents Over Lifelong Social Interactions [4.819825467587802]
We present a novel benchmark, LIFELONG-SOTOPIA, to perform a comprehensive evaluation of language agents. We find that the goal achievement and believability of all of the language models we test decline over the course of the interaction. These findings show that LIFELONG-SOTOPIA can be used to evaluate the social intelligence of language agents over lifelong social interactions.
arXiv Detail & Related papers (2025-06-14T23:57:54Z) - SIV-Bench: A Video Benchmark for Social Interaction Understanding and Reasoning [53.16179295245888]
We introduce SIV-Bench, a novel video benchmark for evaluating the capabilities of Multimodal Large Language Models (MLLMs) across Social Scene Understanding (SSU), Social State Reasoning (SSR), and Social Dynamics Prediction (SDP). SIV-Bench features 2,792 video clips and 8,792 meticulously generated question-answer pairs derived from a human-LLM collaborative pipeline. It also includes a dedicated setup for analyzing the impact of different textual cues: original on-screen text, added dialogue, or no text.
arXiv Detail & Related papers (2025-06-05T05:51:35Z) - Leveraging LLMs with Iterative Loop Structure for Enhanced Social Intelligence in Video Question Answering [13.775516653315103]
Social intelligence is essential for effective communication and adaptive responses.
Current video-based methods for social intelligence rely on general video recognition or emotion recognition techniques.
We propose the Looped Video Debating framework, which integrates Large Language Models with visual information.
arXiv Detail & Related papers (2025-03-27T06:14:21Z) - MimeQA: Towards Socially-Intelligent Nonverbal Foundation Models [27.930709161679424]
We tap into a novel data source rich in nonverbal social interactions: mime videos. We contribute a new dataset called MimeQA, obtained by sourcing 8 hours of video clips from YouTube. We evaluate state-of-the-art video large language models (vLLMs) and find that they achieve low overall accuracy, ranging from 20% to 30%, while humans score 86%.
arXiv Detail & Related papers (2025-02-23T18:05:49Z) - SocialMind: LLM-based Proactive AR Social Assistive System with Human-like Perception for In-situ Live Interactions [3.7400236988012105]
SocialMind is the first proactive AR social assistive system that provides users with in-situ social assistance.
SocialMind employs human-like perception leveraging multi-modal sensors to extract both verbal and nonverbal cues, social factors, and implicit personas.
We show that SocialMind achieves 38.3% higher engagement compared to baselines, and 95% of participants are willing to use SocialMind in their live social interactions.
arXiv Detail & Related papers (2024-12-05T10:19:36Z) - The influence of persona and conversational task on social interactions with a LLM-controlled embodied conversational agent [40.26872152499122]
Embodying an LLM as a virtual human allows users to engage in face-to-face social interactions in Virtual Reality.
The influence of person- and task-related factors in social interactions with LLM-controlled agents remains unclear.
arXiv Detail & Related papers (2024-11-08T15:49:42Z) - Social Support Detection from Social Media Texts [44.096359084699]
Social support, conveyed through a multitude of interactions and platforms such as social media, plays a pivotal role in fostering a sense of belonging.
This paper introduces Social Support Detection (SSD) as a Natural language processing (NLP) task aimed at identifying supportive interactions.
We conducted experiments on a dataset comprising 10,000 YouTube comments.
arXiv Detail & Related papers (2024-11-04T20:23:03Z) - From a Social Cognitive Perspective: Context-aware Visual Social Relationship Recognition [59.57095498284501]
We propose a novel approach that recognizes Contextual Social Relationships (ConSoR) from a social cognitive perspective.
We construct social-aware descriptive language prompts with social relationships for each image.
Impressively, ConSoR outperforms previous methods with a 12.2% gain on the People-in-Social-Context (PISC) dataset and a 9.8% increase on the People-in-Photo-Album (PIPA) benchmark.
arXiv Detail & Related papers (2024-06-12T16:02:28Z) - Designing and Evaluating Dialogue LLMs for Co-Creative Improvised Theatre [48.19823828240628]
This study presents Large Language Models (LLMs) deployed in a month-long live show at the Edinburgh Festival Fringe.
We explore the technical capabilities and constraints of on-the-spot multi-party dialogue.
Our human-in-the-loop methodology underlines the challenges of these LLMs in generating context-relevant responses.
arXiv Detail & Related papers (2024-05-11T23:19:42Z) - SOTOPIA: Interactive Evaluation for Social Intelligence in Language Agents [107.4138224020773]
We present SOTOPIA, an open-ended environment to simulate complex social interactions between artificial agents and humans.
In our environment, agents role-play and interact under a wide variety of scenarios; they coordinate, collaborate, exchange, and compete with each other to achieve complex social goals.
We find that GPT-4 achieves a significantly lower goal completion rate than humans and struggles to exhibit social commonsense reasoning and strategic communication skills.
arXiv Detail & Related papers (2023-10-18T02:27:01Z) - Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations are: the most often used nonverbal cue is speaking activity, the most common computational method is support vector machines, and the most studied interaction setting is meetings of 3-4 persons equipped with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z) - PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception [50.551003004553806]
We create a dataset of physically-grounded abstract social events, PHASE, that resemble a wide range of real-life social interactions.
PHASE is validated with human experiments demonstrating that humans perceive rich interactions in the social events.
As a baseline model, we introduce a Bayesian inverse planning approach, SIMPLE, which outperforms state-of-the-art feed-forward neural networks.
arXiv Detail & Related papers (2021-03-02T18:44:57Z) - Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater users' responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)