Perspectives from Naive Participants and Experienced Social Science
Researchers on Addressing Embodiment in a Virtual Cyberball Task
- URL: http://arxiv.org/abs/2312.02897v1
- Date: Tue, 5 Dec 2023 17:09:59 GMT
- Title: Perspectives from Naive Participants and Experienced Social Science
Researchers on Addressing Embodiment in a Virtual Cyberball Task
- Authors: Tao Long, Swati Pandita, Andrea Stevenson Won
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We describe the design of an immersive virtual Cyberball task that included
avatar customization, and user feedback on this design. We first created a
prototype of an avatar customization template and added it to a Cyberball
prototype built in the Unity3D game engine. Then, we conducted in-depth user
testing and feedback sessions with 15 Cyberball stakeholders: five naive
participants with no prior knowledge of Cyberball and ten experienced
researchers with extensive experience using the Cyberball paradigm. We report
the divergent perspectives of the two groups on the following design insights:
designing for intuitive use, inclusivity, and realistic experiences versus
minimalism. Participant responses shed light on how system design problems may
contribute to or perpetuate negative experiences when customizing avatars. They
also demonstrate the value of considering multiple stakeholders' feedback in
the design process for virtual reality, presenting a more comprehensive view in
designing future Cyberball prototypes and interactive systems for social
science research.
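The Cyberball paradigm the paper builds on has a simple underlying mechanic: computer-controlled players toss a virtual ball, and inclusion or exclusion is manipulated by how often they throw it to the participant. A minimal sketch of that logic in Python (the player names, toss count, probabilities, and three-toss warm-up are illustrative assumptions, not details of the paper's Unity3D implementation):

```python
import random

def simulate_cyberball(total_tosses=30, inclusion_prob=1/3, excluded=False, seed=0):
    """Simulate one Cyberball session between the participant and two
    computer-controlled players. In the exclusion condition, bots stop
    throwing to the participant after a few warm-up tosses."""
    rng = random.Random(seed)
    players = ["participant", "bot_a", "bot_b"]
    holder = "bot_a"                      # a bot starts with the ball
    received = {p: 0 for p in players}
    for toss in range(total_tosses):
        others = [p for p in players if p != holder]
        if holder == "participant":
            # The participant chooses freely; modeled as a random pick here.
            target = rng.choice(others)
        elif excluded and toss >= 3:
            # Exclusion condition: after the warm-up, bots only throw
            # to each other, ostracizing the participant.
            target = next(p for p in others if p != "participant")
        elif rng.random() < inclusion_prob:
            target = "participant"
        else:
            target = next(p for p in others if p != "participant")
        received[target] += 1
        holder = target
    return received
```

Comparing `received["participant"]` between the two conditions reproduces the ostracism manipulation the paradigm is built around.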
Related papers
- MC-LLaVA: Multi-Concept Personalized Vision-Language Model [51.645660375766575]
This paper proposes the first multi-concept personalization paradigm, MC-LLaVA.
MC-LLaVA employs a multi-concept instruction tuning strategy, effectively integrating multiple concepts in a single training step.
Comprehensive qualitative and quantitative experiments demonstrate that MC-LLaVA can achieve impressive multi-concept personalized responses.
arXiv Detail & Related papers (2025-03-24T16:32:17Z) - Sketch2Code: Evaluating Vision-Language Models for Interactive Web Design Prototyping [55.98643055756135]
We introduce Sketch2Code, a benchmark that evaluates state-of-the-art Vision Language Models (VLMs) on automating the conversion of rudimentary sketches into webpage prototypes.
We analyze ten commercial and open-source models, showing that Sketch2Code is challenging for existing VLMs.
A user study with UI/UX experts reveals a significant preference for proactive question-asking over passive feedback reception.
arXiv Detail & Related papers (2024-10-21T17:39:49Z) - Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval [85.73149096516543]
We address the choice of viewpoint during sketch creation in Fine-Grained Sketch-Based Image Retrieval (FG-SBIR).
A pilot study highlights the system's struggle when query sketches differ in viewpoint from target instances.
To address this, we advocate for a view-aware system that seamlessly accommodates both view-agnostic and view-specific tasks.
arXiv Detail & Related papers (2024-07-01T21:20:44Z) - The Ink Splotch Effect: A Case Study on ChatGPT as a Co-Creative Game
Designer [2.778721019132512]
This paper studies how large language models (LLMs) can act as effective, high-level creative collaborators and "muses" for game design.
Our goal is to determine whether AI-assistance can improve, hinder, or provide an alternative quality to games when compared to the creative intents implemented by human designers.
arXiv Detail & Related papers (2024-03-04T20:14:38Z) - VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z) - GPAvatar: Generalizable and Precise Head Avatar from Image(s) [71.555405205039]
GPAvatar is a framework that reconstructs 3D head avatars from one or several images in a single forward pass.
The proposed method achieves faithful identity reconstruction, precise expression control, and multi-view consistency.
arXiv Detail & Related papers (2024-01-18T18:56:34Z) - Agile Modeling: From Concept to Classifier in Minutes [35.03003329814567]
We introduce the problem of Agile Modeling: the process of turning any subjective visual concept into a computer vision model.
We show through a user study that users can create classifiers with minimal effort in under 30 minutes.
We compare this user-driven process with the traditional crowdsourcing paradigm and find that the crowd's notion of a concept often differs from the user's.
arXiv Detail & Related papers (2023-02-25T01:18:09Z) - Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z) - EgoRenderer: Rendering Human Avatars from Egocentric Camera Images [87.96474006263692]
We present EgoRenderer, a system for rendering full-body neural avatars of a person captured by a wearable, egocentric fisheye camera.
Rendering full-body avatars from such egocentric images comes with unique challenges due to the top-down view and large distortions.
We tackle these challenges by decomposing the rendering process into several steps, including texture synthesis, pose construction, and neural image translation.
arXiv Detail & Related papers (2021-11-24T18:33:02Z) - Game and Simulation Design for Studying Pedestrian-Automated Vehicle
Interactions [1.3764085113103217]
We first present contemporary tools in the field and then propose the design and development of a new application that facilitates pedestrian point of view research.
We conduct a three-step user experience experiment where participants answer questions before and after using the application in various scenarios.
arXiv Detail & Related papers (2021-09-30T15:26:18Z) - Automatic Recommendation of Strategies for Minimizing Discomfort in
Virtual Environments [58.720142291102135]
In this work, we first present a detailed review of possible causes of Cybersickness (CS).
Our system is able to suggest whether the user may be entering a sickness state in the next moments of the application.
The CSPQ (Cybersickness Profile Questionnaire) is also proposed, which is used to identify the player's susceptibility to CS.
arXiv Detail & Related papers (2020-06-27T19:28:48Z) - DeFINE: Delayed Feedback based Immersive Navigation Environment for
Studying Goal-Directed Human Navigation [10.7197371210731]
Delayed Feedback based Immersive Navigation Environment (DeFINE) is a framework that allows for easy creation and administration of navigation tasks.
DeFINE has a built-in capability to provide performance feedback to participants during an experiment.
arXiv Detail & Related papers (2020-03-06T11:00:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.