Imperatives for Virtual Humans
- URL: http://arxiv.org/abs/2004.10014v1
- Date: Sun, 19 Apr 2020 12:47:15 GMT
- Title: Imperatives for Virtual Humans
- Authors: Weizi Li and Jan M. Allbeck
- Abstract summary: This paper presents a framework inspired by natural language constructs that aims to author more reasonable virtual human behaviors.
We focus mainly on object types and properties, quantifiers, determiners, and spatial relations.
- Score: 3.1473798197405944
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Seemingly since the inception of virtual humans, there has been an effort to
make their behaviors more natural and human-like. In addition to improving
the visual quality of movement, there has been considerable research focused on
creating more intelligent virtual characters. This paper presents a framework
inspired by natural language constructs that aims to author more reasonable
virtual human behaviors using structured English input. We focus mainly on
object types and properties, quantifiers, determiners, and spatial relations.
The framework provides a natural, flexible authoring system for simulating
human behaviors.
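The abstract stops short of implementation detail, so the following is a minimal, hypothetical sketch of how a structured-English imperative with a determiner, quantifier, and object property might be resolved against a scene. The toy grammar, the SceneObject model, and resolve_imperative are invented for illustration and are not the paper's actual system.

```python
import re
from dataclasses import dataclass

# Hypothetical scene inventory; the paper's object model (types,
# properties, spatial relations) is richer than this.
@dataclass
class SceneObject:
    name: str
    obj_type: str
    properties: set

SCENE = [
    SceneObject("cup1", "cup", {"red"}),
    SceneObject("cup2", "cup", {"blue"}),
    SceneObject("book1", "book", {"open"}),
]

# Toy grammar: "<verb> (all|a|the|two) [property] <type>",
# with naive plural stripping on the object type.
PATTERN = re.compile(r"(\w+)\s+(all|a|the|two)\s+(?:(\w+)\s+)?(\w+?)s?$")

def resolve_imperative(command: str):
    """Resolve a structured-English imperative to (verb, referent objects)."""
    m = PATTERN.match(command.lower())
    if not m:
        raise ValueError(f"unparsable imperative: {command!r}")
    verb, det, prop, obj_type = m.groups()
    candidates = [o for o in SCENE
                  if o.obj_type == obj_type
                  and (prop is None or prop in o.properties)]
    # The determiner/quantifier decides how many candidates the verb binds.
    if det == "all":
        return verb, candidates
    if det == "two":
        return verb, candidates[:2]
    return verb, candidates[:1]  # "a" / "the": a single referent

print(resolve_imperative("grab the red cup"))  # binds only cup1
print(resolve_imperative("move all cups"))     # binds cup1 and cup2
```

Spatial relations ("near the table", "behind the chair") would extend the same resolution step with geometric predicates over object positions.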
Related papers
- ViRAC: A Vision-Reasoning Agent Head Movement Control Framework in Arbitrary Virtual Environments [0.13654846342364302]
We propose ViRAC, which exploits the common-sense knowledge and reasoning capabilities of large-scale models.
ViRAC produces more natural and context-aware head rotations than recent state-of-the-art techniques.
arXiv Detail & Related papers (2025-02-14T09:46:43Z)
- Motion Generation Review: Exploring Deep Learning for Lifelike Animation with Manifold [4.853986914715961]
Human motion generation involves creating natural sequences of human body poses, widely used in gaming, virtual reality, and human-computer interaction.
Previous work has focused on motion generation based on signals like movement, music, text, or scene background.
Manifold learning offers a solution by reducing data dimensionality and capturing subspaces of effective motion.
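As a concrete, deliberately simplified illustration of that subspace idea, the sketch below fits PCA, the simplest linear stand-in for the manifold-learning methods the review covers, to synthetic pose data. The joint counts and data are assumptions, not the review's setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for motion capture: 500 frames of a skeleton with
# 24 joints x 3 coordinates = 72-D pose vectors, generated from a 3-D
# latent trajectory so the motion truly lies near a low-D manifold.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 500)
latent = np.stack([np.sin(t), np.cos(t), np.sin(2 * t)], axis=1)
mixing = rng.normal(size=(3, 72))
poses = latent @ mixing + 0.01 * rng.normal(size=(500, 72))

# PCA captures a low-dimensional subspace of effective motion; the
# nonlinear manifold methods in the review generalize this step.
pca = PCA(n_components=3).fit(poses)
codes = pca.transform(poses)              # compact motion representation
reconstructed = pca.inverse_transform(codes)

rel_err = np.linalg.norm(poses - reconstructed) / np.linalg.norm(poses)
print(f"explained variance: {pca.explained_variance_ratio_.sum():.3f}")
print(f"relative reconstruction error: {rel_err:.4f}")
```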
arXiv Detail & Related papers (2024-12-12T08:27:15Z)
- Unbounded: A Generative Infinite Game of Character Life Simulation [68.37260000219479]
We introduce the concept of a generative infinite game, a video game that transcends the traditional boundaries of finite, hard-coded systems by using generative models.
We leverage recent advances in generative AI to create Unbounded: a game of character life simulation that is fully encapsulated in generative models.
arXiv Detail & Related papers (2024-10-24T17:59:31Z)
- Human Simulacra: Benchmarking the Personification of Large Language Models [38.21708264569801]
Large language models (LLMs) are recognized as systems that closely mimic aspects of human intelligence.
This paper introduces a framework for constructing virtual characters' life stories from the ground up.
Experimental results demonstrate that our constructed simulacra can produce personified responses that align with their target characters.
arXiv Detail & Related papers (2024-02-28T09:11:14Z)
- Digital Life Project: Autonomous 3D Characters with Social Intelligence [86.2845109451914]
Digital Life Project is a framework utilizing language as the universal medium to build autonomous 3D characters.
Our framework comprises two primary components: SocioMind and MoMat-MoGen.
arXiv Detail & Related papers (2023-12-07T18:58:59Z)
- Visual cognition in multimodal large language models [12.603212933816206]
Recent advances have rekindled interest in the potential of multimodal models to emulate human-like cognitive abilities.
This paper evaluates the current state of vision-based large language models in the domains of intuitive physics, causal reasoning, and intuitive psychology.
arXiv Detail & Related papers (2023-11-27T18:58:34Z)
- User Behavior Simulation with Large Language Model based Agents [116.74368915420065]
We propose an LLM-based agent framework and design a sandbox environment to simulate real user behaviors.
Based on extensive experiments, we find that the behaviors simulated by our method closely match those of real humans.
arXiv Detail & Related papers (2023-06-05T02:58:35Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision these findings pushing research toward more realistic physicality in future VR/AR.
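The entry does not describe the model itself; as a rough illustration of the signal-to-force decoding task, the sketch below uses ridge regression on synthetic data as a stand-in for the paper's learned neural interface. Channel counts, data, and the model choice are all assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Synthetic stand-in: map 8 EMG channels to 5 finger-wise forces.
rng = np.random.default_rng(1)
emg = rng.normal(size=(2000, 8))
true_map = rng.normal(size=(8, 5))
forces = emg @ true_map + 0.05 * rng.normal(size=(2000, 5))

# Train on the first 1500 samples, evaluate on the rest.
model = Ridge(alpha=1.0).fit(emg[:1500], forces[:1500])
pred = model.predict(emg[1500:])
print(f"mean absolute force error: {np.mean(np.abs(pred - forces[1500:])):.3f}")
```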
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures.
The study showed that users perceived gestures created with the proposed approach as a more natural input modality than the alternatives.
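The summary names template matching as the recognition mechanism; below is a minimal sketch of nearest-template matching over finger joint-angle vectors. The templates, threshold, and feature choice are hypothetical, not the tool's actual data.

```python
import numpy as np

# Hypothetical gesture templates: each is a vector of finger joint
# angles (radians) recorded while an author demonstrates a grab
# gesture for a specific virtual object.
TEMPLATES = {
    "pinch_small_cube": np.array([0.9, 0.8, 0.1, 0.1, 0.1]),
    "wrap_bottle":      np.array([0.6, 0.6, 0.6, 0.6, 0.5]),
    "flat_palm_plate":  np.array([0.1, 0.1, 0.1, 0.1, 0.1]),
}

def match_gesture(hand_pose, threshold=0.5):
    """Return the nearest template's name, or None if the closest
    template is farther than the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        dist = np.linalg.norm(hand_pose - template)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# A tracked pose close to the "wrap" template triggers that grab.
print(match_gesture(np.array([0.62, 0.58, 0.61, 0.63, 0.48])))  # wrap_bottle
print(match_gesture(np.array([2.0, 2.0, 2.0, 2.0, 2.0])))       # None
```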
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
- DASH: Modularized Human Manipulation Simulation with Vision and Language for Embodied AI [25.144827619452105]
We present Dynamic and Autonomous Simulated Human (DASH), an embodied virtual human that, given natural language commands, performs grasp-and-stack tasks in a physically-simulated cluttered environment.
By factoring DASH into a vision module, a language module, and manipulation modules of two skill categories, we can mix and match analytical and machine-learning techniques across modules, so that DASH not only performs randomly arranged tasks with a high success rate but also does so under anthropomorphic constraints.
arXiv Detail & Related papers (2021-08-28T00:22:30Z)
- BEHAVIOR: Benchmark for Everyday Household Activities in Virtual, Interactive, and Ecological Environments [70.18430114842094]
We introduce BEHAVIOR, a benchmark for embodied AI with 100 activities in simulation.
These activities are designed to be realistic, diverse, and complex.
We include 500 human demonstrations in virtual reality (VR) to serve as the human ground truth.
arXiv Detail & Related papers (2021-08-06T23:36:23Z)