Action-GPT: Leveraging Large-scale Language Models for Improved and
Generalized Zero Shot Action Generation
- URL: http://arxiv.org/abs/2211.15603v2
- Date: Wed, 30 Nov 2022 13:13:29 GMT
- Authors: Sai Shashank Kalakonda, Shubh Maheshwari, Ravi Kiran Sarvadevabhatla
- Abstract summary: Action-GPT is a framework for incorporating Large Language Models into text-based action generation models.
We show that utilizing detailed descriptions instead of the original action phrases leads to better alignment of text and motion spaces.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce Action-GPT, a plug and play framework for incorporating Large
Language Models (LLMs) into text-based action generation models. Action phrases
in current motion capture datasets contain minimal and to-the-point
information. By carefully crafting prompts for LLMs, we generate richer and
fine-grained descriptions of the action. We show that utilizing these detailed
descriptions instead of the original action phrases leads to better alignment
of text and motion spaces. Our experiments show qualitative and quantitative
improvement in the quality of synthesized motions produced by recent
text-to-motion models. Code, pretrained models and sample videos will be made
available at https://actiongpt.github.io
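The prompt-crafting idea described in the abstract can be sketched as follows. This is an illustrative, hypothetical snippet, not the authors' code: the prompt template and the stubbed LLM callable are assumptions, and a real deployment would call an actual LLM API in place of the stub.

```python
# Illustrative sketch of Action-GPT-style prompt expansion: a terse action
# phrase from a motion-capture dataset is wrapped in a description-eliciting
# prompt, and an LLM produces a richer, fine-grained description that is then
# fed to a text-to-motion model instead of the original phrase.

# Hypothetical prompt template (an assumption, not the paper's exact wording).
PROMPT_TEMPLATE = (
    "Describe in detail how a person performs the action '{action}', "
    "including body posture and the movement of arms and legs."
)

def build_prompt(action_phrase: str) -> str:
    """Wrap a terse action phrase in a description-eliciting prompt."""
    return PROMPT_TEMPLATE.format(action=action_phrase)

def expand_action(action_phrase: str, llm) -> str:
    """Query an LLM (any callable: prompt -> text) for a detailed description."""
    return llm(build_prompt(action_phrase))

if __name__ == "__main__":
    # Stand-in for a real LLM call; replace with an API client in practice.
    fake_llm = lambda prompt: f"[LLM completion for: {prompt}]"
    print(expand_action("jumping jacks", fake_llm))
```

Because the framework is described as plug-and-play, the expanded description simply replaces the raw action phrase at the text-encoder input of an existing text-to-motion model.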
Related papers
- MotionLLM: Multimodal Motion-Language Learning with Large Language Models [69.5875073447454]
We propose MotionLLM to achieve single-human and multi-human motion generation, as well as motion captioning.
Specifically, we encode and quantize motions into discrete LLM-understandable tokens, which results in a unified vocabulary consisting of both motion and text tokens.
Our approach is scalable and flexible, allowing easy extension to multi-human motion generation through autoregressive generation of single-human motions.
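The motion tokenization described above can be sketched as a vector-quantization step followed by a vocabulary merge. This is an illustrative toy version under assumed names, not MotionLLM's actual implementation: the codebook, vocabulary size, and offset scheme are all assumptions.

```python
# Toy sketch: quantize continuous pose features into discrete tokens, then
# offset them past the text vocabulary so motion and text tokens share one
# unified token space, as the summary above describes.
import math

def quantize(pose, codebook):
    """Map a pose vector to the index of its nearest codebook entry (VQ step)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(range(len(codebook)), key=lambda i: dist(pose, codebook[i]))

def to_unified_id(motion_token, text_vocab_size):
    """Offset a motion token id so it follows the text vocabulary."""
    return text_vocab_size + motion_token

# Tiny 3-entry codebook of 2-D pose features (purely illustrative).
codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
tok = quantize([0.9, 0.1], codebook)                  # nearest entry is index 1
unified = to_unified_id(tok, text_vocab_size=32000)   # -> 32001
```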
arXiv Detail & Related papers (2024-05-27T09:57:51Z)
- Aligning Actions and Walking to LLM-Generated Textual Descriptions [3.1049440318608568]
Large Language Models (LLMs) have demonstrated remarkable capabilities in various domains.
This work explores the use of LLMs to generate rich textual descriptions for motion sequences, encompassing both actions and walking patterns.
arXiv Detail & Related papers (2024-04-18T13:56:03Z)
- Generating Human Interaction Motions in Scenes with Text Control [66.74298145999909]
We present TeSMo, a method for text-controlled scene-aware motion generation based on denoising diffusion models.
Our approach begins with pre-training a scene-agnostic text-to-motion diffusion model.
To facilitate training, we embed annotated navigation and interaction motions within scenes.
arXiv Detail & Related papers (2024-04-16T16:04:38Z)
- CoMo: Controllable Motion Generation through Language Guided Pose Code Editing [57.882299081820626]
We introduce CoMo, a Controllable Motion generation model, adept at accurately generating and editing motions.
CoMo decomposes motions into discrete and semantically meaningful pose codes.
It autoregressively generates sequences of pose codes, which are then decoded into 3D motions.
arXiv Detail & Related papers (2024-03-20T18:11:10Z)
- Motion Generation from Fine-grained Textual Descriptions [29.033358642532722]
We build a large-scale language-motion dataset specializing in fine-grained textual descriptions, FineHumanML3D.
We design a new text2motion model, FineMotionDiffuse, making full use of fine-grained textual information.
Our evaluation shows that FineMotionDiffuse trained on FineHumanML3D improves FID by a large margin of 0.38, compared with competitive baselines.
arXiv Detail & Related papers (2024-03-20T11:38:30Z)
- OMG: Towards Open-vocabulary Motion Generation via Mixture of Controllers [45.808597624491156]
We present OMG, a novel framework, which enables compelling motion generation from zero-shot open-vocabulary text prompts.
At the pre-training stage, our model improves the generation ability by learning the rich out-of-domain inherent motion traits.
At the fine-tuning stage, we introduce motion ControlNet, which incorporates text prompts as conditioning information.
arXiv Detail & Related papers (2023-12-14T14:31:40Z)
- Real-time Animation Generation and Control on Rigged Models via Large Language Models [50.034712575541434]
We introduce a novel method for real-time animation control and generation on rigged models using natural language input.
We embed a large language model (LLM) in Unity to output structured texts that can be parsed into diverse and realistic animations.
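The "structured texts" idea above suggests the LLM emits a constrained, machine-parseable format that the engine turns into animation commands. The following Python sketch is purely hypothetical: the JSON schema, field names, and validation logic are assumptions for illustration, and the paper's actual Unity-side (C#) parser and format are not shown here.

```python
# Hypothetical sketch: parse constrained LLM output into a validated
# animation command, rejecting any fields outside an assumed schema.
import json

def parse_animation(llm_output: str) -> dict:
    """Parse output like '{"clip": "wave", "speed": 1.5, "loop": true}'
    into a command dict, rejecting unknown fields."""
    allowed = {"clip", "speed", "loop"}
    cmd = json.loads(llm_output)
    if not set(cmd) <= allowed:
        raise ValueError(f"unexpected fields: {set(cmd) - allowed}")
    return cmd

cmd = parse_animation('{"clip": "wave", "speed": 1.5, "loop": true}')
# cmd["clip"] == "wave"
```

Constraining the LLM to a fixed schema like this is one common way to make free-form language output safely consumable by a game engine.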
arXiv Detail & Related papers (2023-10-27T01:36:35Z)
- LLM-grounded Video Diffusion Models [57.23066793349706]
Video diffusion models have emerged as a promising tool for neural video generation.
Current models still struggle with intricate prompts and often generate restricted or incorrect motion.
We introduce LLM-grounded Video Diffusion (LVD) to address these limitations.
Our results demonstrate that LVD significantly outperforms its base video diffusion model.
arXiv Detail & Related papers (2023-09-29T17:54:46Z)
- STOA-VLP: Spatial-Temporal Modeling of Object and Action for Video-Language Pre-training [30.16501510589718]
We propose a pre-training framework that jointly models object and action information across spatial and temporal dimensions.
We design two auxiliary tasks to better incorporate both kinds of information into the pre-training process of the video-language model.
arXiv Detail & Related papers (2023-02-20T03:13:45Z)
- Compositional Video Synthesis with Action Graphs [112.94651460161992]
Videos of actions are complex signals containing rich compositional structure in space and time.
We propose to represent the actions in a graph structure called Action Graph and present the new "Action Graph To Video" synthesis task.
Our generative model for this task (AG2Vid) disentangles motion and appearance features, and by incorporating a scheduling mechanism for actions facilitates a timely and coordinated video generation.
arXiv Detail & Related papers (2020-06-27T09:39:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.