Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
- URL: http://arxiv.org/abs/2512.04677v2
- Date: Fri, 05 Dec 2025 06:32:30 GMT
- Title: Live Avatar: Streaming Real-time Audio-Driven Avatar Generation with Infinite Length
- Authors: Yubo Huang, Hailong Guo, Fangtai Wu, Shifeng Zhang, Shijie Huang, Qijun Gan, Lin Liu, Sirui Zhao, Enhong Chen, Jiaming Liu, Steven Hoi
- Abstract summary: We present Live Avatar, an algorithm-system co-designed framework for efficient, high-fidelity, and infinite-length avatar generation. Live Avatar is the first to achieve practical, real-time, high-fidelity avatar generation at this scale.
- Score: 57.458450695137664
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing diffusion-based video generation methods are fundamentally constrained by sequential computation and long-horizon inconsistency, limiting their practical adoption in real-time, streaming audio-driven avatar synthesis. We present Live Avatar, an algorithm-system co-designed framework that enables efficient, high-fidelity, and infinite-length avatar generation using a 14-billion-parameter diffusion model. Our approach introduces Timestep-forcing Pipeline Parallelism (TPP), a distributed inference paradigm that pipelines denoising steps across multiple GPUs, effectively breaking the autoregressive bottleneck and ensuring stable, low-latency real-time streaming. To further enhance temporal consistency and mitigate identity drift and color artifacts, we propose the Rolling Sink Frame Mechanism (RSFM), which maintains sequence fidelity by dynamically recalibrating appearance using a cached reference image. Additionally, we leverage Self-Forcing Distribution Matching Distillation to facilitate causal, streamable adaptation of large-scale models without sacrificing visual quality. Live Avatar demonstrates state-of-the-art performance, reaching 20 FPS end-to-end generation on 5 H800 GPUs, and, to the best of our knowledge, is the first to achieve practical, real-time, high-fidelity avatar generation at this scale. Our work establishes a new paradigm for deploying advanced diffusion models in industrial long-form video synthesis applications.
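The abstract describes Timestep-forcing Pipeline Parallelism only at a high level. The sketch below is a toy Python simulation of the general idea, assuming one denoising step per device so that frame chunks shift through a fixed-depth pipeline; it is not the authors' implementation, and `denoise_step`, `NUM_STAGES`, and `NUM_CHUNKS` are hypothetical names.

```python
# Toy simulation of the pipeline-parallel idea described above (not the
# authors' code): each "stage" stands in for one GPU that owns a single
# denoising step, and frame chunks shift one stage forward per tick.
from collections import deque

NUM_STAGES = 5   # assumed: one denoising step per GPU, mirroring the 5-GPU setup
NUM_CHUNKS = 8   # assumed: number of audio-conditioned latent chunks to generate


def denoise_step(chunk: str, stage: int) -> str:
    """Placeholder for one diffusion denoising step executed on one GPU."""
    return f"{chunk}>s{stage}"


def run_pipeline() -> list[str]:
    pipeline = deque([None] * NUM_STAGES)  # chunk currently resident in each stage
    outputs, fed = [], 0
    while len(outputs) < NUM_CHUNKS:
        # Every stage applies its own denoising step to the chunk it holds.
        pipeline = deque(
            denoise_step(c, i) if c is not None else None
            for i, c in enumerate(pipeline)
        )
        # The chunk leaving the last stage is fully denoised and can be streamed.
        finished = pipeline.pop()
        if finished is not None:
            outputs.append(finished)
        # A fresh noisy chunk enters the first stage, keeping the pipeline full.
        pipeline.appendleft(f"chunk{fed}" if fed < NUM_CHUNKS else None)
        fed += 1
    return outputs


if __name__ == "__main__":
    for frame in run_pipeline():
        print(frame)  # e.g. chunk0>s0>s1>s2>s3>s4, then chunk1..., one per tick
```

Once the pipeline is full, a fully denoised chunk leaves the last stage on every tick rather than after every full denoising pass, which is the property that makes continuous, low-latency streaming possible in this kind of setup.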
Related papers
- VideoAR: Autoregressive Video Generation via Next-Frame & Scale Prediction [31.191310873846177]
VideoAR is the first large-scale Visual Autoregressive framework for video generation that combines multi-scale next-frame prediction with autoregressive modeling. VideoAR disentangles spatial and temporal dependencies by integrating intra-frame VAR with causal next-frame prediction, supported by a 3D multi-scale tokenizer. Empirically, VideoAR achieves new state-of-the-art results among autoregressive models, improving FVD on UCF-101 from 99.5 to 88.6 while reducing inference steps by over 10x, and reaching a VBench score of 81.74, competitive with diffusion-based methods.
arXiv Detail & Related papers (2026-01-09T17:34:59Z) - StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars [32.75338796722652]
We propose a two-stage autoregressive adaptation and acceleration framework to adapt a high-fidelity human video diffusion model for real-time, interactive streaming. We develop a one-shot, interactive human avatar model capable of generating both natural talking and listening behaviors with coherent gestures. Our method achieves state-of-the-art performance, surpassing existing approaches in generation quality, real-time efficiency, and interaction naturalness.
arXiv Detail & Related papers (2025-12-26T15:41:24Z) - JoyAvatar: Real-time and Infinite Audio-Driven Avatar Generation with Autoregressive Diffusion [19.420963062956222]
JoyAvatar is an audio-driven autoregressive model capable of real-time inference and infinite-length video generation. Our model achieves competitive results in visual quality, temporal consistency, and lip synchronization.
arXiv Detail & Related papers (2025-12-12T10:06:01Z) - StreamDiffusionV2: A Streaming System for Dynamic and Interactive Video Generation [65.90400162290057]
Generative models are reshaping the live-streaming industry by redefining how content is created, styled, and delivered. Recent advances in video diffusion have markedly improved temporal consistency and sampling efficiency for offline generation. By contrast, live online streaming operates under strict service-level objectives (SLOs): time-to-first-frame must be minimal, and every frame must meet a per-frame deadline with low jitter.
arXiv Detail & Related papers (2025-11-10T18:51:28Z) - Uniform Discrete Diffusion with Metric Path for Video Generation [103.86033350602908]
Continuous-space video generation has advanced rapidly, while discrete approaches lag behind due to error accumulation and long-duration inconsistency. We present URSA, a uniform discrete generative modeling framework with a metric path that bridges the gap with continuous approaches for scalable video generation. URSA consistently outperforms existing discrete methods and achieves performance comparable to state-of-the-art continuous diffusion methods.
arXiv Detail & Related papers (2025-10-28T17:59:57Z) - StableAvatar: Infinite-Length Audio-Driven Avatar Video Generation [91.45910771331741]
Current diffusion models for audio-driven avatar video generation struggle to synthesize long videos with natural audio synchronization and identity consistency. This paper presents StableAvatar, the first end-to-end video diffusion transformer that synthesizes infinite-length high-quality videos without post-processing.
arXiv Detail & Related papers (2025-08-11T17:58:24Z) - Self Forcing: Bridging the Train-Test Gap in Autoregressive Video Diffusion [67.94300151774085]
We introduce Self Forcing, a novel training paradigm for autoregressive video diffusion models. It addresses the longstanding issue of exposure bias, where models trained on ground-truth context must, at inference time, generate sequences conditioned on their own imperfect outputs.
arXiv Detail & Related papers (2025-06-09T17:59:55Z) - LLIA -- Enabling Low-Latency Interactive Avatars: Real-Time Audio-Driven Portrait Video Generation with Diffusion Models [17.858801012726445]
Diffusion-based models have gained wide adoption in virtual human generation due to their outstanding expressiveness. We present a novel audio-driven portrait video generation framework based on the diffusion model to address the latency challenges of real-time interactive use. Our model achieves a maximum of 78 FPS at a resolution of 384x384 and 45 FPS at a resolution of 512x512, with an initial video generation latency of 140 ms and 215 ms, respectively.
arXiv Detail & Related papers (2025-06-06T07:09:07Z) - Streaming Generation of Co-Speech Gestures via Accelerated Rolling Diffusion [0.881371061335494]
We introduce Accelerated Rolling Diffusion, a novel framework for streaming gesture generation. RDLA restructures the noise schedule into a stepwise ladder, allowing multiple frames to be denoised simultaneously (a toy sketch of this ladder scheduling pattern appears after this list). This significantly improves sampling efficiency while maintaining motion consistency, achieving up to a 2x speedup.
arXiv Detail & Related papers (2025-03-13T15:54:45Z) - Efficient Long-duration Talking Video Synthesis with Linear Diffusion Transformer under Multimodal Guidance [39.94595889521696]
LetsTalk is a diffusion transformer framework equipped with multimodal guidance and a novel memory bank mechanism. In particular, LetsTalk introduces a noise-regularized memory bank to alleviate error accumulation and sampling artifacts during extended video generation. We show that LetsTalk establishes a new state of the art in generation quality, producing temporally coherent and realistic talking videos.
arXiv Detail & Related papers (2024-11-24T04:46:00Z) - Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution [65.91317390645163]
Upscale-A-Video is a text-guided latent diffusion framework for video upscaling.
It ensures temporal coherence locally by integrating temporal layers into the U-Net and VAE-Decoder, maintaining consistency within short sequences.
It also offers greater flexibility by allowing text prompts to guide texture creation and adjustable noise levels to balance restoration and generation.
arXiv Detail & Related papers (2023-12-11T18:54:52Z)
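As referenced in the Accelerated Rolling Diffusion entry above, a stepwise-ladder noise schedule can be pictured as a sliding window of frames held at staggered noise levels. The sketch below is a toy illustration under that assumption, not the RDLA implementation; `LADDER`, `TOTAL_FRAMES`, and `emit_stream` are hypothetical names.

```python
# Toy illustration of a rolling, stepwise-ladder noise schedule (an assumed
# reading of the RDLA summary above, not the paper's code). A sliding window
# holds frames at staggered noise levels; each update denoises every frame in
# the window by one rung, streams out the cleanest frame, and admits a new
# fully-noisy frame at the back.

LADDER = 4         # assumed: rungs on the ladder == frames denoised per update
TOTAL_FRAMES = 10  # assumed: length of the streamed gesture sequence


def emit_stream() -> list[int]:
    # Each entry is (frame_id, remaining_noise_level); the back is the noisiest.
    window = [(i, i + 1) for i in range(LADDER)]  # pre-filled staggered ladder
    next_id, emitted = LADDER, []
    while len(emitted) < TOTAL_FRAMES:
        # One parallel update removes one rung of noise from every frame.
        window = [(fid, lvl - 1) for fid, lvl in window]
        front, *rest = window           # the front frame is now fully denoised
        assert front[1] == 0
        emitted.append(front[0])
        rest.append((next_id, LADDER))  # a fully-noisy frame joins at the back
        next_id += 1
        window = rest
    return emitted


if __name__ == "__main__":
    print(emit_stream())  # frames are emitted strictly in order: [0, 1, ..., 9]
```

The design point this illustrates is that several frames are refined in the same update while still being emitted one at a time in order, which is how a ladder schedule can raise throughput without breaking streaming order.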