REGEN: Real-Time Photorealism Enhancement in Games via a Dual-Stage Generative Network Framework
- URL: http://arxiv.org/abs/2508.17061v1
- Date: Sat, 23 Aug 2025 15:28:05 GMT
- Title: REGEN: Real-Time Photorealism Enhancement in Games via a Dual-Stage Generative Network Framework
- Authors: Stefanos Pasios, Nikos Nikolaidis
- Abstract summary: We present a novel approach for enhancing the photorealism of rendered game frames using generative adversarial networks. We propose Real-time photorealism Enhancement in Games via a dual-stage gEnerative Network framework (REGEN). We demonstrate the effectiveness of our framework on Grand Theft Auto V, showing that the approach achieves visual results comparable to those produced by the robust unpaired Im2Im method.
- Score: 2.478819644330144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealism is an important aspect of modern video games, since it shapes the player experience and simultaneously impacts immersion, narrative engagement, and visual fidelity. Although recent hardware breakthroughs, along with state-of-the-art rendering technologies, have significantly improved the visual realism of video games, achieving true photorealism in dynamic environments at real-time frame rates remains a major challenge due to the tradeoff between visual quality and performance. In this short paper, we present a novel approach for enhancing the photorealism of rendered game frames using generative adversarial networks. To this end, we propose Real-time photorealism Enhancement in Games via a dual-stage gEnerative Network framework (REGEN), which employs a robust unpaired image-to-image (Im2Im) translation model to produce semantically consistent photorealistic frames, thereby transforming the problem into a simpler paired Im2Im translation task. This enables training a lightweight method that achieves real-time inference without compromising visual quality. We demonstrate the effectiveness of our framework on Grand Theft Auto V, showing that the approach achieves visual results comparable to those produced by the robust unpaired Im2Im method while improving inference speed by 32.14 times. Our findings also indicate that the results outperform the photorealism-enhanced frames produced by directly training a lightweight unpaired Im2Im translation method to translate the video game frames towards the visual characteristics of real-world images. Code, pre-trained models, and demos for this work are available at: https://github.com/stefanos50/REGEN.
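To make the dual-stage idea in the abstract concrete, here is a minimal PyTorch sketch of how such a pipeline could look. It is an illustration under stated assumptions, not the authors' implementation: a frozen, heavyweight unpaired Im2Im model (the `teacher` placeholder) produces photorealistic targets for each rendered frame, turning the task into paired translation, and a small student network (`LightweightGenerator`, a name invented here) is then fit to those pseudo-pairs.

```python
# Minimal sketch of a dual-stage photorealism-enhancement pipeline.
# Stage 1: a frozen, robust unpaired Im2Im model turns each rendered
#          frame into a photorealistic target (pseudo ground truth).
# Stage 2: a lightweight generator is trained on the resulting pairs,
#          so only the cheap network is needed at inference time.
# `LightweightGenerator` and all hyperparameters are illustrative
# placeholders, not the REGEN authors' architecture or settings.
import torch
import torch.nn as nn

class LightweightGenerator(nn.Module):
    """Tiny residual conv net; stands in for any real-time-capable model."""
    def __init__(self, width: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Predict a photorealism "correction" on top of the rendered frame.
        return x + self.body(x)

def distill(teacher: nn.Module, frame_loader, epochs: int = 10) -> nn.Module:
    """Train a lightweight student on (rendered, teacher(rendered)) pairs."""
    teacher.eval()
    student = LightweightGenerator()
    opt = torch.optim.Adam(student.parameters(), lr=2e-4)
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for frames in frame_loader:             # (B, 3, H, W) game frames
            with torch.no_grad():
                targets = teacher(frames)       # stage 1: pseudo targets
            loss = l1(student(frames), targets) # stage 2: paired supervision
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```

At inference time only the student runs per frame, which is the source of the speedup the abstract reports over running the heavy unpaired model directly.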
Related papers
- ViSA: 3D-Aware Video Shading for Real-Time Upper-Body Avatar Creation [62.86900540547787]
Current 3D avatar generation methods often suffer from artifacts such as blurry textures and stiff, unnatural motion.
We propose a novel approach that combines the strengths of both paradigms.
By uniting the geometric stability of 3D reconstruction with the generative capabilities of video models, our method produces high-fidelity digital avatars.
arXiv Detail & Related papers (2025-12-08T17:10:29Z)
- RealGen: Photorealistic Text-to-Image Generation via Detector-Guided Rewards [53.25632969696776]
We propose RealGen, a text-to-image framework for photorealistic image generation.
Inspired by adversarial generation, RealGen introduces a "Detector Reward" mechanism, which quantifies artifacts and assesses realism.
Experiments demonstrate that RealGen significantly outperforms general models like GPT-Image-1 and Qwen-Image, as well as specialized photorealistic models like FLUX-Krea.
arXiv Detail & Related papers (2025-11-29T12:52:26Z)
- Every Painting Awakened: A Training-free Framework for Painting-to-Animation Generation [25.834500552609136]
We introduce a training-free framework specifically designed to bring real-world static paintings to life through image-to-video (I2V) synthesis.
Existing I2V methods, primarily trained on natural video datasets, often struggle to generate dynamic outputs from static paintings.
Our framework enables plug-and-play integration with existing I2V methods, making it an ideal solution for animating real-world paintings.
arXiv Detail & Related papers (2025-03-31T05:25:49Z)
- Directing Mamba to Complex Textures: An Efficient Texture-Aware State Space Model for Image Restoration [75.51789992466183]
TAMambaIR simultaneously perceives image textures and achieves a trade-off between performance and efficiency.
Extensive experiments on benchmarks for image super-resolution, deraining, and low-light image enhancement demonstrate that TAMambaIR achieves state-of-the-art performance with significantly improved efficiency.
arXiv Detail & Related papers (2025-01-27T23:53:49Z)
- FashionR2R: Texture-preserving Rendered-to-Real Image Translation with Diffusion Models [14.596090302381647]
This paper studies photorealism enhancement of rendered images, leveraging the generative power of diffusion models on the controlled basis of rendering.
We introduce a novel framework to translate rendered images into their realistic counterparts, which consists of two stages: Domain Knowledge Injection (DKI) and Realistic Image Generation (RIG); see the sketch after this entry.
arXiv Detail & Related papers (2024-10-18T12:48:22Z)
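The FashionR2R entry above describes a two-stage rendered-to-real pipeline built on diffusion models. As a loose, hypothetical illustration of what its second (Realistic Image Generation) stage amounts to in practice, the snippet below runs a stock img2img diffusion pass over a rendered frame with the Hugging Face diffusers library; the model ID, prompt, and strength value are assumptions, not the paper's settings.

```python
# Generic diffusion-based rendered-to-real translation, in the spirit of
# the RIG stage summarized above. This is NOT the paper's code: it simply
# applies an off-the-shelf img2img diffusion pass to a rendered frame.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

rendered = Image.open("rendered_frame.png").convert("RGB").resize((512, 512))

# Low strength keeps more of the rendered content (texture preservation);
# higher strength trades fidelity to the input for added realism.
realistic = pipe(
    prompt="a photorealistic photo",
    image=rendered,
    strength=0.35,
    guidance_scale=7.5,
).images[0]
realistic.save("realistic_frame.png")
```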
- Real-Time Neural Rasterization for Large Scenes [39.198327570559684]
We propose a new method for realistic real-time novel-view synthesis of large scenes.
Existing neural rendering methods generate realistic results, but primarily work for small-scale scenes.
Our work is the first to enable real-time rendering of large real-world scenes.
arXiv Detail & Related papers (2023-11-09T18:59:10Z)
- Towards Practical Capture of High-Fidelity Relightable Avatars [60.25823986199208]
TRAvatar is trained with dynamic image sequences captured in a Light Stage under varying lighting conditions.
It can predict the appearance in real time with a single forward pass, achieving high-quality relighting effects.
Our framework achieves superior performance for photorealistic avatar animation and relighting.
arXiv Detail & Related papers (2023-09-08T10:26:29Z)
- Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers [13.894134334543363]
We propose a novel framework based on deep learning to build a real-time inverse graphics encoder.
Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer.
Our framework enables novel applications where consumers can virtually try-on a novel unknown product from an inspirational reference image.
arXiv Detail & Related papers (2022-05-12T18:44:00Z)
- DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer [78.91753256634453]
We consider the challenging problem of predicting intrinsic object properties from a single image by exploiting differentiable renderers.
In this work, we propose DIBR++, a hybrid differentiable renderer which supports these effects by combining rasterization and ray-tracing.
Compared to more advanced physics-based differentiable renderers, DIBR++ is highly performant due to its compact and expressive model.
arXiv Detail & Related papers (2021-10-30T01:59:39Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Deep CG2Real: Synthetic-to-Real Translation via Image Disentanglement [78.58603635621591]
Training an unpaired synthetic-to-real translation network in image space is severely under-constrained.
We propose a semi-supervised approach that operates on the disentangled shading and albedo layers of the image.
Our two-stage pipeline first learns to predict accurate shading in a supervised fashion using physically-based renderings as targets; see the sketch after this entry.
arXiv Detail & Related papers (2020-03-27T21:45:41Z)
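The Deep CG2Real summary above relies on the standard intrinsic-image model, image = albedo * shading, with the first stage supervising shading against physically-based renderings. Below is a minimal, hypothetical PyTorch sketch of that supervised shading step; the network and variable names are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of the supervised first stage described above:
# refine the cheap rasterized shading of a synthetic image towards a
# physically-based rendering target, using the standard intrinsic-image
# model image = albedo * shading. Names and sizes are invented.
import torch
import torch.nn as nn

shading_net = nn.Sequential(        # toy stand-in for the shading predictor
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(shading_net.parameters(), lr=1e-4)
l1 = nn.L1Loss()

def shading_step(cheap_shading: torch.Tensor, pbr_shading: torch.Tensor) -> float:
    """One supervised step; both inputs are (B, 3, H, W) shading layers."""
    loss = l1(shading_net(cheap_shading), pbr_shading)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Recomposing an enhanced image from its disentangled layers:
# enhanced = albedo * shading_net(cheap_shading)
```

The second, unpaired refinement stage mentioned in the entry is not sketched here.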