GAN-Based Multi-View Video Coding with Spatio-Temporal EPI
Reconstruction
- URL: http://arxiv.org/abs/2205.03599v2
- Date: Fri, 5 May 2023 17:19:31 GMT
- Title: GAN-Based Multi-View Video Coding with Spatio-Temporal EPI
Reconstruction
- Authors: Chengdong Lan, Hao Yan, Cheng Luo, Tiesong Zhao
- Abstract summary: We propose a novel multi-view video coding method that
leverages the image generation capabilities of a Generative Adversarial
Network (GAN).
At the encoder, we construct a spatio-temporal Epipolar Plane Image (EPI) and
further utilize a convolutional network to extract the latent code of a GAN
as Side Information (SI).
At the decoder side, we combine the SI and adjacent viewpoints to reconstruct
intermediate views using the GAN generator.
- Score: 19.919826392704472
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The introduction of multiple viewpoints in video scenes inevitably increases
the bitrates required for storage and transmission. To reduce bitrates,
researchers have developed methods to skip intermediate viewpoints during
compression and delivery, and ultimately reconstruct them using Side
Information (SI). Typically, depth maps are used to construct SI. However,
these methods suffer from inaccuracies in reconstruction and inherently high
bitrates. In this paper, we propose a novel multi-view video coding method that
leverages the image generation capabilities of Generative Adversarial Network
(GAN) to improve the reconstruction accuracy of SI. Additionally, we consider
incorporating information from adjacent temporal and spatial viewpoints to
further reduce SI redundancy. At the encoder, we construct a spatio-temporal
Epipolar Plane Image (EPI) and further utilize a convolutional network to
extract the latent code of a GAN as SI. At the decoder side, we combine the SI
and adjacent viewpoints to reconstruct intermediate views using the GAN
generator. Specifically, we establish a joint encoder constraint for
reconstruction cost and SI entropy to achieve an optimal trade-off between
reconstruction quality and bitrate overhead. Experiments demonstrate
significantly improved Rate-Distortion (RD) performance compared with
state-of-the-art methods.
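The joint encoder constraint amounts to a classic rate-distortion Lagrangian:
distortion of the reconstructed intermediate view plus a rate term on the SI.
Below is a minimal sketch of what such an objective could look like, assuming
a differentiable entropy proxy over SI symbol logits; the interface, names,
and weighting are illustrative, not the authors' implementation.

```python
# Minimal sketch of the joint rate-distortion constraint described above:
# distortion of the GAN-reconstructed view plus a differentiable entropy
# proxy on the side-information latent code. The entropy estimate and all
# names are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def joint_rd_loss(x_view, x_hat, si_logits, lam=0.01):
    """x_view:    ground-truth intermediate view, (B, C, H, W)
    x_hat:     view reconstructed by the GAN generator from SI + neighbors
    si_logits: per-symbol logits of the SI latent code, (B, N, K)
    lam:       Lagrange multiplier trading rate against distortion
    """
    distortion = F.mse_loss(x_hat, x_view)
    p = si_logits.softmax(dim=-1)
    # Shannon entropy of the soft symbol distribution, in bits per symbol,
    # as a differentiable stand-in for the SI bitrate.
    rate = -(p * (p + 1e-9).log2()).sum(dim=-1).mean()
    return distortion + lam * rate
```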
Related papers
- $ε$-VAE: Denoising as Visual Decoding [61.29255979767292]
In generative modeling, tokenization simplifies complex data into compact, structured representations, creating a more efficient, learnable space.
Current visual tokenization methods rely on a traditional autoencoder framework, where the encoder compresses data into latent representations, and the decoder reconstructs the original input.
We propose denoising as decoding, shifting from single-step reconstruction to iterative refinement. Specifically, we replace the decoder with a diffusion process that iteratively refines noise to recover the original image, guided by the latents provided by the encoder.
We evaluate our approach by assessing both reconstruction (rFID) and generation quality (FID).
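The "denoising as decoding" idea replaces a single decoder pass with iterative
refinement conditioned on the encoder's latents. A toy, DDIM-style sketch of
that loop follows; the noise schedule, step count, and the eps_model interface
are all assumptions for illustration.

```python
# Toy DDIM-style decoder: recover an image by iteratively refining noise,
# conditioned on the encoder's latents. Schedule, step count, and the
# eps_model(x_t, t, latents) interface are assumptions for illustration.
import torch

@torch.no_grad()
def denoise_decode(eps_model, latents, shape, steps=50):
    x = torch.randn(shape)                          # start from pure noise
    alphas = torch.linspace(0.9999, 0.0001, steps)  # toy cumulative schedule
    for i in range(steps):
        t = torch.full((shape[0],), 1.0 - i / steps)
        eps = eps_model(x, t, latents)              # predicted noise
        a = alphas[i]
        x0_hat = (x - (1 - a).sqrt() * eps) / a.sqrt()  # predicted clean image
        if i + 1 < steps:
            a_next = alphas[i + 1]
            x = a_next.sqrt() * x0_hat + (1 - a_next).sqrt() * eps  # DDIM step
        else:
            x = x0_hat
    return x
```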
arXiv Detail & Related papers (2024-10-05T08:27:53Z) - In-Domain GAN Inversion for Faithful Reconstruction and Editability [132.68255553099834]
We propose in-domain GAN inversion, which consists of a domain-guided encoder
and a domain-regularized optimizer to regularize the inverted code in the
native latent space of the pre-trained GAN model.
We make comprehensive analyses on the effects of the encoder structure, the starting inversion point, as well as the inversion parameter space, and observe the trade-off between the reconstruction quality and the editing property.
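As a rough illustration of domain-regularized inversion, the sketch below
starts from the encoder's code and then optimizes it with a reconstruction
term plus an encoder-consistency term that keeps the code in the generator's
native latent domain; loss weights and interfaces are assumed, not taken from
the paper.

```python
# Rough sketch of domain-regularized inversion: initialize from the encoder
# (domain-guided), then optimize the code with reconstruction plus an
# encoder-consistency term that keeps it in the generator's native latent
# domain. Weights and interfaces are assumed, not taken from the paper.
import torch
import torch.nn.functional as F

def invert(G, E, x_target, steps=200, lr=0.01, lam_dom=1.0):
    z = E(x_target).detach().requires_grad_(True)  # domain-guided start point
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x_hat = G(z)
        rec = F.mse_loss(x_hat, x_target)   # reconstruction quality
        dom = F.mse_loss(E(x_hat), z)       # domain regularization
        loss = rec + lam_dom * dom
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```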
arXiv Detail & Related papers (2023-09-25T08:42:06Z) - Neural Image Compression Using Masked Sparse Visual Representation [17.229601298529825]
We study neural image compression based on the Sparse Visual Representation (SVR), where images are embedded into a discrete latent space spanned by learned visual codebooks.
By sharing codebooks with the decoder, the encoder needs to transmit only codeword indices, which are efficient and robust across platforms.
We propose a Masked Adaptive Codebook learning (M-AdaCode) method that applies masks to the latent feature subspace to balance bitrate and reconstruction quality.
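A hedged sketch of the masked-codebook idea: latents are quantized to nearest
codewords (only indices need be transmitted, since the decoder shares the
codebook), and a binary mask drops subspace entries to trade bitrate against
reconstruction quality. All names here are illustrative, not the M-AdaCode
implementation.

```python
# Illustrative masked codebook quantization: latents map to nearest
# codewords (only indices need be sent, since the decoder shares the
# codebook); a binary mask drops positions to trade bitrate for quality.
# Names are assumptions, not the M-AdaCode implementation.
import torch

def quantize_with_mask(z, codebook, mask):
    """z: (N, D) latents; codebook: (K, D); mask: (N,) bool, True = keep."""
    dists = torch.cdist(z, codebook)       # (N, K) distances to codewords
    idx = dists.argmin(dim=1)              # codeword indices to transmit
    idx = idx.masked_fill(~mask, -1)       # masked positions send nothing
    z_hat = torch.where(mask.unsqueeze(1),
                        codebook[idx.clamp(min=0)],   # decoder-side lookup
                        torch.zeros_like(z))
    return idx, z_hat
```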
arXiv Detail & Related papers (2023-09-20T21:59:23Z) - VNVC: A Versatile Neural Video Coding Framework for Efficient
Human-Machine Vision [59.632286735304156]
It is more efficient to enhance/analyze the coded representations directly without decoding them into pixels.
We propose a versatile neural video coding (VNVC) framework, which targets learning compact representations to support both reconstruction and direct enhancement/analysis.
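One way to picture such a framework: a single compact latent feeds both a
pixel decoder (for human viewing) and a lightweight task head (for machine
analysis), so analysis can skip pixel reconstruction entirely. The module
shapes below are illustrative assumptions, not the VNVC architecture.

```python
# Sketch of a human-machine codec: one compact latent feeds both a pixel
# decoder (viewing) and a small task head (analysis), so analysis can skip
# pixel reconstruction. Module shapes are illustrative assumptions.
import torch.nn as nn

class VersatileCodecSketch(nn.Module):
    def __init__(self, dim=64, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, dim, 4, stride=4), nn.ReLU())
        self.pixel_decoder = nn.ConvTranspose2d(dim, 3, 4, stride=4)
        self.task_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                       nn.Linear(dim, num_classes))

    def forward(self, x, decode_pixels=False):
        z = self.encoder(x)            # compact coded representation
        y = self.task_head(z)          # analysis directly on the latents
        x_hat = self.pixel_decoder(z) if decode_pixels else None
        return y, x_hat
```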
arXiv Detail & Related papers (2023-06-19T03:04:57Z) - PINs: Progressive Implicit Networks for Multi-Scale Neural
Representations [68.73195473089324]
We propose a progressive positional encoding, exposing a hierarchical structure to incremental sets of frequency encodings.
Our model accurately reconstructs scenes with wide frequency bands and learns a scene representation at progressive levels of detail.
Experiments on several 2D and 3D datasets show improvements in reconstruction accuracy, representational capacity and training speed compared to baselines.
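A small sketch of what a progressive positional encoding can look like:
sin/cos frequency bands are gated on incrementally, coarse to fine, so the
network fits low frequencies first. The band count and gating schedule are
assumptions, not the paper's exact scheme.

```python
# Sketch of progressive positional encoding: sin/cos frequency bands are
# gated on coarse-to-fine so low frequencies are fit first. Band count and
# gating schedule are assumptions, not the paper's exact scheme.
import torch

def progressive_encoding(x, num_bands=8, active_bands=3):
    """x: (..., d) coordinates; higher bands are zeroed until activated."""
    feats = []
    for j in range(num_bands):
        gate = 1.0 if j < active_bands else 0.0  # reveal bands incrementally
        f = (2.0 ** j) * torch.pi * x
        feats += [gate * f.sin(), gate * f.cos()]
    return torch.cat(feats, dim=-1)
```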
arXiv Detail & Related papers (2022-02-09T20:33:37Z) - Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using Fashion-MNIST.
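One plausible form of such a penalty, sketched under the assumption that
redundancy is measured as pairwise feature correlation: penalize the
off-diagonal of the bottleneck correlation matrix.

```python
# One plausible redundancy penalty: discourage correlated bottleneck
# features by penalizing off-diagonal entries of their correlation matrix.
# A sketch of the idea, not the paper's exact scheme.
import torch

def redundancy_penalty(h):
    """h: (batch, features) bottleneck activations."""
    h = (h - h.mean(dim=0)) / (h.std(dim=0) + 1e-6)  # standardize features
    corr = (h.T @ h) / h.shape[0]                    # feature correlations
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).sum()
```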
arXiv Detail & Related papers (2022-02-09T18:48:02Z) - Spectral Compressive Imaging Reconstruction Using Convolution and
Contextual Transformer [6.929652454131988]
We propose a hybrid network module, namely the CCoT (Convolution and
Contextual Transformer) block, which can simultaneously acquire the inductive
bias of convolution and the powerful modeling ability of transformers.
We integrate the proposed CCoT block into a deep unfolding framework based on
the generalized alternating projection algorithm, and further propose the
GAP-CT network.
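For readers unfamiliar with deep unfolding, the generalized alternating
projection pattern alternates a data-consistency step with a learned prior;
in GAP-CT the prior stage would be the CCoT-based network. The sketch below
uses a generic denoiser as a stand-in and assumes a simple matrix sensing
model.

```python
# Generalized alternating projection (GAP) unfolding pattern: alternate a
# data-consistency step with a learned prior; in GAP-CT that prior stage
# would be the CCoT-based network. A generic denoiser and a simple matrix
# sensing model stand in here as assumptions.
import torch

def gap_unfold(y, Phi, denoiser, iters=9):
    """y: (M,) measurements; Phi: (M, N) sensing matrix; denoiser: prior."""
    x = Phi.T @ y                                # initial estimate
    row_energy = (Phi * Phi).sum(dim=1)          # normalizer for projection
    for _ in range(iters):
        x = x + Phi.T @ ((y - Phi @ x) / row_energy)  # data-consistency step
        x = denoiser(x)                               # learned prior stage
    return x
```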
arXiv Detail & Related papers (2022-01-15T06:30:03Z) - Deep Video Coding with Dual-Path Generative Adversarial Network [39.19042551896408]
This paper proposes an efficient codec, namely the dual-path generative adversarial network-based video codec (DGVC).
Our DGVC reduces the average bit-per-pixel (bpp) by 39.39%/54.92% at the same PSNR/MS-SSIM.
arXiv Detail & Related papers (2021-11-29T11:39:28Z) - Decomposition, Compression, and Synthesis (DCS)-based Video Coding: A
Neural Exploration via Resolution-Adaptive Learning [30.54722074562783]
We decompose the input video into spatial texture frames (STF) at its native spatial resolution and temporal motion frames (TMF) at a lower resolution.
Then, we compress them together using any popular video coder.
Finally, we synthesize decoded STFs and TMFs for high-quality video reconstruction at the same resolution as its native input.
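A schematic of that decompose/compress/synthesize pipeline follows, with the
codec and the synthesis network abstracted as callables; the frame split,
scale factor, and all names are placeholders rather than the paper's actual
design.

```python
# Schematic of the decompose/compress/synthesize pipeline: texture frames at
# native resolution, motion frames downscaled, both coded by an external
# codec, then fused back to native resolution. The frame split, scale, and
# all callables are placeholders, not the paper's actual design.
import torch.nn.functional as F

def dcs_pipeline(frames, codec, fuse_net, scale=0.5):
    """frames: (T, C, H, W); codec: lossy encode+decode; fuse_net: synthesis."""
    stf = frames[0::2]                              # texture frames, native res
    tmf = F.interpolate(frames[1::2], scale_factor=scale)  # motion, low res
    stf_dec, tmf_dec = codec(stf), codec(tmf)       # any popular video coder
    tmf_up = F.interpolate(tmf_dec, size=frames.shape[-2:])
    return fuse_net(stf_dec, tmf_up)                # synthesized full video
```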
arXiv Detail & Related papers (2020-12-01T17:23:53Z) - End-to-End JPEG Decoding and Artifacts Suppression Using Heterogeneous
Residual Convolutional Neural Network [0.0]
Existing deep learning models separate JPEG artifact suppression from the decoding protocol as an independent task.
We take one step forward to design a true end-to-end heterogeneous residual convolutional neural network (HR-CNN) with spectrum decomposition and heterogeneous reconstruction mechanism.
arXiv Detail & Related papers (2020-07-01T17:44:00Z) - Modeling Lost Information in Lossy Image Compression [72.69327382643549]
Lossy image compression is one of the most commonly used operators for digital images.
We propose a novel invertible framework called Invertible Lossy Compression (ILC) to largely mitigate the information loss problem.
arXiv Detail & Related papers (2020-06-22T04:04:56Z)