A Deep-Unfolded Reference-Based RPCA Network For Video
Foreground-Background Separation
- URL: http://arxiv.org/abs/2010.00929v1
- Date: Fri, 2 Oct 2020 11:40:09 GMT
- Title: A Deep-Unfolded Reference-Based RPCA Network For Video
Foreground-Background Separation
- Authors: Huynh Van Luong, Boris Joukovsky, Yonina C. Eldar, Nikos Deligiannis
- Abstract summary: This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA).
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
- Score: 86.35434065681925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep unfolded neural networks are designed by unrolling the iterations of
optimization algorithms. They can be shown to achieve faster convergence and
higher accuracy than their optimization counterparts. This paper proposes a new
deep-unfolding-based network design for the problem of Robust Principal
Component Analysis (RPCA) with application to video foreground-background
separation. Unlike existing designs, our approach focuses on modeling the
temporal correlation between the sparse representations of consecutive video
frames. To this end, we perform the unfolding of an iterative algorithm for
solving reweighted $\ell_1$-$\ell_1$ minimization; this unfolding leads to a
different proximal operator (a.k.a. different activation function) adaptively
learned per neuron. Experimentation using the moving MNIST dataset shows that
the proposed network outperforms a recently proposed state-of-the-art RPCA
network in the task of video foreground-background separation.
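The core mechanism, unrolling the iterations of a sparse-recovery algorithm into network layers with the proximal operator playing the role of a (per-neuron learnable) activation, can be illustrated with a minimal sketch. The code below unrolls plain ISTA with soft-thresholding on a toy compressed-sensing problem; the thresholds are fixed here, whereas the paper learns them adaptively per neuron and solves a reweighted $\ell_1$-$\ell_1$ problem, so this is an illustrative simplification rather than the paper's network:

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1 norm; theta may be a per-neuron vector,
    mirroring the adaptively learned activation described in the abstract."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(y, A, thetas, step):
    """Run len(thetas) unrolled ISTA-style layers.

    Each layer takes a gradient step on ||A x - y||^2 and then applies
    soft-thresholding with that layer's own threshold vector. In a deep
    unfolded network these thresholds (and possibly the matrices) would
    be trained; here they are fixed for illustration.
    """
    x = np.zeros(A.shape[1])
    for theta in thetas:
        x = soft_threshold(x - step * A.T @ (A @ x - y), theta)
    return x

# Toy example: recover a 3-sparse vector from 40 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 50, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L with L the Lipschitz constant
thetas = [np.full(100, 0.01)] * 50         # per-neuron thresholds, shared here
x_hat = unfolded_ista(y, A, thetas, step)
```

In a trained unfolded network each layer would carry its own learned threshold vector (hence a different activation per neuron), which is what distinguishes this family of designs from a generic feed-forward network.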
Related papers
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to the magnitude scale.
arXiv Detail & Related papers (2023-03-16T21:06:13Z) - Enhanced Correlation Matching based Video Frame Interpolation [5.304928339627251]
We propose a novel framework called the Enhanced Correlation Matching based Video Frame Interpolation Network.
The proposed scheme employs the recurrent pyramid architecture that shares the parameters among each pyramid layer for optical flow estimation.
Experimental results demonstrate that the proposed scheme outperforms previous works on 4K video data and low-resolution benchmark datasets, in terms of both objective and subjective quality.
arXiv Detail & Related papers (2021-11-17T02:43:45Z) - Ensemble Neural Representation Networks [10.405976966708744]
Implicit Neural Representation (INR) has attracted considerable attention for storing various types of signals in continuous forms.
We propose a novel sub-optimal ensemble architecture for INR that resolves the aforementioned problems.
We show that the performance of the proposed ensemble INR architecture may decrease if the dimensions of sub-networks increase.
arXiv Detail & Related papers (2021-10-07T12:49:21Z) - Improved CNN-based Learning of Interpolation Filters for Low-Complexity
Inter Prediction in Video Coding [5.46121027847413]
This paper introduces a novel explainable neural network-based inter-prediction scheme.
A novel training framework enables each network branch to resemble a specific fractional shift.
When implemented in the context of the Versatile Video Coding (VVC) test model, 0.77%, 1.27% and 2.25% BD-rate savings can be achieved.
arXiv Detail & Related papers (2021-06-16T16:48:01Z) - A Differential Game Theoretic Neural Optimizer for Training Residual
Networks [29.82841891919951]
We propose a generalized Differential Dynamic Programming (DDP) neural architecture that accepts both residual connections and convolution layers.
The resulting optimal control representation admits a game-theoretic perspective, in which training residual networks can be interpreted as cooperative trajectory optimization on state-augmented systems.
arXiv Detail & Related papers (2020-07-17T10:19:17Z) - Iterative Network for Image Super-Resolution [69.07361550998318]
Single image super-resolution (SISR) has been greatly revitalized by the recent development of convolutional neural networks (CNNs).
This paper provides a new insight into conventional SISR algorithms and proposes a substantially different approach relying on iterative optimization.
A novel iterative super-resolution network (ISRN) is proposed on top of the iterative optimization.
arXiv Detail & Related papers (2020-05-20T11:11:47Z) - Deep Adaptive Inference Networks for Single Image Super-Resolution [72.7304455761067]
Single image super-resolution (SISR) has witnessed tremendous progress in recent years owing to the deployment of deep convolutional neural networks (CNNs).
In this paper, we take a step forward to address this issue by leveraging adaptive inference networks for deep SISR (AdaDSR).
Our AdaDSR involves an SISR model as backbone and a lightweight adapter module which takes image features and resource constraint as input and predicts a map of local network depth.
arXiv Detail & Related papers (2020-04-08T10:08:20Z) - Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z) - Interpretable Deep Recurrent Neural Networks via Unfolding Reweighted
$\ell_1$-$\ell_1$ Minimization: Architecture Design and Generalization
Analysis [19.706363403596196]
This paper develops a novel deep recurrent neural network (coined reweighted-RNN) by unfolding a reweighted $\ell_1$-$\ell_1$ minimization algorithm.
To the best of our knowledge, this is the first deep unfolding method that explores reweighted minimization.
The experimental results on the moving MNIST dataset demonstrate that the proposed deep reweighted-RNN significantly outperforms existing RNN models.
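For intuition about the optimization being unfolded here, a classical iteratively reweighted $\ell_1$ scheme alternates a sparse-coding step with a weight update that penalizes small coefficients more heavily; the reweighted-RNN unfolds a related algorithm into trainable layers. The sketch below is a hand-rolled illustration of the classical scheme, and the parameter choices (`lam`, `eps`, iteration counts) are assumptions of this sketch, not values from the paper:

```python
import numpy as np

def reweighted_l1(y, A, lam=0.05, eps=0.1, n_reweight=4, n_inner=100):
    """Iteratively reweighted l1 minimization (a classical scheme; the
    reweighted-RNN unfolds a related algorithm into network layers).

    Outer loop: update per-coefficient weights w_i = 1/(|x_i| + eps),
    so small coefficients receive a larger l1 penalty next round.
    Inner loop: ISTA on the weighted-l1 problem with the current weights.
    """
    n = A.shape[1]
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L the Lipschitz constant
    x = np.zeros(n)
    w = np.ones(n)
    for _ in range(n_reweight):
        for _ in range(n_inner):
            g = x - step * A.T @ (A @ x - y)             # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam * w, 0.0)
        w = 1.0 / (np.abs(x) + eps)                      # reweighting
    return x

# Toy example: recover a 3-sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 50, 77]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = reweighted_l1(y, A)
```

Unfolding such a scheme assigns each inner iteration its own layer with learnable step sizes and thresholds, which is what yields the per-neuron adaptive activations discussed above.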
arXiv Detail & Related papers (2020-03-18T17:02:10Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient-based learning combined with nonconvexity renders the learned parameters sensitive to initialization.
We propose fusing neighboring layers of deeper networks that are trained from random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.