Dynamic Background Subtraction by Generative Neural Networks
- URL: http://arxiv.org/abs/2202.05336v1
- Date: Thu, 10 Feb 2022 21:29:10 GMT
- Title: Dynamic Background Subtraction by Generative Neural Networks
- Authors: Fateme Bahri and Nilanjan Ray
- Abstract summary: We propose a new background subtraction method called DBSGen.
It uses two generative neural networks, one for dynamic motion removal and another for background generation.
The proposed method has a unified framework that can be optimized in an end-to-end and unsupervised fashion.
- Score: 8.75682288556859
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Background subtraction is a significant task in computer vision and an
essential step for many real-world applications. One of the challenges for
background subtraction methods is dynamic background, which consists of
stochastic movements in some parts of the background. In this paper, we propose
a new background subtraction method, called DBSGen, which uses two generative
neural networks, one for dynamic motion removal and another for background
generation. Finally, the foreground moving objects are obtained by a pixel-wise
distance threshold based on a dynamic entropy map. The proposed method has a
unified framework that can be optimized in an end-to-end and unsupervised
fashion. The performance of the method is evaluated over dynamic background
sequences, and it outperforms most state-of-the-art methods. Our code is
publicly available at https://github.com/FatemeBahri/DBSGen.
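The final segmentation step described above, a pixel-wise distance threshold modulated by a dynamic entropy map, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation; `bins`, `base_tau`, and `alpha` are hypothetical parameters, and the way DBSGen derives its entropy map may differ:

```python
import numpy as np

def dynamic_entropy_map(history, bins=16):
    """Per-pixel Shannon entropy over a stack of past frames.

    history: array of shape (T, H, W) with values in [0, 1].
    Returns an (H, W) entropy map normalized to [0, 1]; pixels with
    stochastic (dynamic) backgrounds get values close to 1.
    """
    T, H, W = history.shape
    # Quantize each pixel's temporal samples into `bins` intensity levels.
    q = np.clip((history * bins).astype(int), 0, bins - 1)
    entropy = np.zeros((H, W))
    for b in range(bins):
        p = (q == b).mean(axis=0)          # per-pixel probability of level b
        nz = p > 0
        entropy[nz] -= p[nz] * np.log2(p[nz])
    return entropy / np.log2(bins)         # normalize to [0, 1]

def foreground_mask(frame, background, entropy, base_tau=0.1, alpha=0.2):
    """Pixel-wise distance threshold, loosened where the background is dynamic."""
    tau = base_tau * (1.0 + alpha * entropy)   # higher entropy -> higher threshold
    return np.abs(frame - background) > tau
```

Raising the threshold in high-entropy regions suppresses false positives caused by stochastic background motion (e.g. waving trees or water) while keeping a tight threshold over static regions.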
Related papers
- MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion [118.74385965694694]
We present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes.
By simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R's representation, previously only used for static scenes, to dynamic scenes.
We show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics.
arXiv Detail & Related papers (2024-10-04T18:00:07Z)
- Weakly Supervised Realtime Dynamic Background Subtraction [8.75682288556859]
We propose a weakly supervised framework that can perform background subtraction without requiring per-pixel ground-truth labels.
Our framework is trained on a moving object-free sequence of images and comprises two networks.
Our proposed method is online, real-time, efficient, and requires minimal frame-level annotation.
arXiv Detail & Related papers (2023-03-06T03:17:48Z)
- Dyna-DepthFormer: Multi-frame Transformer for Self-Supervised Depth Estimation in Dynamic Scenes [19.810725397641406]
We propose a novel Dyna-Depthformer framework, which predicts scene depth and 3D motion field jointly.
Our contributions are two-fold. First, we leverage multi-view correlation through a series of self- and cross-attention layers in order to obtain enhanced depth feature representation.
Second, we propose a warping-based Motion Network to estimate the motion field of dynamic objects without using semantic prior.
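The self- and cross-attention layers mentioned above build on scaled dot-product attention; a generic sketch of a single cross-attention step between two views is shown below. The shapes and names are illustrative assumptions, not the paper's actual layer:

```python
import numpy as np

def cross_attention(q_feat, k_feat):
    """Scaled dot-product cross-attention: query features from one view
    attend to key/value features from another view.

    q_feat: (Nq, d) reference-view features, k_feat: (Nk, d) source-view
    features (used here as both keys and values for simplicity).
    """
    d = q_feat.shape[-1]
    scores = q_feat @ k_feat.T / np.sqrt(d)          # (Nq, Nk) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ k_feat                          # aggregated source features
```

In a multi-view depth network, the aggregated features would be fused with the reference-view representation before the depth head; here the value projection and multi-head split are omitted for brevity.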
arXiv Detail & Related papers (2023-01-14T09:43:23Z)
- Neural Motion Fields: Encoding Grasp Trajectories as Implicit Value Functions [65.84090965167535]
We present Neural Motion Fields, a novel object representation which encodes both object point clouds and the relative task trajectories as an implicit value function parameterized by a neural network.
This object-centric representation models a continuous distribution over the SE(3) space and allows us to perform grasping reactively by leveraging sampling-based MPC to optimize this value function.
arXiv Detail & Related papers (2022-06-29T18:47:05Z)
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- Robust Dual-Graph Regularized Moving Object Detection [11.487964611698933]
Moving object detection and the associated background-foreground separation are widely used in many applications.
We propose a robust dual-graph regularized moving object detection model based on the weighted nuclear norm regularization.
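The weighted nuclear norm penalty mentioned above admits a closed-form proximal step: shrink each singular value by its weight. The sketch below shows only this generic shrinkage operator, under the assumption of one weight per singular value in descending order; it is not the paper's full dual-graph model:

```python
import numpy as np

def weighted_svt(M, weights):
    """Proximal step for a weighted nuclear norm penalty.

    Computes the SVD of M and shrinks the i-th singular value by
    weights[i], clamping at zero, then reconstructs the matrix.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

Larger weights on the smaller singular values push the result toward low rank, which is what makes this penalty a natural fit for background modeling.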
arXiv Detail & Related papers (2022-04-25T19:40:01Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- NeuralDiff: Segmenting 3D objects that move in egocentric videos [92.95176458079047]
We study the problem of decomposing the observed 3D scene into a static background and a dynamic foreground.
This task is reminiscent of the classic background subtraction problem, but is significantly harder because all parts of the scene, static and dynamic, generate a large apparent motion.
In particular, we consider egocentric videos and further separate the dynamic component into objects and the actor that observes and moves them.
arXiv Detail & Related papers (2021-10-19T12:51:35Z)
- A Deep-Unfolded Reference-Based RPCA Network For Video Foreground-Background Separation [86.35434065681925]
This paper proposes a new deep-unfolding-based network design for the problem of Robust Principal Component Analysis (RPCA)
Unlike existing designs, our approach focuses on modeling the temporal correlation between the sparse representations of consecutive video frames.
Experimentation using the moving MNIST dataset shows that the proposed network outperforms a recently proposed state-of-the-art RPCA network in the task of video foreground-background separation.
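The classic (non-deep) RPCA baseline that such networks unroll can be sketched as alternating proximal steps: singular value thresholding for the low-rank background and entrywise soft thresholding for the sparse foreground. This is a simplified illustration of that baseline, not the paper's deep-unfolded architecture, and `lam` and `mu` are illustrative parameters:

```python
import numpy as np

def rpca_separate(D, lam=None, mu=1.0, n_iter=50):
    """Split a frame matrix D (pixels x frames) into a low-rank background L
    and a sparse foreground S by block coordinate descent on a regularized
    least-squares RPCA objective.
    """
    if lam is None:
        lam = 1.0 / np.sqrt(max(D.shape))   # common RPCA default
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Low-rank update: singular value thresholding of D - S.
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = U @ np.diag(np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Sparse update: entrywise soft thresholding of D - L.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
    return L, S
```

Stacking vectorized video frames as columns of D makes L the slowly varying background and S the moving objects; the deep-unfolded approach replaces these fixed proximal steps with learned layers and additionally models temporal correlation across frames.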
arXiv Detail & Related papers (2020-10-02T11:40:09Z)
- Learning-based Tracking of Fast Moving Objects [8.8456602191903]
Tracking fast moving objects, which appear as blurred streaks in video sequences, is a difficult task for standard trackers.
We present a tracking-by-segmentation approach implemented using state-of-the-art deep learning methods that performs near-realtime tracking on real-world video sequences.
arXiv Detail & Related papers (2020-05-04T19:20:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.