Robust Dual-Graph Regularized Moving Object Detection
- URL: http://arxiv.org/abs/2204.11939v1
- Date: Mon, 25 Apr 2022 19:40:01 GMT
- Title: Robust Dual-Graph Regularized Moving Object Detection
- Authors: Jing Qin, Ruilong Shen, Ruihan Zhu and Biyun Xie
- Abstract summary: Moving object detection and its associated background-foreground separation have been widely used in many applications.
We propose a robust dual-graph regularized moving object detection model based on the weighted nuclear norm regularization.
- Score: 11.487964611698933
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Moving object detection and its associated background-foreground separation
have been widely used in many applications, including computer vision,
transportation and surveillance. Due to the presence of the static background,
a video can be naturally decomposed into a low-rank background and a sparse
foreground. Many regularization techniques, such as matrix nuclear norm, have
been imposed on the background. Meanwhile, sparsity- or smoothness-based
regularizations, such as total variation and $\ell_1$, can be imposed on the
foreground. Moreover, graph Laplacians are further imposed to capture the
complicated geometry of background images. Recently, weighted regularization
techniques including the weighted nuclear norm regularization have been
proposed in the image processing community to promote adaptive sparsity while
achieving efficient performance. In this paper, we propose a robust dual-graph
regularized moving object detection model based on the weighted nuclear norm
regularization, which is solved by the alternating direction method of
multipliers (ADMM). Numerical experiments on body movement data sets have
demonstrated the effectiveness of this method in separating moving objects from
the background, and its great potential in robotic applications.
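To make the decomposition concrete, such models are typically written by stacking vectorized frames as the columns of a data matrix $D$ and splitting it into a low-rank background $L$ and a sparse foreground $S$. The exact weights, penalty parameters, and placement of the graph terms below are illustrative assumptions rather than the paper's verbatim objective:
$$\min_{L,S}\ \|L\|_{w,*} + \lambda \|S\|_1 + \gamma_1 \operatorname{tr}(L^\top \Phi_s L) + \gamma_2 \operatorname{tr}(L \Phi_t L^\top) \quad \text{s.t.}\ D = L + S,$$
where $\|L\|_{w,*} = \sum_i w_i \sigma_i(L)$ is the weighted nuclear norm and $\Phi_s$, $\Phi_t$ are graph Laplacians encoding spatial and temporal structure. The NumPy sketch below shows the flavor of an ADMM iteration for this family of problems, with the graph terms dropped and a common reweighting heuristic; it is a minimal illustration, not the authors' implementation.
```python
import numpy as np

def separate_background_foreground(D, lam=None, n_iters=100, tol=1e-6):
    """Toy RPCA-style ADMM: split D (frames as columns) into a low-rank
    background L and a sparse foreground S.  The reweighting rule and the
    penalty parameter are common heuristics, not the paper's exact choices,
    and the dual-graph terms are omitted for brevity."""
    D = np.asarray(D, dtype=float)
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # classical RPCA default
    mu = 1.25 / np.linalg.norm(D, 2)          # penalty parameter heuristic
    S = np.zeros((m, n))
    Y = np.zeros((m, n))                      # dual variable for D = L + S
    for _ in range(n_iters):
        # L-update: weighted singular value thresholding.  Small singular
        # values get large weights (heavier shrinkage), large ones are kept,
        # which is the adaptive behavior the weighted nuclear norm promotes.
        M = D - S + Y / mu
        U, sv, Vt = np.linalg.svd(M, full_matrices=False)
        w = 1.0 / (sv + 1e-3)                 # non-decreasing weights -> closed-form prox
        L = U @ np.diag(np.maximum(sv - w / mu, 0.0)) @ Vt
        # S-update: elementwise soft-thresholding, the prox of the l1 norm.
        R = D - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent on the constraint D = L + S.
        Y += mu * (D - L - S)
        if np.linalg.norm(D - L - S, "fro") <= tol * np.linalg.norm(D, "fro"):
            break
    return L, S
```
For a grayscale video whose frames are stacked as the columns of D, L then estimates the static background and S the moving objects.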
Related papers
- Learning Spatial-Temporal Regularized Tensor Sparse RPCA for Background Subtraction [6.825970634402847]
We present a spatial-temporal regularized tensor sparse RPCA algorithm for precise background subtraction.
Experiments are performed on six publicly available background subtraction datasets.
arXiv Detail & Related papers (2023-09-27T11:21:31Z)
- View Consistent Purification for Accurate Cross-View Localization [59.48131378244399]
This paper proposes a fine-grained self-localization method for outdoor robotics.
The proposed method addresses limitations in existing cross-view localization methods.
It is the first sparse visual-only method that enhances perception in dynamic environments.
arXiv Detail & Related papers (2023-08-16T02:51:52Z)
- Human Motion Detection Based on Dual-Graph and Weighted Nuclear Norm Regularizations [15.253015329378286]
We propose a robust dual-graph regularized moving object detection model based on a novel weighted nuclear norm regularization and spatial and temporal graph Laplacians.
Experiments on realistic human motion data sets have demonstrated the robustness and effectiveness of this approach in separating moving objects from the background, and its enormous potential in robotic applications.
arXiv Detail & Related papers (2023-04-10T21:58:39Z)
- Geometric-aware Pretraining for Vision-centric 3D Object Detection [77.7979088689944]
We propose a novel geometric-aware pretraining framework called GAPretrain.
GAPretrain serves as a plug-and-play solution that can be flexibly applied to multiple state-of-the-art detectors.
We achieve 46.2 mAP and 55.5 NDS on the nuScenes val set using the BEVFormer method, with a gain of 2.7 and 2.1 points, respectively.
arXiv Detail & Related papers (2023-04-06T14:33:05Z)
- Dyna-DepthFormer: Multi-frame Transformer for Self-Supervised Depth Estimation in Dynamic Scenes [19.810725397641406]
We propose a novel Dyna-DepthFormer framework, which predicts scene depth and 3D motion field jointly.
Our contributions are two-fold. First, we leverage multi-view correlation through a series of self- and cross-attention layers in order to obtain enhanced depth feature representation.
Second, we propose a warping-based Motion Network to estimate the motion field of dynamic objects without using semantic prior.
arXiv Detail & Related papers (2023-01-14T09:43:23Z)
- Pixelated Reconstruction of Foreground Density and Background Surface Brightness in Gravitational Lensing Systems using Recurrent Inference Machines [116.33694183176617]
We use a neural network based on the Recurrent Inference Machine to reconstruct an undistorted image of the background source and the lens mass density distribution as pixelated maps.
When compared to more traditional parametric models, the proposed method is significantly more expressive and can reconstruct complex mass distributions.
arXiv Detail & Related papers (2023-01-10T19:00:12Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Object-centric and memory-guided normality reconstruction for video anomaly detection [56.64792194894702]
This paper addresses the anomaly detection problem in video surveillance.
Due to the inherent rarity and heterogeneity of abnormal events, the problem is addressed with a normality modeling strategy.
Our model learns object-centric normal patterns without seeing anomalous samples during training.
arXiv Detail & Related papers (2022-03-07T19:28:39Z)
- Dynamic Background Subtraction by Generative Neural Networks [8.75682288556859]
We propose a new background subtraction method called DBSGen.
It uses two generative neural networks, one for dynamic motion removal and another for background generation.
The proposed method has a unified framework that can be optimized in an end-to-end and unsupervised fashion.
arXiv Detail & Related papers (2022-02-10T21:29:10Z)
- Regularity Learning via Explicit Distribution Modeling for Skeletal Video Anomaly Detection [43.004613173363566]
A novel Motion Embedder (ME) is proposed to provide a pose motion representation from the probability perspective.
A novel task-specific Spatial-Temporal Transformer (STT) is deployed for self-supervised pose sequence reconstruction.
MoPRL achieves state-of-the-art performance, with an average improvement of 4.7% AUC on several challenging datasets.
arXiv Detail & Related papers (2021-12-07T11:52:25Z)
- Leveraging Spatial and Photometric Context for Calibrated Non-Lambertian Photometric Stereo [61.6260594326246]
We introduce an efficient fully-convolutional architecture that can leverage both spatial and photometric context simultaneously.
Using separable 4D convolutions and 2D heat-maps reduces the model size and makes the network more efficient (a generic sketch of this factorization follows this entry).
arXiv Detail & Related papers (2021-03-22T18:06:58Z)
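The separable 4D convolution mentioned in the entry above can be illustrated with a short, generic PyTorch sketch. The tensor layout (two axes for a per-pixel observation/photometric map, two for the image plane) and the module name are assumptions for illustration, not the paper's actual architecture; the point is that a dense 4D kernel is replaced by two much cheaper 2D convolutions.
```python
import torch
import torch.nn as nn

class Separable4dConv(nn.Module):
    """Factorized '4D' convolution: one 2D convolution over the observation
    axes (U, V) followed by one over the image axes (H, W), instead of a
    dense 4D kernel over all four axes at once."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.obs_conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2)
        self.spatial_conv = nn.Conv2d(out_ch, out_ch, k, padding=k // 2)

    def forward(self, x):
        # x: (B, C, U, V, H, W) -- U, V index the observation map, H, W the image.
        b, c, u, v, h, w = x.shape
        # Convolve over (U, V): fold the spatial axes into the batch dimension.
        t = x.permute(0, 4, 5, 1, 2, 3).reshape(b * h * w, c, u, v)
        t = self.obs_conv(t)
        oc = t.shape[1]
        # Convolve over (H, W): fold the observation axes into the batch dimension.
        t = t.reshape(b, h, w, oc, u, v).permute(0, 4, 5, 3, 1, 2)
        t = t.reshape(b * u * v, oc, h, w)
        t = self.spatial_conv(t)
        # Restore the (B, C_out, U, V, H, W) layout.
        return t.reshape(b, u, v, oc, h, w).permute(0, 3, 1, 2, 4, 5)
```
For example, `Separable4dConv(3, 16)(torch.randn(2, 3, 8, 8, 32, 32))` returns a tensor of shape `(2, 16, 8, 8, 32, 32)`, touching each axis pair with a 2D kernel instead of a full 4D one.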
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.