Co-occurrence Background Model with Superpixels for Robust Background
Initialization
- URL: http://arxiv.org/abs/2003.12931v1
- Date: Sun, 29 Mar 2020 02:48:41 GMT
- Title: Co-occurrence Background Model with Superpixels for Robust Background
Initialization
- Authors: Wenjun Zhou, Yuheng Deng, Bo Peng, Dong Liang and Shun'ichi Kaneko
- Abstract summary: We develop a co-occurrence background model with superpixel segmentation.
Results obtained from the dataset of the challenging benchmark (SBMnet) validate its performance under various challenges.
- Score: 10.955692396874678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Background initialization is an important step in many high-level
applications of video processing, ranging from video surveillance to video
inpainting. However, this process is often affected by practical challenges such
as illumination changes, background motion, camera jitter and intermittent
movement. In this paper, we develop a co-occurrence background model with
superpixel segmentation for robust background initialization. We first
introduce a novel co-occurrence background modeling method called
Co-occurrence Pixel-Block Pairs (CPB) to generate a reliable initial background
model, and superpixel segmentation is utilized to further acquire the
spatial texture information of the foreground and background. Then, the initial
background can be determined by combining the foreground extraction results
with the superpixel segmentation information. Experimental results obtained from
the dataset of the challenging benchmark (SBMnet) validate its performance under
various challenges.
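The combination step described in the abstract can be illustrated with a short sketch. This is not the authors' CPB implementation; it only shows, under assumptions of my own, how a per-pixel foreground mask (which the pixel-block co-occurrence model would supply) can be refined with SLIC superpixels so that spatial texture information decides which regions enter the initial background. The function name and the 0.5 voting threshold are illustrative.

```python
# Minimal sketch (not the authors' CPB implementation): refine a per-pixel
# foreground mask with SLIC superpixels so that each superpixel is assigned
# to the background by majority vote. Assumes scikit-image and NumPy.
import numpy as np
from skimage.segmentation import slic

def superpixel_background_vote(frame, fg_mask, n_segments=300):
    """frame: HxWx3 image; fg_mask: HxW boolean per-pixel foreground mask."""
    segments = slic(frame, n_segments=n_segments, compactness=10, start_label=0)
    bg_mask = np.zeros(fg_mask.shape, dtype=bool)
    for label in np.unique(segments):
        region = segments == label
        # Assign the whole superpixel to the background only when most of
        # its pixels were judged as background by the pixel-level model.
        if fg_mask[region].mean() < 0.5:
            bg_mask[region] = True
    return bg_mask
```

A per-superpixel majority vote like this is one simple way to suppress isolated pixel-level errors before the initial background is assembled.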
Related papers
- TKG-DM: Training-free Chroma Key Content Generation Diffusion Model [9.939293311550655]
We present a novel Training-Free Chroma Key Content Generation Diffusion Model (TKG-DM).
Our proposed method is the first to explore the manipulation of the color aspects in initial noise for controlled background generation.
arXiv Detail & Related papers (2024-11-23T15:07:15Z)
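The summary above only hints at the mechanism (manipulating the colour statistics of the initial noise). The snippet below is a guess at that general idea rather than the TKG-DM algorithm: it shifts the per-channel mean of the initial Gaussian noise toward a target chroma-key colour before sampling. The function name and the strength parameter are hypothetical.

```python
# Illustrative only, not the TKG-DM method: bias the per-channel mean of the
# initial Gaussian noise toward a target colour before running a diffusion
# sampler, so the generated background drifts toward that colour.
import numpy as np

def color_shifted_noise(shape, target_rgb, strength=0.3, seed=0):
    """shape = (3, H, W); target_rgb = background colour scaled to [-1, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)                  # standard Gaussian init
    target = np.asarray(target_rgb, dtype=float).reshape(3, 1, 1)
    return noise + strength * target                    # shift means, keep variance
```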
- DART: Depth-Enhanced Accurate and Real-Time Background Matting [11.78381754863757]
Matting with a static background, often referred to as "Background Matting" (BGM), has garnered significant attention within the computer vision community.
We leverage the rich depth information provided by the RGB-Depth (RGB-D) cameras to enhance background matting performance in real-time.
arXiv Detail & Related papers (2024-02-24T14:10:17Z)
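As a rough, hypothetical illustration of how depth can assist background matting (not the DART pipeline): pixels measurably closer to the camera than a pre-captured empty-scene depth map can be used as a foreground seed for a colour-based matting stage. All names and the tolerance value are assumptions.

```python
# Rough illustration (not the DART method): use an RGB-D camera's depth
# channel to seed a foreground mask, assuming the static background's depth
# was captured once without the subject.
import numpy as np

def depth_foreground_seed(depth, bg_depth, tol_m=0.05):
    """depth, bg_depth: HxW arrays in metres; 0 marks missing readings."""
    valid = (depth > 0) & (bg_depth > 0)
    # Pixels noticeably closer to the camera than the empty scene are foreground.
    return valid & (bg_depth - depth > tol_m)
```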
- Pixel-Inconsistency Modeling for Image Manipulation Localization [59.968362815126326]
Digital image forensics plays a crucial role in image authentication and manipulation localization.
This paper presents a generalized and robust manipulation localization model through the analysis of pixel inconsistency artifacts.
Experiments show that our method successfully extracts inherent pixel-inconsistency forgery fingerprints.
arXiv Detail & Related papers (2023-09-30T02:54:51Z)
- Autoencoder-based background reconstruction and foreground segmentation with background noise estimation [1.3706331473063877]
We propose in this paper to model the background of a video sequence as a low dimensional manifold using an autoencoder.
The main novelty of the proposed model is that the autoencoder is also trained to predict the background noise, which makes it possible to compute a pixel-dependent threshold for each frame.
Although the proposed model does not use any temporal or motion information, it exceeds the state of the art for unsupervised background subtraction on the CDnet 2014 and LASIESTA datasets.
arXiv Detail & Related papers (2021-12-15T09:51:00Z)
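The thresholding idea described in the entry above can be written down directly. The sketch below assumes the autoencoder's outputs (reconstructed background and predicted per-pixel noise level) are already available; the function name and the factor k are mine, not the paper's.

```python
# Sketch of the noise-based thresholding step (not the authors' code): a pixel
# is foreground when its deviation from the reconstructed background exceeds
# k times the predicted noise level at that pixel.
import numpy as np

def segment_foreground(frame, background, noise_std, k=3.0):
    """frame, background: HxWxC floats; noise_std: HxW (or HxWxC) predicted noise."""
    residual = np.linalg.norm(frame.astype(float) - background.astype(float), axis=-1)
    threshold = k * np.mean(np.atleast_3d(noise_std), axis=-1)   # per-pixel threshold
    return residual > threshold                                  # boolean foreground mask
```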
- Saliency Enhancement using Superpixel Similarity [77.34726150561087]
Saliency Object Detection (SOD) has several applications in image analysis.
Deep-learning-based SOD methods are among the most effective, but they may miss foreground parts with similar colors.
We introduce a post-processing method named Saliency Enhancement over Superpixel Similarity (SESS).
We demonstrate that SESS can consistently and considerably improve the results of three deep-learning-based SOD methods on five image datasets.
arXiv Detail & Related papers (2021-12-01T17:22:54Z)
- rSVDdpd: A Robust Scalable Video Surveillance Background Modelling Algorithm [13.535770763481905]
We present a new video surveillance background modelling algorithm based on a robust singular value decomposition technique, rSVDdpd.
We also demonstrate the superiority of our proposed algorithm on a benchmark dataset and a new real-life video surveillance dataset in the presence of camera tampering.
arXiv Detail & Related papers (2021-09-22T12:20:44Z)
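For context on the entry above, a baseline (non-robust) low-rank background model takes only a few lines: stack the vectorised frames as columns of a matrix and keep a rank-r SVD approximation as the background. rSVDdpd replaces the plain SVD used below with a robust, outlier-resistant estimator; this sketch uses NumPy's ordinary SVD and is not the proposed algorithm.

```python
# Baseline illustration (plain SVD, not the robust rSVDdpd estimator):
# model the video background as a low-rank matrix whose columns are the
# vectorised frames; the residual corresponds to moving foreground.
import numpy as np

def lowrank_background(frames, rank=1):
    """frames: T x H x W grayscale video; returns a T x H x W background estimate."""
    T, H, W = frames.shape
    M = frames.reshape(T, -1).T.astype(float)       # (H*W) x T data matrix
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]     # rank-r approximation = background
    return L.T.reshape(T, H, W)
```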
- DeepMultiCap: Performance Capture of Multiple Characters Using Sparse Multiview Cameras [63.186486240525554]
DeepMultiCap is a novel method for multi-person performance capture using sparse multi-view cameras.
Our method can capture time-varying surface details without the need for pre-scanned template models.
arXiv Detail & Related papers (2021-05-01T14:32:13Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- PerceptionGAN: Real-world Image Construction from Provided Text through Perceptual Understanding [11.985768957782641]
We propose a method to generate better images by incorporating perceptual understanding in the discriminator module.
We show that the perceptual information included in the initial image is improved while modeling image distribution at multiple stages.
More importantly, the proposed method can be integrated into the pipeline of other state-of-the-art text-based image generation models.
arXiv Detail & Related papers (2020-07-02T09:23:08Z)
- The Power of Triply Complementary Priors for Image Compressive Sensing [89.14144796591685]
We propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors.
We then propose a novel hybrid plug-and-play framework based on the LRD model for image CS.
To make the optimization tractable, a simple yet effective algorithm is proposed to solve the resulting hybrid plug-and-play image CS problem.
arXiv Detail & Related papers (2020-05-16T08:17:44Z)
- Deep Blind Video Super-resolution [85.79696784460887]
We propose a deep convolutional neural network (CNN) model to solve video SR by a blur kernel modeling approach.
The proposed CNN model consists of motion blur estimation, motion estimation, and latent image restoration modules.
We show that the proposed algorithm is able to generate clearer images with finer structural details.
arXiv Detail & Related papers (2020-03-10T13:43:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.