Personal Privacy Protection via Irrelevant Faces Tracking and Pixelation
in Video Live Streaming
- URL: http://arxiv.org/abs/2101.01060v2
- Date: Tue, 5 Jan 2021 14:01:09 GMT
- Authors: Jizhe Zhou, Chi-Man Pun
- Abstract summary: We develop a new method called Face Pixelation in Video Live Streaming (FPVLS) to generate automatic personal privacy filtering.
For fast and accurate pixelation of irrelevant people's faces, FPVLS is organized in a frame-to-video structure of two core stages.
On the video live streaming dataset we collected, FPVLS achieves satisfactory accuracy and real-time efficiency, and constrains the over-pixelation problem.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To date, privacy-protection-oriented pixelation tasks remain
labor-intensive and largely unstudied. With the prevalence of video live
streaming, establishing an online face pixelation mechanism during streaming
has become urgent. In this paper, we develop a new method called Face Pixelation in
Video Live Streaming (FPVLS) to generate automatic personal privacy filtering
during unconstrained streaming activities. Simply applying multi-face trackers
will encounter problems in target drifting, computing efficiency, and
over-pixelation. Therefore, for fast and accurate pixelation of irrelevant
people's faces, FPVLS is organized in a frame-to-video structure of two core
stages. On individual frames, FPVLS utilizes image-based face detection and
embedding networks to yield face vectors. In the raw trajectories generation
stage, the proposed Positioned Incremental Affinity Propagation (PIAP)
clustering algorithm leverages face vectors and positioned information to
quickly associate the same person's faces across frames. Such frame-wise
accumulated raw trajectories are likely to be intermittent and unreliable on
video level. Hence, we further introduce the trajectory refinement stage that
merges a proposal network with the two-sample test based on the Empirical
Likelihood Ratio (ELR) statistic to refine the raw trajectories. A Gaussian
filter is laid on the refined trajectories for final pixelation. On the video
live streaming dataset we collected, FPVLS achieves satisfactory accuracy and
real-time efficiency, and constrains the over-pixelation problem.
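The final pixelation step described above (a Gaussian filter laid on the refined face trajectories) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `pixelate_faces`, the `(x0, y0, x1, y1)` box format, and the `sigma` parameter are all assumptions for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixelate_faces(frame, face_boxes, sigma=8.0):
    """Blur each tracked face region of a frame with a Gaussian filter.

    frame: H x W x 3 uint8 image.
    face_boxes: list of (x0, y0, x1, y1) boxes, assumed to come from the
    refined trajectories for the current frame (hypothetical format).
    """
    out = frame.astype(np.float32).copy()
    for x0, y0, x1, y1 in face_boxes:
        region = out[y0:y1, x0:x1]
        # Blur spatial axes only (sigma 0 on the channel axis); larger
        # sigma means stronger obfuscation of the face region.
        out[y0:y1, x0:x1] = gaussian_filter(region, sigma=(sigma, sigma, 0))
    return out.clip(0, 255).astype(np.uint8)

# Example: blur a single face box in a synthetic frame.
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
blurred = pixelate_faces(frame, [(40, 20, 100, 80)])
```

In a live-streaming setting this per-frame blur would be driven by the trajectory output, so only faces confirmed as irrelevant are filtered and pixels outside the refined boxes are left untouched.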
Related papers
- STAC: Leveraging Spatio-Temporal Data Associations For Efficient
Cross-Camera Streaming and Analytics [0.0]
We propose an efficient cross-camera surveillance system that provides real-time analytics and inference under constrained network environments.
We integrate STAC with frame filtering and state-of-the-art compression to suit streaming characteristics.
We evaluate the performance of STAC on this dataset, measuring accuracy metrics and inference rate for completeness.
arXiv Detail & Related papers (2024-01-27T04:02:52Z) - Aggregating Long-term Sharp Features via Hybrid Transformers for Video
Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and present sharp frames using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z) - Differentiable Frequency-based Disentanglement for Aerial Video Action
Recognition [56.91538445510214]
We present a learning algorithm for human activity recognition in videos.
Our approach is designed for UAV videos, which are mainly acquired from obliquely placed dynamic cameras.
We conduct extensive experiments on the UAV Human dataset and the NEC Drone dataset.
arXiv Detail & Related papers (2022-09-15T22:16:52Z) - Efficient Video Deblurring Guided by Motion Magnitude [37.25713728458234]
We propose a novel framework that utilizes the motion magnitude prior (MMP) as guidance for efficient deep video deblurring.
The MMP consists of both spatial and temporal blur level information, which can be further integrated into an efficient recurrent neural network (RNN) for video deblurring.
arXiv Detail & Related papers (2022-07-27T08:57:48Z) - Spatio-Temporal Deformable Attention Network for Video Deblurring [21.514099863308676]
The key success factor of the video deblurring methods is to compensate for the blurry pixels of the mid-frame with the sharp pixels of the adjacent video frames.
We propose STDANet, which extracts the information of sharp pixels by considering the pixel-wise blur levels of the video frames.
arXiv Detail & Related papers (2022-07-22T03:03:08Z) - Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z) - Privacy-sensitive Objects Pixelation for Live Video Streaming [52.83247667841588]
We propose a novel Privacy-sensitive Objects Pixelation (PsOP) framework for automatic personal privacy filtering during live video streaming.
Our PsOP is extendable to any potential privacy-sensitive objects pixelation.
In addition to the pixelation accuracy boosting, experiments on the streaming video data we built show that the proposed PsOP can significantly reduce the over-pixelation ratio in privacy-sensitive object pixelation.
arXiv Detail & Related papers (2021-01-03T11:07:23Z) - FLAVR: Flow-Agnostic Video Representations for Fast Frame Interpolation [97.99012124785177]
FLAVR is a flexible and efficient architecture that uses 3D space-time convolutions to enable end-to-end learning and inference for video frame interpolation.
We demonstrate that FLAVR can serve as a useful self-supervised pretext task for action recognition, optical flow estimation, and motion magnification.
arXiv Detail & Related papers (2020-12-15T18:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.