Do Not Deceive Your Employer with a Virtual Background: A Video
Conferencing Manipulation-Detection System
- URL: http://arxiv.org/abs/2106.15130v1
- Date: Tue, 29 Jun 2021 07:31:21 GMT
- Title: Do Not Deceive Your Employer with a Virtual Background: A Video
Conferencing Manipulation-Detection System
- Authors: Mauro Conti, Simone Milani, Ehsan Nowroozi, Gabriele Orazi
- Abstract summary: We study the feasibility of an efficient tool to detect whether a video conferencing user's background is real.
Our experiments confirm that cross co-occurrence matrices improve the robustness of the detector against different kinds of attacks.
- Score: 35.82676654231492
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Last-generation video conferencing software allows users to apply a virtual background that conceals their personal environment for privacy reasons, especially in official meetings with other employers. On the other hand, users may want to deceive the other participants by using a virtual background to hide where they really are. In this case, tools that detect when a virtual background is being used to mislead meeting participants play an important role. Moreover, such detectors must prove robust against different kinds of attacks, since a malicious user can fool the detector by applying a set of adversarial editing steps to the video that conceal any revealing footprint. In this paper, we study the feasibility of an efficient tool to detect whether a video conferencing user's background is real. In particular, we provide the first tool that computes pixel co-occurrence matrices and uses them to search for inconsistencies among spectral and spatial bands. Our experiments confirm that cross co-occurrence matrices improve the robustness of the detector against different kinds of attacks. The detector's performance is especially noteworthy with color SPAM features, and it remains robust against post-processing such as geometric transformations, filtering, contrast enhancement, and JPEG compression at different quality factors.
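For intuition, the sketch below shows one way a cross-band co-occurrence matrix could be computed with NumPy: residuals from two colour channels are paired at a small spatial offset and histogrammed. The high-pass filter, truncation threshold, and offset are illustrative assumptions, not necessarily the configuration used by the authors.

```python
import numpy as np

def cross_band_cooccurrence(band_a, band_b, offset=(0, 1), T=2):
    """Co-occurrence matrix between two spectral bands of the same frame.

    band_a, band_b : 2-D uint8 arrays (e.g. the R and G channels).
    offset         : spatial displacement (dy, dx) between the paired samples.
    T              : residuals are truncated to [-T, T], giving a (2T+1)^2 matrix.
    """
    # Horizontal first-difference residuals, truncated as in SPAM-like features;
    # the exact high-pass filter used in the paper may differ.
    ra = np.clip(np.diff(band_a.astype(np.int16), axis=1), -T, T)
    rb = np.clip(np.diff(band_b.astype(np.int16), axis=1), -T, T)

    dy, dx = offset
    h, w = ra.shape
    # Pair each residual in band A with the residual at (y+dy, x+dx) in band B.
    a = ra[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
    b = rb[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]

    # Joint histogram of the truncated residual pairs, normalised to sum to 1.
    mat, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                               bins=2 * T + 1, range=[[-T, T], [-T, T]])
    return mat / mat.sum()

# Example: cross co-occurrences between the R and G channels of a random frame.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
feature_vector = cross_band_cooccurrence(frame[:, :, 0], frame[:, :, 1]).ravel()
```

Feeding such features from all channel pairs to a classifier would mirror the general pipeline described in the abstract; the classifier choice here is left open.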
Related papers
- Trade-offs in Privacy-Preserving Eye Tracking through Iris Obfuscation: A Benchmarking Study [44.44776028287441]
We benchmark blurring, noising, downsampling, rubber sheet model, and iris style transfer to obfuscate user identity.
Our experiments show that canonical image processing methods like blurring and noising have only a marginal impact on deep learning-based tasks.
While downsampling, the rubber sheet model, and iris style transfer are all effective in hiding user identifiers, iris style transfer, at a higher computational cost, outperforms the others on both utility tasks.
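For illustration only, the snippet below shows how the simpler obfuscations named here (blurring, noising, downsampling) could be applied with OpenCV; the kernel size, noise level, and scale factor are placeholder assumptions rather than the benchmark's settings.

```python
import cv2
import numpy as np

def blur(img, ksize=15):
    # Gaussian blurring with an (assumed) 15x15 kernel.
    return cv2.GaussianBlur(img, (ksize, ksize), 0)

def add_noise(img, sigma=20.0):
    # Additive Gaussian noise, clipped back to the valid 8-bit range.
    noisy = img.astype(np.float32) + np.random.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def downsample(img, factor=4):
    # Downsample and upsample back, discarding fine iris texture.
    h, w = img.shape[:2]
    small = cv2.resize(img, (w // factor, h // factor), interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)

# Stand-in for a grayscale iris crop (a real pipeline would load an eye image).
iris = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
obfuscated = {"blur": blur(iris), "noise": add_noise(iris), "down": downsample(iris)}
```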
arXiv Detail & Related papers (2025-04-14T14:29:38Z) - Exploring Audio Editing Features as User-Centric Privacy Defenses Against Large Language Model (LLM) Based Emotion Inference Attacks [0.0]
Existing privacy-preserving methods compromise usability and security, limiting their adoption in practical scenarios.
This paper introduces a novel, user-centric approach that leverages familiar audio editing techniques, specifically pitch and tempo manipulation, to protect emotional privacy without sacrificing usability.
Our experiments, conducted on three distinct datasets, demonstrate that pitch and tempo manipulation effectively obfuscates emotional data.
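As a rough sketch of the kind of editing described here, the snippet below shifts pitch and changes tempo with librosa; the two-semitone shift, the 10% speed-up, and the file names are illustrative assumptions, not the paper's protocol.

```python
import librosa
import soundfile as sf

# Load a speech clip at its native sample rate (hypothetical file name).
audio, sr = librosa.load("utterance.wav", sr=None)

# Pitch manipulation: shift up by two semitones (example value).
pitched = librosa.effects.pitch_shift(audio, sr=sr, n_steps=2)

# Tempo manipulation: speed up by 10% without changing pitch (example value).
edited = librosa.effects.time_stretch(pitched, rate=1.1)

sf.write("utterance_obfuscated.wav", edited, sr)
```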
arXiv Detail & Related papers (2025-01-30T20:07:44Z) - Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application, or combining differential privacy with other privacy techniques within data samples.
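A minimal sketch of the selective-noising idea, assuming a Laplace mechanism and a hand-drawn binary mask over the sensitive region; the paper's actual formulation of masked DP may differ.

```python
import numpy as np

def masked_laplace_noise(frame, sensitive_mask, scale=25.0):
    """Add Laplace noise only where sensitive_mask is True.

    frame          : H x W x C uint8 image.
    sensitive_mask : H x W boolean array marking the sensitive region
                     (e.g. a person's face); non-sensitive pixels pass through.
    scale          : Laplace scale parameter (illustrative value).
    """
    noise = np.random.laplace(0.0, scale, frame.shape)
    noisy = frame.astype(np.float32) + noise * sensitive_mask[..., None]
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Example: protect only the top-left quadrant of a frame.
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
mask = np.zeros(frame.shape[:2], dtype=bool)
mask[:120, :160] = True
protected = masked_laplace_noise(frame, mask)
```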
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Deepfake detection in videos with multiple faces using geometric-fakeness features [79.16635054977068]
Deepfakes of victims or public figures can be used by fraudsters for blackmail, extortion, and financial fraud.
In our research we propose to use geometric-fakeness features (GFF) that characterize the dynamic degree of a face's presence in a video.
We employ our approach to analyze videos in which multiple faces are present simultaneously.
arXiv Detail & Related papers (2024-10-10T13:10:34Z) - OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z) - Real or Virtual: A Video Conferencing Background Manipulation-Detection
System [25.94894351460089]
We present a detection strategy to distinguish between real and virtual video conferencing user backgrounds.
We demonstrate the robustness of our detector against different adversarial attacks that an adversary might apply.
Our performance results show that we can distinguish a real from a virtual background with an accuracy of 99.80%.
arXiv Detail & Related papers (2022-04-25T08:14:11Z) - Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
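The sketch below illustrates one common form such a contrastive objective can take, pairing face and audio-segment embeddings of the same identity within a batch; the InfoNCE-style loss, the temperature, and the embedding sizes are assumptions for illustration, not the authors' exact training setup.

```python
import torch
import torch.nn.functional as F

def identity_contrastive_loss(face_emb, audio_emb, temperature=0.07):
    """InfoNCE-style loss pulling together face/audio embeddings of the same
    identity (matched by batch position) and pushing apart other identities.

    face_emb, audio_emb : (batch, dim) tensors from separate encoders.
    """
    face_emb = F.normalize(face_emb, dim=1)
    audio_emb = F.normalize(audio_emb, dim=1)
    logits = face_emb @ audio_emb.t() / temperature   # pairwise similarities
    targets = torch.arange(face_emb.size(0))          # positives on the diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Example with random embeddings standing in for encoder outputs.
face = torch.randn(8, 128)
audio = torch.randn(8, 128)
loss = identity_contrastive_loss(face, audio)
```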
arXiv Detail & Related papers (2022-04-06T20:51:40Z) - Privacy Aware Person Detection in Surveillance Data [4.727475863373813]
Crowd management relies on inspection of surveillance video either by operators or by object detection models.
Transferring video from the camera to remote infrastructure may open the door to extracting additional information that infringes on privacy.
In this paper, we use adversarial training to obtain a lightweight obfuscator that transforms video frames to only retain the necessary information for person detection.
arXiv Detail & Related papers (2021-10-28T14:49:21Z) - Privacy-Preserving Video Classification with Convolutional Neural
Networks [8.51142156817993]
We propose a privacy-preserving implementation of video classification with convolutional neural networks based on the single-frame method.
We evaluate our proposed solution in an application for private human emotion recognition.
arXiv Detail & Related papers (2021-02-06T05:05:31Z) - FakeBuster: A DeepFakes Detection Tool for Video Conferencing Scenarios [9.10316289334594]
This paper proposes a new DeepFake detector FakeBuster for detecting impostors during video conferencing and manipulated faces on social media.
FakeBuster is a standalone deep learning-based solution that enables a user to detect whether another person's video is manipulated or spoofed during a video conferencing meeting.
arXiv Detail & Related papers (2021-01-09T09:06:08Z) - VideoForensicsHQ: Detecting High-quality Manipulated Face Videos [77.60295082172098]
We show how the performance of forgery detectors depends on the presence of artefacts that the human eye can see.
We introduce a new benchmark dataset for face video forgery detection, of unprecedented quality.
arXiv Detail & Related papers (2020-05-20T21:17:43Z)