Real or Virtual: A Video Conferencing Background Manipulation-Detection
System
- URL: http://arxiv.org/abs/2204.11853v1
- Date: Mon, 25 Apr 2022 08:14:11 GMT
- Title: Real or Virtual: A Video Conferencing Background Manipulation-Detection
System
- Authors: Ehsan Nowroozi, Yassine Mekdad, Mauro Conti, Simone Milani, Selcuk
Uluagac and Berrin Yanikoglu
- Abstract summary: We present a detection strategy to distinguish between real and virtual video conferencing user backgrounds.
We demonstrate the robustness of our detector against different adversarial attacks that the adversary considers.
Our performance results show that we can distinguish a real from a virtual background with an accuracy of 99.80%.
- Score: 25.94894351460089
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, the popularity and widespread use of last-generation video
conferencing technologies have driven exponential growth in their market size.
Such technology allows participants in different geographic regions to have a
virtual face-to-face meeting. Additionally, it enables users to employ a
virtual background to conceal their own environment due to privacy concerns or
to reduce distractions, particularly in professional settings. Nevertheless, in
scenarios where users should not hide their actual locations, they may mislead
other participants by passing off a virtual background as a real one.
Therefore, it is crucial to develop tools and strategies to detect the
authenticity of the considered virtual background. In this paper, we present a
detection strategy to distinguish between real and virtual video conferencing
user backgrounds. We demonstrate that our detector is robust against two attack
scenarios. The first scenario considers the case where the detector is unaware
of the attacks, and in the second scenario, we make the detector aware of the
adversarial attacks, which we refer to as Adversarial Multimedia Forensics
(i.e., the forensically edited frames are included in the training set). Given
the lack of a publicly available dataset of virtual and real backgrounds for
video conferencing, we created our own dataset and made it publicly available
[1]. Then, we demonstrate the robustness of our detector against different
adversarial attacks that an adversary may consider. Ultimately, our detector's
performance is significant against the CRSPAM1372 [2] features and against
post-processing operations, such as geometric transformations with different
quality factors, that the attacker may choose. Moreover, our performance
results show that we can distinguish a real from a virtual background with an
accuracy of 99.80%.
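As a rough illustration of the general approach (hand-crafted co-occurrence texture features fed to a supervised classifier), the sketch below uses generic gray-level co-occurrence (GLCM) statistics from scikit-image and an SVM. This is a hedged stand-in, not the authors' pipeline: the actual CRSPAM1372 [2] feature set, dataset layout, and classifier settings differ, and the directory names and parameters here are assumptions.

```python
# Illustrative sketch: classify video-conference frames as having a real or a
# virtual background with co-occurrence texture features and an SVM.
# NOTE: GLCM statistics stand in for the paper's CRSPAM1372 features; the
# "frames/real" and "frames/virtual" layout and all parameters are assumptions.
from pathlib import Path

import numpy as np
from skimage.io import imread
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def glcm_features(frame_path):
    """Extract a small co-occurrence feature vector from one frame."""
    gray = (imread(frame_path, as_gray=True) * 255).astype(np.uint8)
    glcm = graycomatrix(gray, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])


def load_dataset(root):
    """Expects root/real/*.png and root/virtual/*.png (assumed layout)."""
    X, y = [], []
    for label, cls in enumerate(["real", "virtual"]):
        for frame in sorted(Path(root, cls).glob("*.png")):
            X.append(glcm_features(frame))
            y.append(label)
    return np.array(X), np.array(y)


if __name__ == "__main__":
    X, y = load_dataset("frames")  # assumed dataset directory
    # For an attack-aware detector (Adversarial Multimedia Forensics), the
    # training portion would also include post-processed or adversarially
    # edited frames (e.g. rescaled, rotated, or recompressed copies).
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)
    clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Working frame by frame keeps the sketch simple; a per-video decision could aggregate the per-frame predictions, for example by majority vote.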
Related papers
- Exploring Audio Editing Features as User-Centric Privacy Defenses Against Large Language Model(LLM) Based Emotion Inference Attacks [0.0]
Existing privacy-preserving methods compromise usability and security, limiting their adoption in practical scenarios.
This paper introduces a novel, user-centric approach that leverages familiar audio editing techniques, specifically pitch and tempo manipulation, to protect emotional privacy without sacrificing usability.
Our experiments, conducted on three distinct datasets, demonstrate that pitch and tempo manipulation effectively obfuscates emotional data.
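As a minimal sketch of the pitch and tempo manipulation described above (the input file name and the shift/stretch amounts are illustrative assumptions, not the parameters evaluated in the paper):

```python
# Illustrative pitch/tempo obfuscation of a voice recording using librosa.
# The 2-semitone shift and 10% speed-up are arbitrary example values.
import librosa
import soundfile as sf

y, sr = librosa.load("utterance.wav", sr=None)                     # assumed input
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)       # raise pitch
y_obfuscated = librosa.effects.time_stretch(y_shifted, rate=1.1)   # speed up
sf.write("utterance_obfuscated.wav", y_obfuscated, sr)
```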
arXiv Detail & Related papers (2025-01-30T20:07:44Z)
- Tracking Virtual Meetings in the Wild: Re-identification in Multi-Participant Virtual Meetings [0.0]
We introduce a novel approach to track and re-identify participants in remote video meetings.
Our approach reduces the error rate by 95% on average compared to YOLO-based tracking methods as a baseline.
arXiv Detail & Related papers (2024-09-15T19:37:37Z)
- Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience [11.130411904676095]
Eye tracking data, if exposed, can be used for re-identification attacks.
We develop a methodology to evaluate real-time privacy mechanisms for interactive VR applications.
arXiv Detail & Related papers (2024-02-12T14:53:12Z)
- Synthetic-To-Real Video Person Re-ID [57.937189569211505]
Person re-identification (Re-ID) is an important task and has significant applications for public security and information forensics.
We investigate a novel and challenging setting of Re-ID, i.e., cross-domain video-based person Re-ID.
We utilize synthetic video datasets as the source domain for training and real-world videos for testing.
arXiv Detail & Related papers (2024-02-03T10:19:21Z)
- Deep Motion Masking for Secure, Usable, and Scalable Real-Time Anonymization of Virtual Reality Motion Data [49.68609500290361]
Recent studies have demonstrated that the motion tracking "telemetry" data used by nearly all VR applications is as uniquely identifiable as a fingerprint scan.
We present in this paper a state-of-the-art VR identification model that can convincingly bypass known defensive countermeasures.
arXiv Detail & Related papers (2023-11-09T01:34:22Z)
- Can Virtual Reality Protect Users from Keystroke Inference Attacks? [23.587497604556823]
We show that despite assumptions of enhanced privacy, VR is unable to shield its users from side-channel attacks that steal private information.
This vulnerability arises from VR's greatest strength, its immersive and interactive nature.
arXiv Detail & Related papers (2023-10-24T21:19:38Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- NPVForensics: Jointing Non-critical Phonemes and Visemes for Deepfake Detection [50.33525966541906]
Existing multimodal detection methods capture audio-visual inconsistencies to expose Deepfake videos.
We propose a novel Deepfake detection method to mine the correlation between Non-critical Phonemes and Visemes, termed NPVForensics.
Our model can be easily adapted to the downstream Deepfake datasets with fine-tuning.
arXiv Detail & Related papers (2023-06-12T06:06:05Z)
- Audio-Visual Person-of-Interest DeepFake Detection [77.04789677645682]
The aim of this work is to propose a deepfake detector that can cope with the wide variety of manipulation methods and scenarios encountered in the real world.
We leverage a contrastive learning paradigm to learn the moving-face and audio segment embeddings that are most discriminative for each identity.
Our method can detect both single-modality (audio-only, video-only) and multi-modality (audio-video) attacks, and is robust to low-quality or corrupted videos.
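A minimal sketch of the kind of contrastive objective the summary refers to, written here as a generic InfoNCE-style loss over paired per-identity embeddings; the encoders, pairing strategy, and hyperparameters of the actual method are not reproduced, and everything below is an assumption for illustration.

```python
# Generic InfoNCE-style contrastive loss over paired identity embeddings.
# This only illustrates the objective; it is not the paper's model.
import torch
import torch.nn.functional as F


def info_nce(anchor, positive, temperature=0.07):
    """anchor, positive: (batch, dim) embeddings of two segments from the same
    identity; other items in the batch act as negatives."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature              # (batch, batch) similarities
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)       # matched pairs on the diagonal


# Usage with random stand-in embeddings (e.g. face and audio segment features):
loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```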
arXiv Detail & Related papers (2022-04-06T20:51:40Z)
- Leveraging Real Talking Faces via Self-Supervision for Robust Forgery Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Do Not Deceive Your Employer with a Virtual Background: A Video Conferencing Manipulation-Detection System [35.82676654231492]
We study the feasibility of an efficient tool to detect whether a videoconferencing user background is real.
Our experiments confirm that cross co-occurrence matrices improve the robustness of the detector against different kinds of attacks.
arXiv Detail & Related papers (2021-06-29T07:31:21Z)