Designing Indicators to Combat Fake Media
- URL: http://arxiv.org/abs/2010.00544v1
- Date: Thu, 1 Oct 2020 16:58:12 GMT
- Title: Designing Indicators to Combat Fake Media
- Authors: Imani N. Sherman, Elissa M. Redmiles, Jack W. Stokes
- Abstract summary: This research designs and investigates the use of provenance indicators to help users identify fake videos.
We first interview users regarding their experiences with different misinformation modes.
Then, we conduct a participatory design study to develop fake video indicators.
- Score: 24.257090478689815
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growth of misinformation technology necessitates methods to identify
fake videos. One approach to preventing the consumption of these fake videos is
provenance, which allows the user to authenticate media content against its
original source. This research designs and investigates the use of provenance
indicators to help users identify fake videos. We first interview users regarding
their experiences with different misinformation modes (text, image, video) to guide
the design of indicators within users' existing perspectives. Then, we conduct
a participatory design study to develop fake video indicators.
Finally, we evaluate participant-designed indicators via both expert
evaluations and quantitative surveys with a large group of end-users. Our
results provide concrete design guidelines for the emerging issue of fake
videos. Our findings also raise concerns regarding users' tendency to
overgeneralize from misinformation warning messages, suggesting the need for
further research on warning design in the ongoing fight against misinformation.
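As a purely illustrative sketch of the kind of check that could sit behind such a provenance indicator (not the scheme studied in this paper), the snippet below signs a video hash on the publisher side and maps the verification result to an indicator label; the function names, labels, and key handling are hypothetical and assume a recent version of the `cryptography` package.
```python
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_video(video_bytes: bytes, private_key: Ed25519PrivateKey) -> bytes:
    """Publisher side: sign the SHA-256 digest of the video."""
    digest = hashlib.sha256(video_bytes).digest()
    return private_key.sign(digest)


def provenance_indicator(video_bytes: bytes, signature: bytes,
                         publisher_key: Ed25519PublicKey) -> str:
    """Player side: map the verification result to an indicator label
    (hypothetical wording, not the indicators designed in the paper)."""
    digest = hashlib.sha256(video_bytes).digest()
    try:
        publisher_key.verify(signature, digest)
        return "VERIFIED: content matches the publisher's signed original"
    except InvalidSignature:
        return "WARNING: content does not match the claimed source"


if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    video = b"...raw video bytes..."
    sig = sign_video(video, key)
    print(provenance_indicator(video, sig, key.public_key()))         # VERIFIED
    print(provenance_indicator(video + b"x", sig, key.public_key()))  # WARNING
```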
Related papers
- How Unique is Whose Web Browser? The role of demographics in browser fingerprinting among US users [50.699390248359265]
Browser fingerprinting can be used to identify and track users across the Web, even without cookies.
This technique and resulting privacy risks have been studied for over a decade.
We provide a first-of-its-kind dataset to enable further research.
arXiv Detail & Related papers (2024-10-09T14:51:58Z) - What Matters in Explanations: Towards Explainable Fake Review Detection Focusing on Transformers [45.55363754551388]
Customers' reviews and feedback play a crucial role on e-commerce platforms like Amazon, Zalando, and eBay.
There is a prevailing concern that sellers often post fake or spam reviews to deceive potential customers and manipulate their opinions about a product.
We propose an explainable framework that detects fake reviews with high precision and provides explanations for the content it identifies as fraudulent.
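The framework itself is not specified in this summary; as a generic, hypothetical illustration of explainable review classification, the sketch below scores a review with an off-the-shelf Hugging Face text-classification pipeline (a sentiment model standing in for a fake-review detector) and ranks words by a simple occlusion test, which is not the paper's method.
```python
from transformers import pipeline

# Stand-in classifier: any review classifier exposed as a text-classification
# pipeline would do; this public sentiment model is used only so the sketch runs.
clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")


def occlusion_explanation(review: str):
    """Generic occlusion explanation: drop each word and measure how much the
    classifier's confidence in its original label changes."""
    base = clf(review)[0]
    words = review.split()
    scores = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        out = clf(reduced)[0]
        # Confidence drop for the originally predicted label.
        drop = base["score"] - (out["score"] if out["label"] == base["label"]
                                else 1.0 - out["score"])
        scores.append((words[i], round(drop, 3)))
    return base["label"], sorted(scores, key=lambda x: -x[1])


label, important_words = occlusion_explanation(
    "Amazing product!!! Best ever, five stars, buy now!!!")
```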
arXiv Detail & Related papers (2024-07-24T13:26:02Z) - The Tug-of-War Between Deepfake Generation and Detection [4.62070292702111]
Multimodal generative models are rapidly evolving, leading to a surge in the generation of realistic video and audio.
Deepfake videos, which can convincingly impersonate individuals, have particularly garnered attention due to their potential misuse.
This survey paper examines the dual landscape of deepfake video generation and detection, emphasizing the need for effective countermeasures.
arXiv Detail & Related papers (2024-07-08T17:49:41Z) - Viblio: Introducing Credibility Signals and Citations to Video-Sharing Platforms [8.832571289776256]
Viblio is a prototype system, designed around participants' needs, that enables YouTube users to view and add citations while watching a video.
From an evaluation with 12 people, all participants found Viblio to be intuitive and useful in the process of evaluating a video's credibility.
arXiv Detail & Related papers (2024-02-27T05:21:39Z) - A Data-Driven Approach for Finding Requirements Relevant Feedback from TikTok and YouTube [37.87427796354001]
This study delves into the potential of TikTok and YouTube, two widely used social media platforms that focus on video content.
We evaluated the prospect of videos as a source of user feedback by analyzing audio and visual text, and metadata (i.e., description/title) from 6276 videos of 20 popular products across various industries.
We found that product ratings (feature, design, performance), bug reports, and usage tutorials are persistent themes across the videos.
arXiv Detail & Related papers (2023-05-02T21:47:06Z) - Fighting Malicious Media Data: A Survey on Tampering Detection and Deepfake Detection [115.83992775004043]
Recent advances in deep learning, particularly deep generative models, open the door to producing perceptually convincing images and videos at low cost.
This paper provides a comprehensive review of the current media tampering detection approaches, and discusses the challenges and trends in this field for future research.
arXiv Detail & Related papers (2022-12-12T02:54:08Z) - Show Me What I Like: Detecting User-Specific Video Highlights Using Content-Based Multi-Head Attention [52.84233165201391]
We propose a method to detect individualized highlights for users on given target videos based on their preferred highlight clips marked on previous videos they have watched.
Our method explicitly leverages the contents of both the preferred clips and the target videos using pre-trained features for the objects and the human activities.
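As a hedged sketch of how content-based multi-head attention might combine preferred-clip features with target-video features (the module names, dimensions, and scoring head below are assumptions, not the authors' architecture), consider:
```python
import torch
import torch.nn as nn


class UserHighlightScorer(nn.Module):
    """Hypothetical sketch: clips of the target video attend over a user's
    previously marked highlight clips; attended features drive per-clip scores."""

    def __init__(self, feat_dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.score = nn.Sequential(nn.Linear(2 * feat_dim, 256),
                                   nn.ReLU(),
                                   nn.Linear(256, 1))

    def forward(self, target_clips: torch.Tensor,
                preferred_clips: torch.Tensor) -> torch.Tensor:
        # target_clips:    (B, T, D) pre-extracted clip features of the target video
        # preferred_clips: (B, P, D) features of clips the user marked as highlights
        attended, _ = self.attn(query=target_clips,
                                key=preferred_clips,
                                value=preferred_clips)
        fused = torch.cat([target_clips, attended], dim=-1)
        return self.score(fused).squeeze(-1)     # (B, T) per-clip highlight scores


scorer = UserHighlightScorer()
scores = scorer(torch.randn(2, 40, 512), torch.randn(2, 6, 512))
```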
arXiv Detail & Related papers (2022-07-18T02:32:48Z) - Leveraging Real Talking Faces via Self-Supervision for Robust Forgery Detection [112.96004727646115]
We develop a method to detect face-manipulated videos using real talking faces.
We show that our method achieves state-of-the-art performance on cross-manipulation generalisation and robustness experiments.
Our results suggest that leveraging natural and unlabelled videos is a promising direction for the development of more robust face forgery detectors.
arXiv Detail & Related papers (2022-01-18T17:14:54Z) - What's wrong with this video? Comparing Explainers for Deepfake Detection [13.089182408360221]
Deepfakes are computer-manipulated videos in which the face of one individual has been replaced with that of another.
In this work we develop, extend and compare white-box, black-box and model-specific techniques for explaining the labelling of real and fake videos.
In particular, we adapt SHAP, GradCAM and self-attention models to the task of explaining the predictions of state-of-the-art detectors based on EfficientNet.
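For orientation, the sketch below is a minimal hand-rolled Grad-CAM over a torchvision EfficientNet-B0 with a hypothetical binary real/fake head; it assumes a recent PyTorch/torchvision and is not the authors' exact setup or weights.
```python
import torch
import torch.nn.functional as F
from torchvision.models import efficientnet_b0

# Hypothetical binary real/fake head on an EfficientNet-B0 backbone
# (torchvision >= 0.13 API assumed; untrained weights for illustration only).
model = efficientnet_b0(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, 2)
model.eval()

captured = {}


def keep_feature_maps(module, inputs, output):
    # Keep the last convolutional feature maps and retain their gradient.
    output.retain_grad()
    captured["maps"] = output


model.features[-1].register_forward_hook(keep_feature_maps)


def grad_cam(frame: torch.Tensor, class_idx: int = 1) -> torch.Tensor:
    """Return an (H, W) heatmap for class_idx (e.g. 1 = 'fake') on one frame."""
    logits = model(frame)                     # frame: (1, 3, H, W), normalized
    model.zero_grad()
    logits[0, class_idx].backward()
    maps, grads = captured["maps"], captured["maps"].grad    # (1, C, h, w)
    weights = grads.mean(dim=(2, 3), keepdim=True)           # channel importance
    cam = F.relu((weights * maps).sum(dim=1, keepdim=True))  # (1, 1, h, w)
    cam = F.interpolate(cam, size=frame.shape[-2:],
                        mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()


heatmap = grad_cam(torch.randn(1, 3, 224, 224))  # regions driving the "fake" score
```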
arXiv Detail & Related papers (2021-05-12T18:44:39Z) - Identity-Driven DeepFake Detection [91.0504621868628]
Identity-Driven DeepFake Detection takes as input the suspect image/video as well as the target identity information.
We output a decision on whether the identity in the suspect image/video is the same as the target identity.
We present a simple identity-based detection algorithm called the OuterFace, which may serve as a baseline for further research.
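As a hypothetical baseline in the same spirit (not the OuterFace algorithm), the snippet below compares per-frame face embeddings of a suspect video against a reference embedding of the target identity using cosine similarity; the embedding model and threshold are assumptions.
```python
import numpy as np


def identity_match(suspect_embeddings: np.ndarray,
                   reference_embedding: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Compare (N, D) per-frame face embeddings of the suspect video against a
    (D,) reference embedding of the target identity; a low average cosine
    similarity suggests the face was swapped. Embeddings are assumed to come
    from any off-the-shelf face recognition model."""
    suspect = suspect_embeddings / np.linalg.norm(
        suspect_embeddings, axis=1, keepdims=True)
    reference = reference_embedding / np.linalg.norm(reference_embedding)
    similarity = float((suspect @ reference).mean())
    return similarity >= threshold   # False -> likely not the claimed identity
```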
arXiv Detail & Related papers (2020-12-07T18:59:08Z) - Learning Person Re-identification Models from Videos with Weak Supervision [53.53606308822736]
We introduce the problem of learning person re-identification models from videos with weak supervision.
We propose a multiple instance attention learning framework for person re-identification using such video-level labels.
The attention weights are obtained based on all person images instead of person tracklets in a video, making our learned model less affected by noisy annotations.
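A minimal sketch of attention-based multiple instance learning with only video-level labels is shown below; the layer sizes, pooling, and classifier are illustrative assumptions, not the authors' framework.
```python
import torch
import torch.nn as nn


class MILAttentionPool(nn.Module):
    """Hypothetical sketch: per-image person features from one video are softly
    weighted and pooled into a bag-level representation that is trained with
    only a video-level identity label."""

    def __init__(self, feat_dim: int = 2048, num_ids: int = 100):
        super().__init__()
        self.attention = nn.Sequential(nn.Linear(feat_dim, 256),
                                       nn.Tanh(),
                                       nn.Linear(256, 1))
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, person_feats: torch.Tensor):
        # person_feats: (N, D) features of all person images detected in one video
        weights = torch.softmax(self.attention(person_feats), dim=0)  # (N, 1)
        bag = (weights * person_feats).sum(dim=0)                      # (D,)
        return self.classifier(bag), weights.squeeze(-1)


model = MILAttentionPool()
logits, attn = model(torch.randn(30, 2048))   # 30 person images in the video
```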
arXiv Detail & Related papers (2020-07-21T07:23:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.