Cybersecurity Misinformation Detection on Social Media: Case Studies on
Phishing Reports and Zoom's Threats
- URL: http://arxiv.org/abs/2110.12296v3
- Date: Wed, 17 Aug 2022 18:18:02 GMT
- Title: Cybersecurity Misinformation Detection on Social Media: Case Studies on
Phishing Reports and Zoom's Threats
- Authors: Mohit Singhal, Nihal Kumarswamy, Shreyasi Kinhekar, Shirin Nilizadeh
- Abstract summary: We propose novel approaches for detecting misinformation about cybersecurity and privacy threats on social media.
We developed a framework for detecting inaccurate phishing claims on Twitter.
We also proposed another framework for detecting misinformation related to Zoom's security and privacy threats on multiple platforms.
- Score: 1.2387676601792899
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prior work has extensively studied misinformation related to news, politics,
and health; however, misinformation can also concern technological topics.
While less controversial, such misinformation can severely damage companies'
reputations and revenues, and degrade users' online experiences. Recently, social media
has also increasingly served as a novel knowledge base for
extracting timely and relevant security threats, which are fed to threat
intelligence systems to improve their performance. However, with campaigns
potentially spreading false security threats, these systems become vulnerable to
poisoning attacks. In this work, we propose novel approaches for detecting
misinformation about cybersecurity and privacy threats on social media,
focusing on two topics with different types of misinformation: phishing
websites and Zoom's security & privacy threats. We developed a framework for
detecting inaccurate phishing claims on Twitter. Using this framework, we could
label about 9% of URLs and 22% of phishing reports as misinformation. We also
proposed another framework for detecting misinformation related to Zoom's
security and privacy threats on multiple platforms. Our classifiers
achieved more than 98% accuracy. Employing these classifiers on
the posts from Facebook, Instagram, Reddit, and Twitter, we found that about
18%, 3%, 4%, and 3% of posts, respectively, were misinformation. In addition, we
studied the characteristics of misinformation posts, their authors, and their
timelines, which helped us identify campaigns.
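At their core, the detection pipelines described above are supervised text classifiers over social media posts. As a minimal, self-contained illustration (not the authors' implementation: the tokenizer, the Naive Bayes model, and the toy posts and labels below are all assumptions), such a post classifier could be sketched as:

```python
# Hypothetical sketch of a misinformation-post classifier.
# Tokenisation, model choice, and training data are illustrative assumptions;
# the paper's actual features and classifiers are not reproduced here.
import math
import re
from collections import Counter

def tokenize(text):
    """Lowercase and split into word-like tokens."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    def __init__(self, alpha=1.0):
        self.alpha = alpha            # Laplace smoothing constant
        self.word_counts = {}         # label -> Counter of token counts
        self.label_totals = Counter() # label -> number of training posts
        self.vocab = set()

    def fit(self, texts, labels):
        for text, label in zip(texts, labels):
            counts = self.word_counts.setdefault(label, Counter())
            for tok in tokenize(text):
                counts[tok] += 1
                self.vocab.add(tok)
            self.label_totals[label] += 1
        return self

    def predict(self, text):
        n = sum(self.label_totals.values())
        best_label, best_score = None, -math.inf
        for label, counts in self.word_counts.items():
            # log prior + smoothed log likelihood of each query token
            score = math.log(self.label_totals[label] / n)
            denom = sum(counts.values()) + self.alpha * len(self.vocab)
            for tok in tokenize(text):
                score += math.log((counts[tok] + self.alpha) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Toy training posts (entirely made up for illustration).
posts = [
    "zoom secretly records and sells your private meetings",
    "fake claim that zoom shares data with no evidence",
    "zoom published an official security update today",
    "verified advisory confirms the patch fixes the issue",
]
labels = ["misinfo", "misinfo", "legit", "legit"]
clf = NaiveBayes().fit(posts, labels)
```

A real system would of course need far richer features (URLs, author metadata, timelines) and labeled data at scale, but the structure, fit on labeled posts, then score unseen posts per platform, is the same.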
Related papers
- Privacy Aware Memory Forensics [3.382960674045592]
Recent surveys indicate that 60% of data breaches are primarily caused by malicious insider threats.
In this research, we present a novel solution to detect data leakages by insiders in an organization.
Our approach captures the RAM of the insider's device and analyses it for sensitive information leaks from the host system.
arXiv Detail & Related papers (2024-06-13T11:18:49Z)
- Specious Sites: Tracking the Spread and Sway of Spurious News Stories at Scale [6.917588580148212]
We identify 52,036 narratives on 1,334 unreliable news websites.
We show how our system can be utilized to detect new narratives originating from unreliable news websites.
arXiv Detail & Related papers (2023-08-03T22:42:30Z)
- Fight Fire with Fire: Hacktivists' Take on Social Media Misinformation [6.421670116083633]
We interviewed 22 prominent hacktivists to learn their take on the increased proliferation of misinformation on social media.
None of them welcomed the nefarious appropriation of trolling and memes for the purpose of political (counter)argumentation and dissemination of propaganda.
We discuss the implications of these findings relative to the emergent recasting of hacktivism in defense of a constructive and factual social media discourse.
arXiv Detail & Related papers (2023-02-15T17:20:02Z)
- Machine Learning-based Automatic Annotation and Detection of COVID-19 Fake News [8.020736472947581]
COVID-19 impacted every part of the world, and misinformation about the outbreak traveled faster than the virus itself.
Existing work neglects the presence of bots that act as a catalyst in the spread.
We propose an automated approach for labeling data using verified fact-checked statements on a Twitter dataset.
arXiv Detail & Related papers (2022-09-07T13:55:59Z)
- Incidental Data: Observation of Privacy Compromising Data on Social Media Platforms [0.0]
We show how unintentionally published data can be revealed, and we further analyze possibilities that can potentially compromise one's privacy.
We show that only two hours of manually fetching data are sufficient to unveil private personal information.
Our work shows that awareness among social media users needs to be raised.
arXiv Detail & Related papers (2022-08-18T07:49:26Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content, which may indicate dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Misinfo Belief Frames: A Case Study on Covid & Climate News [49.979419711713795]
We propose a formalism for understanding how readers perceive the reliability of news and the impact of misinformation.
We introduce the Misinfo Belief Frames (MBF) corpus, a dataset of 66k inferences over 23.5k headlines.
Our results using large-scale language modeling to predict misinformation frames show that machine-generated inferences can influence readers' trust in news headlines.
arXiv Detail & Related papers (2021-04-18T09:50:11Z)
- Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff [57.35978884015093]
We show that strong data augmentations, such as CutMix, can significantly diminish the threat of poisoning and backdoor attacks without trading off performance.
In the context of backdoors, CutMix greatly mitigates the attack while simultaneously increasing validation accuracy by 9%.
arXiv Detail & Related papers (2020-11-18T20:18:50Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.