Do Content Management Systems Impact the Security of Free Content
Websites? A Correlation Analysis
- URL: http://arxiv.org/abs/2210.12083v1
- Date: Fri, 21 Oct 2022 16:19:09 GMT
- Title: Do Content Management Systems Impact the Security of Free Content
Websites? A Correlation Analysis
- Authors: Mohammed Alaqdhi and Abdulrahman Alabduljabbar and Kyle Thomas and
Saeed Salem and DaeHun Nyang and David Mohaisen
- Abstract summary: Assembling more than 1,500 websites with free and premium content, we identify their content management system (CMS) and malicious attributes.
We find that, despite the significant number of custom code websites, the use of CMSs is pervasive.
Even a small number of unpatched vulnerabilities in popular CMSs could be a potential cause for significant maliciousness.
- Score: 9.700241283477343
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper investigates the potential causes of the vulnerabilities of free
content websites to address risks and maliciousness. Assembling more than 1,500
websites with free and premium content, we identify their content management
system (CMS) and malicious attributes. We use frequency analysis at both the
aggregate and per category of content (books, games, movies, music, and
software), utilizing the unpatched vulnerabilities, total vulnerabilities,
malicious count, and percentiles to uncover trends and affinities of usage and
maliciousness of CMSs and their contribution to those websites. Moreover, we
find that, despite the significant number of custom code websites, the use of
CMSs is pervasive, with varying trends across types and categories. Finally,
we find that even a small number of unpatched vulnerabilities in popular
CMSs could be a potential cause for significant maliciousness.
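As a rough illustration of the frequency and correlation analysis described above, the following Python sketch (not the authors' code; the input file and column names such as cms, category, unpatched_vulns, total_vulns, and malicious_count are assumed for illustration) tabulates CMS usage overall and per content category, then correlates unpatched vulnerabilities with maliciousness counts across CMSs.

```python
# Minimal sketch of a CMS frequency/correlation analysis.
# Assumed schema (hypothetical): one row per website with columns
# cms, category, unpatched_vulns, total_vulns, malicious_count.
import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Hypothetical input file; the paper's dataset and schema may differ.
sites = pd.read_csv("free_content_sites.csv")

# Frequency analysis: CMS usage share overall and per content category.
cms_overall = sites["cms"].value_counts(normalize=True)
cms_by_category = (
    sites.groupby("category")["cms"]
    .value_counts(normalize=True)
    .rename("share")
    .reset_index()
)

# Aggregate vulnerability and maliciousness indicators per CMS.
per_cms = sites.groupby("cms").agg(
    sites_using=("total_vulns", "size"),
    unpatched=("unpatched_vulns", "sum"),
    total_vulns=("total_vulns", "sum"),
    malicious=("malicious_count", "sum"),
)

# Correlate unpatched vulnerabilities with maliciousness across CMSs.
r, p = pearsonr(per_cms["unpatched"], per_cms["malicious"])
rho, p_s = spearmanr(per_cms["unpatched"], per_cms["malicious"])

print(cms_overall.head())
print(per_cms.sort_values("malicious", ascending=False).head())
print(f"Pearson r={r:.2f} (p={p:.3f}), Spearman rho={rho:.2f} (p={p_s:.3f})")
```

Pearson captures linear association, while Spearman is rank-based and more robust to the heavy-tailed vulnerability and maliciousness counts one would expect in such data.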
Related papers
- Securing the Web: Analysis of HTTP Security Headers in Popular Global Websites [2.7039386580759666]
Over half of the websites examined (55.66%) received a dismal security grade of 'F'.
These low scores expose multiple issues such as weak implementation of Content Security Policies (CSP), neglect of HSTS guidelines, and insufficient application of Subresource Integrity (SRI).
arXiv Detail & Related papers (2024-10-19T01:03:59Z)
- Accessibility Issues in Ad-Driven Web Applications [3.9531869396416344]
Third-party advertisements (ads) are a vital revenue source for free web services, but they introduce significant accessibility challenges.
We conduct the first large-scale investigation of 430K website elements, including nearly 100K ad elements, to understand the accessibility of ads on websites.
arXiv Detail & Related papers (2024-09-27T09:50:06Z)
- "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by integrating external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z)
- On Security Weaknesses and Vulnerabilities in Deep Learning Systems [32.14068820256729]
We specifically look into deep learning (DL) frameworks and perform the first systematic study of vulnerabilities in DL systems.
We propose a two-stream data analysis framework to explore vulnerability patterns from various databases.
We conducted a large-scale empirical study of 3,049 DL vulnerabilities to better understand the patterns of vulnerability and the challenges in fixing them.
arXiv Detail & Related papers (2024-06-12T23:04:13Z)
- ToxVidLM: A Multimodal Framework for Toxicity Detection in Code-Mixed Videos [46.148023197749396]
ToxVidLM incorporates three key modules: the multimodal module, the Cross-Modal Synchronization module, and the Multitask module.
This paper introduces a benchmark dataset consisting of 931 videos with 4021 code-mixed Hindi-English utterances collected from YouTube.
arXiv Detail & Related papers (2024-05-31T05:40:56Z) - HOD: A Benchmark Dataset for Harmful Object Detection [3.755082744150185]
We present a new benchmark dataset for harmful object detection.
Our proposed dataset contains more than 10,000 images across 6 categories that might be harmful.
We have conducted extensive experiments to evaluate the effectiveness of our proposed dataset.
arXiv Detail & Related papers (2023-10-08T15:00:38Z) - User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z) - An Image is Worth a Thousand Toxic Words: A Metamorphic Testing
Framework for Content Moderation Software [64.367830425115]
Social media platforms are being increasingly misused to spread toxic content, including hate speech, malicious advertising, and pornography.
Despite tremendous efforts in developing and deploying content moderation methods, malicious users can evade moderation by embedding texts into images.
We propose a metamorphic testing framework for content moderation software.
arXiv Detail & Related papers (2023-08-18T20:33:06Z) - Measuring and Modeling the Free Content Web [13.982229874909978]
We investigate the similarities and differences between free content and premium websites.
For risk analysis, we consider and examine the maliciousness of these websites at the website- and component-level.
arXiv Detail & Related papers (2023-04-26T04:17:43Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.