Do Content Management Systems Impact the Security of Free Content
Websites? A Correlation Analysis
- URL: http://arxiv.org/abs/2210.12083v1
- Date: Fri, 21 Oct 2022 16:19:09 GMT
- Title: Do Content Management Systems Impact the Security of Free Content
Websites? A Correlation Analysis
- Authors: Mohammed Alaqdhi and Abdulrahman Alabduljabbar and Kyle Thomas and
Saeed Salem and DaeHun Nyang and David Mohaisen
- Abstract summary: Assembling more than 1,500 websites with free and premium content, we identify their content management system (CMS) and malicious attributes.
We find that, despite the significant number of custom code websites, the use of CMSs is pervasive.
Even a small number of unpatched vulnerabilities in popular CMSs could be a potential cause for significant maliciousness.
- Score: 9.700241283477343
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper investigates the potential causes of the vulnerabilities of free
content websites to address risks and maliciousness. Assembling more than 1,500
websites with free and premium content, we identify their content management
system (CMS) and malicious attributes. We use frequency analysis at both the
aggregate and per category of content (books, games, movies, music, and
software), utilizing the unpatched vulnerabilities, total vulnerabilities,
malicious count, and percentiles to uncover trends and affinities of usage and
maliciousness of CMSs and their contribution to those websites. Moreover, we
find that, despite the significant number of custom code websites, the use of
CMSs is pervasive, with varying trends across types and categories. Finally,
we find that even a small number of unpatched vulnerabilities in popular
CMSs could be a potential cause for significant maliciousness.
Related papers
- Illusions of Relevance: Using Content Injection Attacks to Deceive Retrievers, Rerankers, and LLM Judges [52.96987928118327]
We find that embedding models for retrieval, rerankers, and large language model (LLM) relevance judges are vulnerable to content injection attacks.
We identify two primary threats: (1) inserting unrelated or harmful content within passages that still appear deceptively "relevant", and (2) inserting entire queries or key query terms into passages to boost their perceived relevance.
Our study systematically examines the factors that influence an attack's success, such as the placement of injected content and the balance between relevant and non-relevant material.
arXiv Detail & Related papers (2025-01-30T18:02:15Z) - StopHC: A Harmful Content Detection and Mitigation Architecture for Social Media Platforms [0.46289929100614996]
StopHC is a harmful content detection and mitigation architecture for social media platforms.
Our solution contains two modules, one that employs deep neural network architecture for harmful content detection, and one that uses a network immunization algorithm to block toxic nodes and stop the spread of harmful content.
arXiv Detail & Related papers (2024-11-09T10:23:22Z) - Securing the Web: Analysis of HTTP Security Headers in Popular Global Websites [2.7039386580759666]
Over half of the websites examined (55.66%) received a dismal security grade of 'F'.
These low scores expose multiple issues such as weak implementation of Content Security Policies (CSP), neglect of HSTS guidelines, and insufficient application of Subresource Integrity (SRI).
arXiv Detail & Related papers (2024-10-19T01:03:59Z) - "Glue pizza and eat rocks" -- Exploiting Vulnerabilities in Retrieval-Augmented Generative Models [74.05368440735468]
Retrieval-Augmented Generative (RAG) models enhance Large Language Models (LLMs) by retrieving from external knowledge bases.
In this paper, we demonstrate a security threat where adversaries can exploit the openness of these knowledge bases.
arXiv Detail & Related papers (2024-06-26T05:36:23Z) - On Security Weaknesses and Vulnerabilities in Deep Learning Systems [32.14068820256729]
We specifically look into deep learning (DL) frameworks and perform the first systematic study of vulnerabilities in DL systems.
We propose a two-stream data analysis framework to explore vulnerability patterns from various databases.
We conducted a large-scale empirical study of 3,049 DL vulnerabilities to better understand the patterns of vulnerability and the challenges in fixing them.
arXiv Detail & Related papers (2024-06-12T23:04:13Z) - ToxVidLM: A Multimodal Framework for Toxicity Detection in Code-Mixed Videos [46.148023197749396]
ToxVidLM incorporates three key modules - the multimodal module, Cross-Modal Synchronization module, and Multitask module.
This paper introduces a benchmark dataset consisting of 931 videos with 4021 code-mixed Hindi-English utterances collected from YouTube.
arXiv Detail & Related papers (2024-05-31T05:40:56Z) - HOD: A Benchmark Dataset for Harmful Object Detection [3.755082744150185]
We present a new benchmark dataset for harmful object detection.
Our proposed dataset contains more than 10,000 images across 6 categories that might be harmful.
We have conducted extensive experiments to evaluate the effectiveness of our proposed dataset.
arXiv Detail & Related papers (2023-10-08T15:00:38Z) - User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z) - An Image is Worth a Thousand Toxic Words: A Metamorphic Testing
Framework for Content Moderation Software [64.367830425115]
Social media platforms are being increasingly misused to spread toxic content, including hate speech, malicious advertising, and pornography.
Despite tremendous efforts in developing and deploying content moderation methods, malicious users can evade moderation by embedding texts into images.
We propose a metamorphic testing framework for content moderation software.
arXiv Detail & Related papers (2023-08-18T20:33:06Z) - Measuring and Modeling the Free Content Web [13.982229874909978]
We investigate the similarities and differences between free content and premium websites.
For risk analysis, we consider and examine the maliciousness of these websites at the website- and component-level.
arXiv Detail & Related papers (2023-04-26T04:17:43Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.