Threat Detection in Social Media Networks Using Machine Learning Based Network Analysis
- URL: http://arxiv.org/abs/2601.02581v1
- Date: Mon, 05 Jan 2026 22:14:41 GMT
- Title: Threat Detection in Social Media Networks Using Machine Learning Based Network Analysis
- Authors: Aditi Sanjay Agrawal
- Abstract summary: This paper introduces a machine learning-based threat detection framework for classifying malicious behavior in the social media network environment. An artificial neural network (ANN) model is then created to capture the intricate, non-linear patterns of malicious actions. The proposed model is evaluated on conventional performance metrics, such as accuracy, precision, recall, F1-score, and ROC-AUC, and shows strong detection performance and robustness.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: The accelerated growth of social media platforms has created intricate security challenges in cyberspace, as these sites increasingly fall victim to criminal activity, including intrusion attempts, abnormal traffic patterns, and organized attacks. Conventional rule-based security systems are often neither scalable nor dynamic enough to meet such threats. This paper introduces a machine learning-based threat detection framework that classifies malicious behavior in the social media network environment from the characteristics of network traffic. Using a rich network traffic dataset, extensive preprocessing and exploratory data analysis are conducted to address data imbalance, feature inconsistency, and noise. An artificial neural network (ANN) model is then built to capture the intricate, non-linear patterns of malicious actions. The proposed model is evaluated on conventional performance metrics, such as accuracy, precision, recall, F1-score, and ROC-AUC, and demonstrates strong detection performance and robustness. The findings suggest that neural network-based solutions can effectively identify latent threat dynamics within large-scale social media networks and can complement existing intrusion detection systems to support more proactive cybersecurity operations.
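The pipeline the abstract describes — preprocess an imbalanced traffic dataset, train an ANN classifier, and score it on accuracy, precision, recall, F1, and ROC-AUC — can be sketched as follows. This is not the authors' implementation; the synthetic dataset, imbalance ratio, and network size here are illustrative assumptions.

```python
# Minimal sketch of an ANN-based threat classifier over network-traffic
# features, evaluated with the metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Synthetic stand-in for a traffic dataset: 20 numeric features,
# roughly 10% malicious flows to mimic class imbalance.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Scale features, then fit a small feed-forward ANN.
scaler = StandardScaler().fit(X_tr)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300,
                    random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

# Evaluate on the held-out split with the paper's metric suite.
y_pred = clf.predict(scaler.transform(X_te))
y_prob = clf.predict_proba(scaler.transform(X_te))[:, 1]
metrics = {
    "accuracy": accuracy_score(y_te, y_pred),
    "precision": precision_score(y_te, y_pred),
    "recall": recall_score(y_te, y_pred),
    "f1": f1_score(y_te, y_pred),
    "roc_auc": roc_auc_score(y_te, y_prob),
}
print(metrics)
```

With imbalanced classes, recall and ROC-AUC are more informative than raw accuracy, which is why the abstract's metric suite reports them alongside it.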
Related papers
- Neurosymbolic Learning for Advanced Persistent Threat Detection under Extreme Class Imbalance [29.991707658663188]
This paper proposes a neurosymbolic architecture that integrates an optimized BERT model with logic tensor networks (LTN) for explainable APT detection in wireless IoT networks. Results show that neurosymbolic learning enables high-performance, interpretable, and operationally viable APT detection for IoT network monitoring architectures.
arXiv Detail & Related papers (2026-02-28T04:25:50Z) - Adaptive Intrusion Detection System Leveraging Dynamic Neural Models with Adversarial Learning for 5G/6G Networks [2.062593640149623]
This paper presents an advanced IDS framework that leverages adversarial training and dynamic neural networks in 5G/6G networks. Unlike conventional models, which require costly retraining to update knowledge, the proposed framework integrates incremental learning algorithms, reducing the need for frequent retraining.
arXiv Detail & Related papers (2025-12-11T13:40:37Z) - Rethinking Spatio-Temporal Anomaly Detection: A Vision for Causality-Driven Cybersecurity [22.491097360752903]
We advocate for a causal learning perspective to advance anomaly detection in spatially distributed infrastructures. We identify and formalize three key directions: causal graph profiling, multi-view fusion, and continual causal graph learning. Our objective is to lay a new research trajectory toward scalable, adaptive, explainable, and spatially grounded anomaly detection systems.
arXiv Detail & Related papers (2025-07-10T21:19:28Z) - Expert-in-the-Loop Systems with Cross-Domain and In-Domain Few-Shot Learning for Software Vulnerability Detection [38.083049237330826]
This study explores the use of Large Language Models (LLMs) in software vulnerability assessment by simulating the identification of Python code with known Common Weakness Enumerations (CWEs). Our results indicate that while zero-shot prompting performs poorly, few-shot prompting significantly enhances classification performance. Challenges such as model reliability, interpretability, and adversarial robustness remain critical areas for future research.
arXiv Detail & Related papers (2025-06-11T18:43:51Z) - Adaptive Cybersecurity: Dynamically Retrainable Firewalls for Real-Time Network Protection [4.169915659794567]
This research introduces "Dynamically Retrainable Firewalls". Unlike traditional firewalls that rely on static rules to inspect traffic, these advanced systems leverage machine learning algorithms to analyze network traffic patterns dynamically and identify threats. It also discusses strategies to improve performance, reduce latency, optimize resource utilization, and address integration issues with present-day concepts such as Zero Trust and mixed environments.
arXiv Detail & Related papers (2025-01-14T00:04:35Z) - Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z) - Securing Distributed Network Digital Twin Systems Against Model Poisoning Attacks [19.697853431302768]
Digital twins (DTs) embody real-time monitoring, predictive, and enhanced decision-making capabilities. This study investigates the security challenges in distributed network DT systems, which potentially undermine the reliability of subsequent network applications.
arXiv Detail & Related papers (2024-07-02T03:32:09Z) - A Synergistic Approach In Network Intrusion Detection By Neurosymbolic AI [6.315966022962632]
This paper explores the potential of incorporating Neurosymbolic Artificial Intelligence (NSAI) into Network Intrusion Detection Systems (NIDS).
NSAI combines deep learning's data-driven strengths with symbolic AI's logical reasoning to tackle the dynamic challenges in cybersecurity.
The inclusion of NSAI in NIDS marks potential advancements in both the detection and interpretation of intricate network threats.
arXiv Detail & Related papers (2024-06-03T02:24:01Z) - Utilizing Deep Learning for Enhancing Network Resilience in Finance [0.0]
This paper uses deep learning for advanced threat detection to improve protective measures in the financial industry.
The detection technology mainly uses statistical machine learning methods.
arXiv Detail & Related papers (2024-02-15T09:35:57Z) - Efficient Network Representation for GNN-based Intrusion Detection [2.321323878201932]
The last decades have seen a growth in the number of cyber-attacks with severe economic and privacy damages.
We propose a novel network representation as a graph of flows that aims to provide relevant topological information for the intrusion detection task.
We present a Graph Neural Network (GNN) based framework responsible for exploiting the proposed graph structure.
arXiv Detail & Related papers (2023-09-11T16:10:12Z) - Learning to Detect: A Data-driven Approach for Network Intrusion Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow learning and deep learning models that use the single learning model approach for intrusion detection, we adopt a hierarchy strategy.
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
arXiv Detail & Related papers (2021-08-18T21:19:26Z) - Security of Distributed Machine Learning: A Game-Theoretic Approach to Design Secure DSVM [31.480769801354413]
This work aims to develop secure distributed algorithms to protect the learning from data poisoning and network attacks.
We establish a game-theoretic framework to capture the conflicting goals of a learner who uses distributed support vector machines (SVMs) and an attacker who is capable of modifying training data and labels.
The numerical results show that distributed SVM is prone to fail in different types of attacks, and their impact has a strong dependence on the network structure and attack capabilities.
arXiv Detail & Related papers (2020-03-08T18:54:17Z) - Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks [26.09388179354751]
We present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors.
We show that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector.
This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than in other problem domains.
arXiv Detail & Related papers (2020-02-07T12:41:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.