Detection and Prevention of Smishing Attacks
- URL: http://arxiv.org/abs/2501.00260v1
- Date: Tue, 31 Dec 2024 04:07:12 GMT
- Title: Detection and Prevention of Smishing Attacks
- Authors: Diksha Goel
- Abstract summary: This work presents a smishing detection model using a content-based analysis approach.
To address the challenge posed by slang, abbreviations, and short forms in text communication, the model normalizes these into standard forms.
Experimental results demonstrate the model's effectiveness, achieving classification accuracies of 97.14% for smishing and 96.12% for ham messages.
- Abstract: Phishing is an online identity theft technique where attackers steal users' personal information, leading to financial losses for individuals and organizations. With the increasing adoption of smartphones, which provide functionalities similar to desktop computers, attackers are targeting mobile users. Smishing, a phishing attack carried out through Short Messaging Service (SMS), has become prevalent due to the widespread use of SMS-based services. It involves deceptive messages designed to extract sensitive information. Despite the growing number of smishing attacks, limited research focuses on detecting these threats. This work presents a smishing detection model using a content-based analysis approach. To address the challenge posed by slang, abbreviations, and short forms in text communication, the model normalizes these into standard forms. A machine learning classifier is employed to classify messages as smishing or ham. Experimental results demonstrate the model's effectiveness, achieving classification accuracies of 97.14% for smishing and 96.12% for ham messages, with an overall accuracy of 96.20%.
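The pipeline the abstract describes, normalize slang and abbreviations into standard forms, then classify the message as smishing or ham, can be sketched as below. The slang table, cue words, and scoring rule are illustrative stand-ins only; the paper uses its own normalization resources and a trained machine-learning classifier, not this keyword heuristic.

```python
# Minimal sketch of a content-based smishing pipeline (assumed, not the
# authors' implementation): expand SMS shorthand, then score the
# normalized tokens against smishing cue words.

SLANG = {"u": "you", "r": "are", "pls": "please", "acct": "account",
         "msg": "message", "txt": "text"}  # illustrative slang table

SMISHING_CUES = {"verify": 2, "account": 2, "click": 2, "urgent": 3,
                 "prize": 3, "password": 3, "suspended": 3, "link": 1}

def normalize(message: str) -> list[str]:
    """Lowercase, tokenize, strip punctuation, and expand slang."""
    tokens = (t.strip(".,!?:") for t in message.lower().split())
    return [SLANG.get(t, t) for t in tokens]

def classify(message: str, threshold: int = 4) -> str:
    """Sum cue-word weights over normalized tokens; 'ham' below threshold."""
    score = sum(SMISHING_CUES.get(t, 0) for t in normalize(message))
    return "smishing" if score >= threshold else "ham"
```

A real system would replace the cue-word scoring with features fed to a trained classifier, but the normalization-first structure is the same.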
Related papers
- Machine Learning Driven Smishing Detection Framework for Mobile Security [0.46873264197900916]
Smishing is a sophisticated variant of phishing conducted via SMS.
Traditional detection methods struggle with the informal and evolving nature of SMS language.
This paper presents an enhanced content-based smishing detection framework.
arXiv Detail & Related papers (2024-12-09T08:20:20Z) - MASKDROID: Robust Android Malware Detection with Masked Graph Representations [56.09270390096083]
We propose MASKDROID, a powerful detector with a strong discriminative ability to identify malware.
We introduce a masking mechanism into the Graph Neural Network based framework, forcing MASKDROID to recover the whole input graph.
This strategy enables the model to understand the malicious semantics and learn more stable representations, enhancing its robustness against adversarial attacks.
arXiv Detail & Related papers (2024-09-29T07:22:47Z) - Evaluating the Efficacy of Large Language Models in Identifying Phishing Attempts [2.6012482282204004]
Phishing, a prevalent cybercrime tactic for decades, remains a significant threat in today's digital world.
This paper aims to analyze the effectiveness of 15 Large Language Models (LLMs) in detecting phishing attempts.
arXiv Detail & Related papers (2024-04-23T19:55:18Z) - A Quantitative Study of SMS Phishing Detection [0.0]
We conducted an online survey on smishing detection with 187 participants.
We presented them with 16 SMS screenshots and evaluated how different factors affect their decision making process in smishing detection.
We found that participants had more difficulty identifying real messages than fake ones, achieving 67.1% accuracy on fake messages but only 43.6% on real messages.
arXiv Detail & Related papers (2023-11-12T17:56:42Z) - Can Sensitive Information Be Deleted From LLMs? Objectives for Defending Against Extraction Attacks [73.53327403684676]
We propose an attack-and-defense framework for studying the task of deleting sensitive information directly from model weights.
We study direct edits to model weights because this approach should guarantee that particular deleted information is never extracted by future prompt attacks.
We show that even state-of-the-art model editing methods such as ROME struggle to truly delete factual information from models like GPT-J, as our whitebox and blackbox attacks can recover "deleted" information from an edited model 38% of the time.
arXiv Detail & Related papers (2023-09-29T17:12:43Z) - Commercial Anti-Smishing Tools and Their Comparative Effectiveness Against Modern Threats [0.0]
We developed a test bed for measuring the effectiveness of popular anti-smishing tools against fresh smishing attacks.
Most anti-phishing apps and bulk messaging services did not filter smishing messages beyond the carriers' own blocking.
While carriers did not block any benign messages, they blocked only 25-35% of smishing messages.
arXiv Detail & Related papers (2023-09-14T06:08:22Z) - Verifying the Robustness of Automatic Credibility Assessment [50.55687778699995]
We show that meaning-preserving changes in input text can mislead the models.
We also introduce BODEGA: a benchmark for testing both victim models and attack methods on misinformation detection tasks.
Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions.
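The meaning-preserving changes this benchmark studies can be illustrated with a toy character-level perturbation: swapping letters for visually identical Unicode look-alikes leaves the text readable to a human while changing what a model's tokenizer sees. The homoglyph table below is an assumption for illustration, not BODEGA's actual attack set.

```python
# Toy meaning-preserving perturbation (illustrative, not the benchmark's
# method): replace Latin letters with Cyrillic homoglyphs, up to a budget.

HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Latin -> Cyrillic

def perturb(text: str, budget: int = 3) -> str:
    """Swap up to `budget` characters for homoglyphs, left to right."""
    out, used = [], 0
    for ch in text:
        if used < budget and ch in HOMOGLYPHS:
            out.append(HOMOGLYPHS[ch])
            used += 1
        else:
            out.append(ch)
    return "".join(out)
```

The perturbed string has the same length and appearance but different code points, which is exactly why brittle text classifiers can be misled.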
arXiv Detail & Related papers (2023-03-14T16:11:47Z) - Email Summarization to Assist Users in Phishing Identification [1.433758865948252]
Cyber-phishing attacks are more precise, targeted, and tailored by training data to activate only in the presence of specific information or cues.
This work leverages transformer-based machine learning to analyze prospective psychological triggers.
We then amalgamate this information and present it to the user to allow them to (i) easily decide whether the email is "phishy" and (ii) self-learn advanced malicious patterns.
arXiv Detail & Related papers (2022-03-24T23:03:46Z) - Deep convolutional forest: a dynamic deep ensemble approach for spam detection in text [219.15486286590016]
This paper introduces a dynamic deep ensemble model for spam detection that adjusts its complexity and extracts features automatically.
As a result, the model achieved high precision, recall, and F1-score, with an accuracy of 98.38%.
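The "dynamic" aspect, an ensemble that adjusts its own complexity, can be sketched as a control loop that keeps stacking layers while held-out accuracy improves. Here `train_layer` and `evaluate` are hypothetical stand-ins for the paper's convolutional-forest layers and its spam validation set; only the stopping logic is shown.

```python
# Sketch of a dynamic-depth ensemble loop (assumed structure, not the
# paper's implementation): grow the cascade greedily, stop when the
# validation-accuracy gain falls below a tolerance.

def build_cascade(train_layer, evaluate, max_depth=10, tol=1e-4):
    """Add layers while accuracy improves by at least `tol`."""
    layers, best = [], 0.0
    for depth in range(max_depth):
        layer = train_layer(depth, layers)      # fit next layer on current stack
        acc = evaluate(layers + [layer])        # held-out accuracy with it added
        if acc - best < tol:                    # no meaningful gain: stop growing
            break
        layers.append(layer)
        best = acc
    return layers, best
```

With a validation curve that saturates, the loop stops adding layers exactly where the gain disappears, which is how such a model "adjusts its complexity" to the data.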
arXiv Detail & Related papers (2021-10-10T17:19:37Z) - Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes [81.85509264573948]
In the era of deep learning, a user often leverages a third-party machine learning tool to train a deep neural network (DNN) classifier.
In an information embedding attack, an attacker is the provider of a malicious third-party machine learning tool.
In this work, we aim to design information embedding attacks that are verifiable and robust against popular post-processing methods.
arXiv Detail & Related papers (2020-10-26T17:42:42Z) - Backdoor Attack against Speaker Verification [86.43395230456339]
We show that it is possible to inject the hidden backdoor for infecting speaker verification models by poisoning the training data.
We also demonstrate that existing backdoor attacks cannot be directly adopted in attacking speaker verification.
arXiv Detail & Related papers (2020-10-22T11:10:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.