Safeguarding Virtual Healthcare: A Novel Attacker-Centric Model for Data Security and Privacy
- URL: http://arxiv.org/abs/2412.13440v1
- Date: Wed, 18 Dec 2024 02:21:53 GMT
- Title: Safeguarding Virtual Healthcare: A Novel Attacker-Centric Model for Data Security and Privacy
- Authors: Suvineetha Herath, Haywood Gelman, John Hastings, Yong Wang
- Abstract summary: Remote healthcare delivery has introduced significant security and privacy risks to protected health information (PHI).
This study investigates the root causes of such security incidents and introduces the Attacker-Centric Approach (ACA).
ACA addresses limitations in existing threat models and regulatory frameworks by adopting a holistic attacker-focused perspective.
- Score: 3.537571223616615
- License:
- Abstract: The rapid growth of remote healthcare delivery has introduced significant security and privacy risks to protected health information (PHI). Analysis of a comprehensive healthcare security breach dataset covering 2009-2023 reveals the significant prevalence and impact of such breaches. This study investigates the root causes of such security incidents and introduces the Attacker-Centric Approach (ACA), a novel threat model tailored to protect PHI. ACA addresses limitations in existing threat models and regulatory frameworks by adopting a holistic attacker-focused perspective, examining threats from the viewpoint of cyber adversaries, their motivations, tactics, and potential attack vectors. Leveraging established risk management frameworks, ACA provides a multi-layered approach to threat identification, risk assessment, and proactive mitigation strategies. A comprehensive threat library classifies physical, third-party, external, and internal threats. ACA's iterative nature and feedback mechanisms enable continuous adaptation to emerging threats, ensuring sustained effectiveness. ACA allows healthcare providers to proactively identify and mitigate vulnerabilities, fostering trust and supporting the secure adoption of virtual care technologies.
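To make the abstract's description of ACA more concrete, the sketch below illustrates, in Python, one way its core concepts could be represented: a threat library whose entries fall into the four classes named above (physical, third-party, external, internal), attacker-side attributes (motivation, attack vector), a simple likelihood-times-impact risk score, and a feedback step for folding newly observed threats back into the library. This is a minimal sketch under stated assumptions: every class, field, and example threat below is hypothetical, the paper does not publish an implementation, and its actual risk-assessment method may differ.

```python
# Illustrative sketch only: the names below are hypothetical and mirror the
# ACA concepts described in the abstract (threat library, attacker viewpoint,
# risk assessment, iterative feedback), not the paper's own implementation.
from dataclasses import dataclass, field
from enum import Enum


class ThreatCategory(Enum):
    """The four threat classes named in the abstract's threat library."""
    PHYSICAL = "physical"
    THIRD_PARTY = "third-party"
    EXTERNAL = "external"
    INTERNAL = "internal"


@dataclass
class Threat:
    """A single threat library entry, described from the attacker's side."""
    name: str
    category: ThreatCategory
    motivation: str       # why the adversary attacks (e.g., financial gain)
    attack_vector: str    # how they get in (e.g., phishing, stolen device)
    likelihood: float     # estimated probability of an attempt, 0..1
    impact: float         # estimated harm to PHI confidentiality, 0..1

    def risk_score(self) -> float:
        # A common likelihood-times-impact heuristic; the paper's actual
        # risk-assessment method may differ.
        return self.likelihood * self.impact


@dataclass
class ThreatLibrary:
    threats: list[Threat] = field(default_factory=list)

    def prioritized(self) -> list[Threat]:
        """Rank threats so mitigation effort targets the highest risks first."""
        return sorted(self.threats, key=lambda t: t.risk_score(), reverse=True)

    def update_from_feedback(self, threat: Threat) -> None:
        """Feedback step: fold a newly observed threat back into the library,
        reflecting the iterative adaptation the abstract describes."""
        self.threats.append(threat)


if __name__ == "__main__":
    library = ThreatLibrary([
        Threat("Phishing of telehealth staff", ThreatCategory.EXTERNAL,
               "financial gain", "credential-harvesting email", 0.7, 0.8),
        Threat("Misconfigured vendor portal", ThreatCategory.THIRD_PARTY,
               "opportunistic access", "exposed API endpoint", 0.4, 0.9),
        Threat("Lost clinician laptop", ThreatCategory.PHYSICAL,
               "opportunistic theft", "unencrypted local PHI cache", 0.3, 0.6),
    ])
    for t in library.prioritized():
        print(f"{t.name}: risk={t.risk_score():.2f}")
```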
Related papers
- Safety at Scale: A Comprehensive Survey of Large Model Safety [299.801463557549]
We present a comprehensive taxonomy of safety threats to large models, including adversarial attacks, data poisoning, backdoor attacks, jailbreak and prompt injection attacks, energy-latency attacks, data and model extraction attacks, and emerging agent-specific threats.
We identify and discuss the open challenges in large model safety, emphasizing the need for comprehensive safety evaluations, scalable and effective defense mechanisms, and sustainable data practices.
arXiv Detail & Related papers (2025-02-02T05:14:22Z)
- Global Challenge for Safe and Secure LLMs Track 1 [57.08717321907755]
This paper introduces the Global Challenge for Safe and Secure Large Language Models (LLMs), a pioneering initiative organized by AI Singapore (AISG) and the CyberSG R&D Programme Office (CRPO) to foster the development of advanced defense mechanisms against automated jailbreaking attacks.
arXiv Detail & Related papers (2024-11-21T08:20:31Z)
- Enhancing Guardrails for Safe and Secure Healthcare AI [0.0]
I propose enhancements to existing guardrails frameworks, such as Nvidia NeMo Guardrails, to better suit healthcare-specific needs.
I aim to ensure the secure, reliable, and accurate use of AI in healthcare, mitigating misinformation risks and improving patient safety.
arXiv Detail & Related papers (2024-09-25T06:30:06Z)
- SoK: Security and Privacy Risks of Medical AI [14.592921477833848]
The integration of technology and healthcare has ushered in a new era where software systems, powered by artificial intelligence and machine learning, have become essential components of medical products and services.
This paper explores the security and privacy threats posed by AI/ML applications in healthcare.
arXiv Detail & Related papers (2024-09-11T16:59:58Z)
- Rethinking the Vulnerabilities of Face Recognition Systems: From a Practical Perspective [53.24281798458074]
Face Recognition Systems (FRS) have been increasingly integrated into critical applications, including surveillance and user authentication.
Recent studies have revealed vulnerabilities of FRS to adversarial attacks (e.g., adversarial patch attacks) and backdoor attacks (e.g., training data poisoning).
arXiv Detail & Related papers (2024-05-21T13:34:23Z)
- The MESA Security Model 2.0: A Dynamic Framework for Mitigating Stealth Data Exfiltration [0.0]
Stealth Data Exfiltration is a significant cyber threat characterized by covert infiltration, extended undetectability, and unauthorized dissemination of confidential data.
Our findings reveal that conventional defense-in-depth strategies often fall short in combating these sophisticated threats.
As we navigate this complex landscape, it is crucial to anticipate potential threats and continually update our defenses.
arXiv Detail & Related papers (2024-05-17T16:14:45Z)
- A Zero Trust Framework for Realization and Defense Against Generative AI Attacks in Power Grid [62.91192307098067]
This paper proposes a novel zero trust framework for a power grid supply chain (PGSC).
It facilitates early detection of potential GenAI-driven attack vectors, assessment of tail risk-based stability measures, and mitigation of such threats.
Experimental results show that the proposed zero trust framework achieves an accuracy of 95.7% on attack vector generation, a risk measure of 9.61% for a 95% stable PGSC, and 99% confidence in defense against GenAI-driven attacks.
arXiv Detail & Related papers (2024-03-11T02:47:21Z)
- The New Frontier of Cybersecurity: Emerging Threats and Innovations [0.0]
The research delves into the consequences of these threats on individuals, organizations, and society at large.
The sophistication and diversity of these emerging threats necessitate a multi-layered approach to cybersecurity.
This study emphasizes the importance of implementing effective measures to mitigate these threats.
arXiv Detail & Related papers (2023-11-05T12:08:20Z)
- White paper on cybersecurity in the healthcare sector. The HEIR solution [1.3717071154980571]
Patient data, including medical records and financial information, are at risk, potentially leading to identity theft and patient safety concerns.
The HEIR project offers a comprehensive cybersecurity approach, promoting security features from various regulatory frameworks.
These measures aim to enhance digital health security and protect sensitive patient data while facilitating secure data access and privacy-aware techniques.
arXiv Detail & Related papers (2023-10-16T07:27:57Z)
- On the Security Risks of Knowledge Graph Reasoning [71.64027889145261]
We systematize the security threats to KGR according to the adversary's objectives, knowledge, and attack vectors.
We present ROAR, a new class of attacks that instantiate a variety of such threats.
We explore potential countermeasures against ROAR, including filtering of potentially poisoning knowledge and training with adversarially augmented queries.
arXiv Detail & Related papers (2023-05-03T18:47:42Z)
- Digital Ariadne: Citizen Empowerment for Epidemic Control [55.41644538483948]
The COVID-19 crisis represents the most dangerous threat to public health since the H1N1 pandemic of 1918.
Technology-assisted location and contact tracing, if broadly adopted, may help limit the spread of infectious diseases.
We present a tool, called 'diAry' or 'digital Ariadne', based on voluntary location and Bluetooth tracking on personal devices.
arXiv Detail & Related papers (2020-04-16T15:53:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.