Securing Automated Insulin Delivery Systems: A Review of Security Threats and Protective Strategies
- URL: http://arxiv.org/abs/2503.14006v1
- Date: Tue, 18 Mar 2025 08:11:19 GMT
- Title: Securing Automated Insulin Delivery Systems: A Review of Security Threats and Protective Strategies
- Authors: Yuchen Niu, Siew-Kei Lam
- Abstract summary: Automated insulin delivery (AID) systems have emerged as a significant technological advancement in diabetes care. The reliance on wireless connectivity and software control has exposed AID systems to critical security risks. Despite recent advancements, several open challenges remain in achieving secure AID systems.
- Score: 12.306501785982018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated insulin delivery (AID) systems have emerged as a significant technological advancement in diabetes care. These systems integrate a continuous glucose monitor, an insulin pump, and control algorithms to automate insulin delivery, reducing the burden of self-management and offering enhanced glucose control. However, the increasing reliance on wireless connectivity and software control has exposed AID systems to critical security risks that could result in life-threatening treatment errors. This review first presents a comprehensive examination of the security landscape, covering technical vulnerabilities, legal frameworks, and commercial product considerations, followed by an analysis of existing research on attack vectors, defence mechanisms, and evaluation methods and resources for AID systems. Despite recent advancements, several open challenges remain in achieving secure AID systems, particularly in standardising security evaluation frameworks and developing comprehensive, lightweight, and adaptive defence strategies. Because AID systems are among the most widely adopted and extensively studied physiologic closed-loop control systems, this review also serves as a valuable reference for understanding security challenges and solutions applicable to analogous medical systems.
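Among the defence mechanisms the review discusses are message authentication and replay protection for wireless pump commands. A minimal illustrative sketch of that general idea (hypothetical, not any vendor's actual protocol) combines an HMAC tag with a monotonic counter so a dose command can be neither forged nor replayed:

```python
import hmac
import hashlib
import struct


def sign_command(key: bytes, counter: int, dose_milliunits: int) -> bytes:
    """Serialize a dose command with a monotonic counter and append an HMAC-SHA256 tag."""
    payload = struct.pack(">QI", counter, dose_milliunits)  # 8-byte counter + 4-byte dose
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return payload + tag


def verify_command(key: bytes, last_counter: int, message: bytes):
    """Return (counter, dose) if the tag is valid and the counter is fresh, else None."""
    payload, tag = message[:12], message[12:]
    # Constant-time comparison rejects forged or tampered messages.
    if not hmac.compare_digest(hmac.new(key, payload, hashlib.sha256).digest(), tag):
        return None
    counter, dose = struct.unpack(">QI", payload)
    # A counter at or below the last accepted value indicates a replayed message.
    if counter <= last_counter:
        return None
    return counter, dose
```

Real AID products must additionally handle key provisioning, counter persistence across resets, and constrained radio budgets, which is part of why the review highlights lightweight and adaptive defences as open challenges.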
Related papers
- strideSEA: A STRIDE-centric Security Evaluation Approach [1.996354642790599]
strideSEA integrates STRIDE as the central classification scheme into the security activities of threat modeling, attack scenario analysis, risk analysis, and countermeasure recommendation.
The application of strideSEA is demonstrated in a real-world online immunization system case study.
arXiv Detail & Related papers (2025-03-24T18:00:17Z)
- In-House Evaluation Is Not Enough: Towards Robust Third-Party Flaw Disclosure for General-Purpose AI [93.33036653316591]
We call for three interventions to advance system safety.
First, we propose using standardized AI flaw reports and rules of engagement for researchers.
Second, we propose GPAI system providers adopt broadly-scoped flaw disclosure programs.
Third, we advocate for the development of improved infrastructure to coordinate distribution of flaw reports.
arXiv Detail & Related papers (2025-03-21T05:09:46Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks. In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- Safeguarding Virtual Healthcare: A Novel Attacker-Centric Model for Data Security and Privacy [3.537571223616615]
Remote healthcare delivery has introduced significant security and privacy risks to protected health information (PHI). This study investigates the root causes of such security incidents and introduces the Attacker-Centric Approach (ACA). ACA addresses limitations in existing threat models and regulatory frameworks by adopting a holistic attacker-focused perspective.
arXiv Detail & Related papers (2024-12-18T02:21:53Z)
- Securing Legacy Communication Networks via Authenticated Cyclic Redundancy Integrity Check [98.34702864029796]
We propose Authenticated Cyclic Redundancy Integrity Check (ACRIC)
ACRIC preserves backward compatibility without requiring additional hardware and is protocol agnostic.
We show that ACRIC offers robust security with minimal transmission overhead (≤ 1 ms)
arXiv Detail & Related papers (2024-11-21T18:26:05Z)
- ADAPT: A Game-Theoretic and Neuro-Symbolic Framework for Automated Distributed Adaptive Penetration Testing [13.101825065498552]
The integration of AI into modern critical infrastructure systems, such as healthcare, has introduced new vulnerabilities.
ADAPT is a game-theoretic and neuro-symbolic framework for automated distributed adaptive penetration testing.
arXiv Detail & Related papers (2024-10-31T21:32:17Z)
- Counter Denial of Service for Next-Generation Networks within the Artificial Intelligence and Post-Quantum Era [2.156208381257605]
DoS attacks are becoming increasingly sophisticated and easily executable.
State-of-the-art systematization efforts have limitations such as isolated DoS countermeasures.
The emergence of quantum computers is a game changer for DoS from attack and defense perspectives.
arXiv Detail & Related papers (2024-08-08T18:47:31Z)
- SoK: Comprehensive Security Overview, Challenges, and Future Directions of Voice-Controlled Systems [10.86045604075024]
The integration of Voice Control Systems into smart devices accentuates the importance of their security.
Current research has uncovered numerous vulnerabilities in VCS, presenting significant risks to user privacy and security.
This study introduces a hierarchical model structure for VCS, providing a novel lens for categorizing and analyzing existing literature in a systematic manner.
We classify attacks based on their technical principles and thoroughly evaluate various attributes, such as their methods, targets, vectors, and behaviors.
arXiv Detail & Related papers (2024-05-27T12:18:46Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Enhancing SCADA Security: Developing a Host-Based Intrusion Detection System to Safeguard Against Cyberattacks [2.479074862022315]
SCADA systems are prone to cyberattacks, posing risks to critical infrastructure.
This work proposes a host-based intrusion detection system tailored for SCADA systems in smart grids.
arXiv Detail & Related papers (2024-02-22T14:47:42Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.