Systematically Assessing the Security Risks of AI/ML-enabled Connected Healthcare Systems
- URL: http://arxiv.org/abs/2401.17136v2
- Date: Fri, 12 Apr 2024 00:33:58 GMT
- Title: Systematically Assessing the Security Risks of AI/ML-enabled Connected Healthcare Systems
- Authors: Mohammed Elnawawy, Mohammadreza Hallajiyan, Gargi Mitra, Shahrear Iqbal, Karthik Pattabiraman
- Abstract summary: We show that the use of ML in medical systems carries security risks that, under adversarial interventions, can cause life-threatening harm to a patient's health.
These new risks arise due to security vulnerabilities in the peripheral devices and communication channels.
We show that state-of-the-art risk assessment techniques are not adequate for identifying and assessing these new risks.
- Score: 4.508868068781058
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The adoption of machine-learning-enabled systems in the healthcare domain is on the rise. While the use of ML in healthcare has several benefits, it also expands the threat surface of medical systems. We show that the use of ML in medical systems, particularly connected systems that involve interfacing the ML engine with multiple peripheral devices, has security risks that might cause life-threatening damage to a patient's health in case of adversarial interventions. These new risks arise due to security vulnerabilities in the peripheral devices and communication channels. We present a case study where we demonstrate an attack on an ML-enabled blood glucose monitoring system by introducing adversarial data points during inference. We show that an adversary can achieve this by exploiting a known vulnerability in the Bluetooth communication channel connecting the glucose meter with the ML-enabled app. We further show that state-of-the-art risk assessment techniques are not adequate for identifying and assessing these new risks. Our study highlights the need for novel risk analysis methods for analyzing the security of AI-enabled connected health devices.
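The attack described in the abstract can be illustrated with a minimal sketch. All names, model logic, and thresholds below are hypothetical stand-ins, not the authors' actual system: a toy glucose-trend predictor receives readings over an untrusted channel, and a man-in-the-middle on the Bluetooth link perturbs the newest reading before it reaches the ML-enabled app, flipping the app's recommendation.

```python
# Illustrative sketch (hypothetical model and thresholds): an adversary on a
# compromised Bluetooth channel tampers with glucose readings at inference
# time, changing the app's dosing recommendation.

def predict_next_glucose(readings):
    """Toy 'model': extrapolate the most recent slope one step forward."""
    slope = readings[-1] - readings[-2]
    return readings[-1] + slope

def recommend(readings, hypo_threshold=70, hyper_threshold=180):
    """Map the predicted glucose level (mg/dL) to an action."""
    pred = predict_next_glucose(readings)
    if pred < hypo_threshold:
        return "suspend insulin"
    if pred > hyper_threshold:
        return "bolus insulin"
    return "no action"

def mitm_inject(readings, offset=60):
    """Adversary on the untrusted channel shifts the newest reading."""
    tampered = list(readings)
    tampered[-1] += offset
    return tampered

benign = [110, 118, 125]               # mg/dL, mildly rising trend
print(recommend(benign))               # -> no action
print(recommend(mitm_inject(benign)))  # -> bolus insulin (tampered trend)
```

The point of the sketch is that the ML component behaves correctly on the data it sees; the vulnerability lies in the peripheral channel feeding it, which is exactly why the paper argues that risk assessment must cover the whole connected system, not just the model.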
Related papers
- Systems-Theoretic and Data-Driven Security Analysis in ML-enabled Medical Devices [6.197430230611422]
We analyze publicly available data on device recalls and adverse events, and known vulnerabilities, to understand the threat landscape of AI/ML-enabled medical devices.
Our work aims to empower manufacturers to embed cybersecurity as a core design principle in AI/ML-enabled medical devices.
arXiv Detail & Related papers (2025-06-18T00:05:48Z)
- On the Security Risks of ML-based Malware Detection Systems: A Survey [40.831924021306506]
Malware presents a persistent threat to user privacy and data integrity.
To combat this, machine learning-based (ML-based) malware detection (MD) systems have been developed.
These systems have increasingly been attacked in recent years, undermining their effectiveness in practice.
arXiv Detail & Related papers (2025-05-16T06:15:31Z)
- An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address risks consequential enough to significantly harm humanity.
We focus on technical approaches to misuse and misalignment.
We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- Securing Automated Insulin Delivery Systems: A Review of Security Threats and Protective Strategies [12.306501785982018]
Automated insulin delivery (AID) systems have emerged as a significant technological advancement in diabetes care.
The reliance on wireless connectivity and software control has exposed AID systems to critical security risks.
Despite recent advancements, several open challenges remain in achieving secure AID systems.
arXiv Detail & Related papers (2025-03-18T08:11:19Z)
- Open Problems in Machine Unlearning for AI Safety [61.43515658834902]
Machine unlearning -- the ability to selectively forget or suppress specific types of knowledge -- has shown promise for privacy and data removal tasks.
In this paper, we identify key limitations that prevent unlearning from serving as a comprehensive solution for AI safety.
arXiv Detail & Related papers (2025-01-09T03:59:10Z)
- HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions [76.42274173122328]
We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions.
We run 1840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education).
Our experiments show that state-of-the-art LLMs, both proprietary and open-source, exhibit safety risks in over 50% of cases.
arXiv Detail & Related papers (2024-09-24T19:47:21Z)
- SoK: Security and Privacy Risks of Medical AI [14.592921477833848]
The integration of technology and healthcare has ushered in a new era where software systems, powered by artificial intelligence and machine learning, have become essential components of medical products and services.
This paper explores the security and privacy threats posed by AI/ML applications in healthcare.
arXiv Detail & Related papers (2024-09-11T16:59:58Z)
- Vulnerability of Machine Learning Approaches Applied in IoT-based Smart Grid: A Review [51.31851488650698]
Machine learning (ML) is increasingly used in internet-of-things (IoT)-based smart grids.
Adversarial distortion injected into the power signal can greatly affect a system's normal control and operation.
It is imperative to conduct vulnerability assessments of ML-based smart grid applications (MLsgAPPs) deployed in safety-critical power systems.
arXiv Detail & Related papers (2023-08-30T03:29:26Z)
- Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process [51.42800587382228]
Safety assurance cases (SACs) can be challenging to maintain during system evolution.
We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models.
We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety.
arXiv Detail & Related papers (2023-07-14T16:03:27Z)
- Harnessing the Speed and Accuracy of Machine Learning to Advance Cybersecurity [0.0]
Traditional signature-based methods of malware detection have limitations in detecting complex threats.
In recent years, machine learning has emerged as a promising solution to detect malware effectively.
ML algorithms are capable of analyzing large datasets and identifying patterns that are difficult for humans to identify.
arXiv Detail & Related papers (2023-02-24T02:42:38Z)
- Foveate, Attribute, and Rationalize: Towards Physically Safe and Trustworthy AI [76.28956947107372]
Covertly unsafe text is an area of particular interest, as such text may arise from everyday scenarios and is challenging to detect as harmful.
We propose FARM, a novel framework leveraging external knowledge for trustworthy rationale generation in the context of safety.
Our experiments show that FARM obtains state-of-the-art results on the SafeText dataset, improving safety classification accuracy by an absolute 5.9%.
arXiv Detail & Related papers (2022-12-19T17:51:47Z)
- Towards Developing Safety Assurance Cases for Learning-Enabled Medical Cyber-Physical Systems [3.098385261166847]
We develop a safety assurance case for Machine Learning controllers in learning-enabled MCPS.
We provide a detailed analysis by implementing a deep neural network predictor in an Artificial Pancreas System.
We check the sufficiency of the ML data and analyze the correctness of the ML-based prediction using formal verification.
arXiv Detail & Related papers (2022-11-23T22:43:48Z)
- System Safety Engineering for Social and Ethical ML Risks: A Case Study [0.5249805590164902]
Governments, industry, and academia have undertaken efforts to identify and mitigate harms in ML-driven systems.
Existing approaches are largely disjointed, ad hoc, and of unknown effectiveness.
We focus in particular on how this analysis can extend to identifying social and ethical risks and developing concrete design-level controls to mitigate them.
arXiv Detail & Related papers (2022-11-08T22:58:58Z)
- Survey of Machine Learning Based Intrusion Detection Methods for Internet of Medical Things [2.223733768286313]
The Internet of Medical Things (IoMT) is an application of the Internet of Things to healthcare.
The sensitive and private nature of the data it handles makes it a prime target for attackers.
Traditional security methods are ineffective on equipment with limited storage and computing capacity.
arXiv Detail & Related papers (2022-02-19T18:40:55Z)
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.