Taxonomy of AISecOps Threat Modeling for Cloud Based Medical Chatbots
- URL: http://arxiv.org/abs/2305.11189v1
- Date: Thu, 18 May 2023 02:30:24 GMT
- Title: Taxonomy of AISecOps Threat Modeling for Cloud Based Medical Chatbots
- Authors: Ruby Annette J, Aisha Banu, Sharon Priya S, Subash Chandran
- Abstract summary: This work applies the STRIDE threat modeling framework to model the possible threats in each component of a medical chatbot.
The threat modeling framework is tailored to medical chatbots, which involve sensitive data sharing.
It could also be applied to chatbots in other sectors, such as financial services and government, that are concerned with security and compliance.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) is playing a vital role in all aspects of
technology, including cyber security. Applications of conversational AI such as
chatbots are also becoming popular in the medical field, providing timely and
immediate medical assistance to patients in need. Because medical chatbots deal
with a lot of sensitive information, the security of these chatbots is crucial.
To secure the confidentiality, integrity, and availability of cloud-hosted
assets like these, medical chatbots can be monitored using AISecOps (Artificial
Intelligence for Secure IT Operations). AISecOps is an emerging field that
integrates three different but interrelated domains, namely IT operations, AI,
and security, into one discipline, in which expertise from all three domains
is used cohesively to secure cyber assets. It considers cloud operations
and security in a holistic framework to collect the metrics required to assess
security threats and to train AI models to take immediate action. This
work focuses on applying the STRIDE threat modeling framework to model the
possible threats in each component of the chatbot, enabling
automatic threat detection using AISecOps techniques. The threat modeling
framework is tailored to medical chatbots, which involve sensitive data
sharing, but it could also be applied to chatbots in other sectors, such as
financial services and government, that are concerned with security and
compliance.
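The STRIDE approach described in the abstract assigns each system component a set of threat categories (Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, Elevation of Privilege). A minimal sketch of such a component-to-threat mapping is shown below; the component names and threat assignments are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a STRIDE threat model for a cloud-based
# medical chatbot. Component names and category assignments are
# illustrative assumptions only.

STRIDE = {
    "S": "Spoofing",
    "T": "Tampering",
    "R": "Repudiation",
    "I": "Information Disclosure",
    "D": "Denial of Service",
    "E": "Elevation of Privilege",
}

# Map each (assumed) chatbot component to the STRIDE categories
# that a threat-modeling exercise might flag for it.
threat_model = {
    "patient_client":        ["S", "R"],
    "chat_api_gateway":      ["S", "T", "D"],
    "nlp_engine":            ["T", "I"],
    "medical_records_store": ["I", "T", "E"],
}

def threats_for(component):
    """Expand a component's STRIDE letter codes into full threat names."""
    return [STRIDE[code] for code in threat_model[component]]

for component in threat_model:
    print(component, "->", ", ".join(threats_for(component)))
```

A table like this is the typical output of a STRIDE pass: each entry then drives the choice of monitoring metrics and automated responses that an AISecOps pipeline would collect and act on.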
Related papers
- IntellBot: Retrieval Augmented LLM Chatbot for Cyber Threat Knowledge Delivery [10.937956959186472]
IntellBot is an advanced cyber security chatbot built on top of cutting-edge technologies such as Large Language Models and LangChain.
It gathers information from diverse data sources to create a comprehensive knowledge base covering known vulnerabilities, recent cyber attacks, and emerging threats.
It delivers tailored responses, serving as a primary hub for cyber security insights.
arXiv Detail & Related papers (2024-11-08T09:40:53Z) - Countering Autonomous Cyber Threats [40.00865970939829]
Foundation Models present dual-use concerns broadly and within the cyber domain specifically.
Recent research has shown the potential for these advanced models to inform or independently execute offensive cyberspace operations.
This work evaluates several state-of-the-art FMs on their ability to compromise machines in an isolated network and investigates defensive mechanisms to defeat such AI-powered attacks.
arXiv Detail & Related papers (2024-10-23T22:46:44Z) - HAICOSYSTEM: An Ecosystem for Sandboxing Safety Risks in Human-AI Interactions [76.42274173122328]
We present HAICOSYSTEM, a framework examining AI agent safety within diverse and complex social interactions.
We run 1,840 simulations based on 92 scenarios across seven domains (e.g., healthcare, finance, education).
Our experiments show that state-of-the-art LLMs, both proprietary and open-source, exhibit safety risks in over 50% of cases.
arXiv Detail & Related papers (2024-09-24T19:47:21Z) - SoK: Security and Privacy Risks of Medical AI [14.592921477833848]
The integration of technology and healthcare has ushered in a new era where software systems, powered by artificial intelligence and machine learning, have become essential components of medical products and services.
This paper explores the security and privacy threats posed by AI/ML applications in healthcare.
arXiv Detail & Related papers (2024-09-11T16:59:58Z) - Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks [0.0]
This paper delves into the escalating threat posed by the misuse of AI, specifically through the use of Large Language Models (LLMs).
Through a series of controlled experiments, the paper demonstrates how these models can be manipulated to bypass ethical and privacy safeguards to effectively generate cyber attacks.
We also introduce Occupy AI, a customized, finetuned LLM specifically engineered to automate and execute cyberattacks.
arXiv Detail & Related papers (2024-08-23T02:56:13Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - Cyber Sentinel: Exploring Conversational Agents in Streamlining Security Tasks with GPT-4 [0.08192907805418582]
This paper introduces Cyber Sentinel, an innovative task-oriented cybersecurity dialogue system.
It embodies the fusion of artificial intelligence, cybersecurity domain expertise, and real-time data analysis to combat the multifaceted challenges posed by cyber adversaries.
Our work is a novel approach to task-oriented dialogue systems, leveraging the power of chaining GPT-4 models combined with prompt engineering.
arXiv Detail & Related papers (2023-09-28T13:18:33Z) - Assistive Chatbots for healthcare: a succinct review [0.0]
The focus on AI-enabled technology stems from its potential to enhance the quality of human-machine interaction.
There is a lack of trust in this technology regarding patient safety and data protection.
Patients have also expressed dissatisfaction with the chatbots' Natural Language Processing skills.
arXiv Detail & Related papers (2023-08-08T10:35:25Z) - Towards Healthy AI: Large Language Models Need Therapists Too [41.86344997530743]
We define Healthy AI to be safe, trustworthy and ethical.
We present the SafeguardGPT framework that uses psychotherapy to correct for these harmful behaviors.
arXiv Detail & Related papers (2023-04-02T00:39:12Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing this data effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.