Towards regulatory compliant lifecycle for AI-based medical devices in EU: Industry perspectives
- URL: http://arxiv.org/abs/2409.08006v1
- Date: Thu, 12 Sep 2024 12:51:38 GMT
- Title: Towards regulatory compliant lifecycle for AI-based medical devices in EU: Industry perspectives
- Authors: Tuomas Granlund, Vlad Stirbu, Tommi Mikkonen
- Abstract summary: The European regulatory framework for medical device software development falls short of addressing AI-specific considerations.
This article proposes a model to bridge the gap by extending the general idea of the AI lifecycle with regulatory activities relevant to AI-enabled medical systems.
- Score: 2.4742581572364126
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the immense potential of AI-powered medical devices to revolutionize healthcare, concerns regarding their safety in life-critical applications remain. While the European regulatory framework provides a comprehensive approach to medical device software development, it falls short in addressing AI-specific considerations. This article proposes a model to bridge this gap by extending the general idea of the AI lifecycle with regulatory activities relevant to AI-enabled medical systems.
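To make the proposal concrete, the sketch below models lifecycle stages with attached regulatory activities as gate checks. This is a minimal illustration under assumed stage names and MDR-style activities; `Stage`, `RegulatoryActivity`, and `gate_check` are hypothetical names, not the paper's actual model.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Stage(Enum):
    # Generic AI lifecycle stages; names are illustrative, not the paper's.
    DATA_COLLECTION = auto()
    MODEL_DEVELOPMENT = auto()
    VERIFICATION_AND_VALIDATION = auto()
    DEPLOYMENT = auto()
    POST_MARKET_MONITORING = auto()


@dataclass
class RegulatoryActivity:
    name: str
    evidence: str  # artifact expected in the technical documentation


# Hypothetical mapping of lifecycle stages to MDR-style regulatory activities.
LIFECYCLE_GATES: dict[Stage, list[RegulatoryActivity]] = {
    Stage.DATA_COLLECTION: [
        RegulatoryActivity("Data governance review", "data management plan"),
    ],
    Stage.MODEL_DEVELOPMENT: [
        RegulatoryActivity("Risk analysis (ISO 14971)", "risk management file"),
    ],
    Stage.VERIFICATION_AND_VALIDATION: [
        RegulatoryActivity("Clinical evaluation", "clinical evaluation report"),
    ],
    Stage.DEPLOYMENT: [
        RegulatoryActivity("Conformity assessment", "EU declaration of conformity"),
    ],
    Stage.POST_MARKET_MONITORING: [
        RegulatoryActivity("Post-market surveillance", "PMS report"),
    ],
}


def gate_check(stage: Stage, completed: set[str]) -> list[str]:
    """Return the regulatory activities still missing before leaving a stage."""
    return [a.name for a in LIFECYCLE_GATES[stage] if a.name not in completed]


print(gate_check(Stage.MODEL_DEVELOPMENT, completed=set()))
# -> ['Risk analysis (ISO 14971)']
```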
Related papers
- Transforming Medical Regulations into Numbers: Vectorizing a Decade of Medical Device Regulatory Shifts in the USA, EU, and China [3.8657431480664717]
Navigating the regulatory frameworks that ensure the safety and efficacy of medical devices can be challenging.
These frameworks often require redundant testing, slowing down the process of getting innovations to patients.
arXiv Detail & Related papers (2024-11-01T13:25:14Z)
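As a toy illustration of the vectorization idea in the entry above, the snippet below embeds two regulation snippets with TF-IDF and compares them by cosine similarity. The text snippets are invented for the example, and the paper may use a different embedding method entirely.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets standing in for two revisions of a regulation.
mdr_old = "Software intended to be used for diagnostic purposes is a medical device."
mdr_new = "Software, including AI-based software, intended for diagnosis is a medical device."

# Vectorize both texts so that regulatory drift becomes measurable.
vectorizer = TfidfVectorizer(stop_words="english")
vectors = vectorizer.fit_transform([mdr_old, mdr_new])

# Cosine similarity close to 1.0 means little textual drift between revisions.
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"similarity between revisions: {similarity:.2f}")
```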
- Beyond One-Time Validation: A Framework for Adaptive Validation of Prognostic and Diagnostic AI-based Medical Devices [55.319842359034546]
Existing approaches often fall short in addressing the complexity of practically deploying these devices.
The presented framework emphasizes the importance of repeating validation and fine-tuning during deployment.
It is positioned within the current US and EU regulatory landscapes.
arXiv Detail & Related papers (2024-09-07T11:13:52Z)
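The adaptive-validation idea above can be sketched as a post-deployment revalidation check: re-score the deployed model on fresh labelled data and flag degradation. The baseline AUC, tolerance, and `revalidate` helper are illustrative assumptions, not the framework's actual procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

BASELINE_AUC = 0.90  # performance established during pre-market validation (assumed)
TOLERANCE = 0.05     # acceptable degradation before triggering review (assumed)


def revalidate(y_true: np.ndarray, y_score: np.ndarray) -> bool:
    """Return True if the deployed model still meets its validated performance."""
    current_auc = roc_auc_score(y_true, y_score)
    if current_auc < BASELINE_AUC - TOLERANCE:
        print(f"AUC dropped to {current_auc:.3f}; trigger fine-tuning/review")
        return False
    print(f"AUC {current_auc:.3f} within tolerance")
    return True


# Example run on synthetic post-deployment data.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels * 0.6 + rng.normal(0.2, 0.3, size=200), 0, 1)
revalidate(labels, scores)
```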
- How Could Generative AI Support Compliance with the EU AI Act? A Review for Safe Automated Driving Perception [4.075971633195745]
Deep Neural Networks (DNNs) have become central to the perception functions of autonomous vehicles.
The European Union (EU) Artificial Intelligence (AI) Act aims to address the safety challenges posed by such systems by establishing stringent norms and standards for AI systems.
This review paper summarizes the requirements arising from the EU AI Act regarding DNN-based perception systems and systematically categorizes existing generative AI applications in automated driving (AD).
arXiv Detail & Related papers (2024-08-30T12:01:06Z) - Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis [0.7373617024876725]
This study fills a crucial gap in aligning XAI applications in bioelectronics with stringent provisions of EU regulations.
It provides a practical framework for developers and researchers, ensuring their AI innovations adhere to legal and ethical standards.
arXiv Detail & Related papers (2024-08-27T14:59:27Z) - The risks of risk-based AI regulation: taking liability seriously [46.90451304069951]
The development and regulation of AI seem to have reached a critical stage.
Some experts are calling for a moratorium on the training of AI systems more powerful than GPT-4.
This paper analyses the most advanced legal proposal, the European Union's AI Act.
arXiv Detail & Related papers (2023-11-03T12:51:37Z) - FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z) - A Revolution of Personalized Healthcare: Enabling Human Digital Twin
with Mobile AIGC [54.74071593520785]
Mobile AIGC can be a key enabling technology for an emerging application called the human digital twin (HDT).
HDT empowered by mobile AIGC is expected to revolutionize personalized healthcare by generating rare-disease data, modeling high-fidelity digital twins, building versatile testbeds, and providing 24/7 customized medical services.
arXiv Detail & Related papers (2023-07-22T15:59:03Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI)
for helping benefit-risk assessment practices: Towards a comprehensive
qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts of intelligence from different disciplines.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Towards a framework for evaluating the safety, acceptability and
efficacy of AI systems for health: an initial synthesis [0.2936007114555107]
We aim to set out a minimally viable framework for evaluating the safety, acceptability and efficacy of AI systems for healthcare.
We do this by conducting a systematic search across Scopus, PubMed and Google Scholar.
The result is a framework to guide AI system developers, policymakers, and regulators through a sufficient evaluation of an AI system designed for use in healthcare.
arXiv Detail & Related papers (2021-04-14T15:00:39Z) - OnRAMP for Regulating AI in Medical Products [0.0]
This Perspective proposes best practice guidelines for development compatible with the production of a regulatory package.
These guidelines will allow all parties to communicate more clearly in the development of a common Good Machine Learning Practice.
arXiv Detail & Related papers (2020-10-09T14:02:30Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.