AI for Regulatory Affairs: Balancing Accuracy, Interpretability, and Computational Cost in Medical Device Classification
- URL: http://arxiv.org/abs/2505.18695v1
- Date: Sat, 24 May 2025 13:41:20 GMT
- Title: AI for Regulatory Affairs: Balancing Accuracy, Interpretability, and Computational Cost in Medical Device Classification
- Authors: Yu Han, Aaron Ceross, Jeroen H. M. Bergmann
- Abstract summary: We investigate a broad range of AI models using a regulatory dataset of medical device descriptions. We evaluate each model along three key dimensions: accuracy, interpretability, and computational cost.
- Score: 3.439579933384111
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regulatory affairs, which sits at the intersection of medicine and law, can benefit significantly from AI-enabled automation. The classification task is the initial step in which manufacturers position their products with regulatory authorities, and it plays a critical role in determining market access, regulatory scrutiny, and ultimately, patient safety. In this study, we investigate a broad range of AI models -- including traditional machine learning (ML) algorithms, deep learning architectures, and large language models -- using a regulatory dataset of medical device descriptions. We evaluate each model along three key dimensions: accuracy, interpretability, and computational cost.
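As a rough illustration of these three dimensions, the minimal sketch below scores a traditional ML baseline (TF-IDF features with logistic regression) on accuracy and training cost, and uses the linear model's coefficients as a simple interpretability proxy. The device descriptions, labels, and pipeline here are hypothetical stand-ins, not the paper's dataset or code.

```python
# Hedged sketch: not the paper's code. Evaluating a classical text classifier on
# hypothetical medical device descriptions along accuracy and computational cost,
# with linear-model coefficients as a crude interpretability proxy.
import time

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical data: free-text device descriptions and regulatory class labels.
descriptions = [
    "implantable cardiac pacemaker with telemetry",
    "single-use sterile surgical gloves",
    "software for automated retinal image analysis",
    "powered wheelchair with joystick control",
] * 50
labels = ["III", "I", "II", "II"] * 50

X_train, X_test, y_train, y_test = train_test_split(
    descriptions, labels, test_size=0.2, random_state=0, stratify=labels
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

start = time.perf_counter()
model.fit(X_train, y_train)
train_seconds = time.perf_counter() - start

accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"accuracy={accuracy:.3f}  training_time={train_seconds:.3f}s")

# Interpretability proxy: a linear model exposes per-class term weights directly.
vectorizer = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
terms = vectorizer.get_feature_names_out()
top = clf.coef_[0].argsort()[-5:][::-1]
print("Most indicative terms for class", clf.classes_[0], ":", [terms[i] for i in top])
```

Deep learning and LLM baselines would plug into the same harness, but typically trade the direct coefficient-level interpretability shown here for post-hoc explanation methods and higher compute cost.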
Related papers
- Does Machine Unlearning Truly Remove Model Knowledge? A Framework for Auditing Unlearning in LLMs [58.24692529185971]
We introduce a comprehensive auditing framework for unlearning evaluation comprising three benchmark datasets, six unlearning algorithms, and five prompt-based auditing methods. We evaluate the effectiveness and robustness of different unlearning strategies.
arXiv Detail & Related papers (2025-05-29T09:19:07Z) - Towards a perturbation-based explanation for medical AI as differentiable programs [0.0]
In medicine and healthcare, there is a particular demand for sufficient and objective explainability of the outcomes generated by AI models. This work examines the numerical availability of the Jacobian matrix of deep learning models, which measures how stably a model responds to small perturbations added to the input (a minimal sketch of this idea follows the related-papers list below). This is a first step towards a perturbation-based explanation, which will assist medical practitioners in understanding and interpreting the response of the AI model in its clinical application.
arXiv Detail & Related papers (2025-02-19T07:56:23Z) - Demystifying Large Language Models for Medicine: A Primer [50.83806796466396]
Large language models (LLMs) represent a transformative class of AI tools capable of revolutionizing various aspects of healthcare.
This tutorial aims to equip healthcare professionals with the tools necessary to effectively integrate LLMs into clinical practice.
arXiv Detail & Related papers (2024-10-24T15:41:56Z) - Aligning XAI with EU Regulations for Smart Biomedical Devices: A Methodology for Compliance Analysis [0.7373617024876725]
This study fills a crucial gap in aligning XAI applications in bioelectronics with stringent provisions of EU regulations.
It provides a practical framework for developers and researchers, ensuring their AI innovations adhere to legal and ethical standards.
arXiv Detail & Related papers (2024-08-27T14:59:27Z) - Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is a push to employ interpretable algorithms that assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z) - The Foundations of Computational Management: A Systematic Approach to Task Automation for the Integration of Artificial Intelligence into Existing Workflows [55.2480439325792]
This article introduces Computational Management, a systematic approach to task automation.
The article offers three easy step-by-step procedures to begin the process of implementing AI within a workflow.
arXiv Detail & Related papers (2024-02-07T01:45:14Z) - PMC-LLaMA: Towards Building Open-source Language Models for Medicine [62.39105735933138]
Large Language Models (LLMs) have showcased remarkable capabilities in natural language understanding.
LLMs struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge.
We describe the procedure for building a powerful, open-source language model specifically designed for medical applications, termed PMC-LLaMA.
arXiv Detail & Related papers (2023-04-27T18:29:05Z) - Validation of artificial intelligence containing products across the regulated healthcare industries [0.0]
The introduction of artificial intelligence / machine learning (AI/ML) products into regulated fields poses new regulatory problems.
A lack of common terminology and understanding leads to confusion, delays, and product failures.
Validation as a key step in product development offers an opportune point of comparison for aligning people and processes.
arXiv Detail & Related papers (2023-02-13T14:03:36Z) - The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z) - Towards Fairness Certification in Artificial Intelligence [31.920661197618195]
We propose a first joint effort to define the operational steps needed for AI fairness certification.
We overview the criteria that an AI system should meet before entering official service, as well as the conformity assessment procedures useful for monitoring whether its decisions remain fair.
arXiv Detail & Related papers (2021-06-04T14:12:12Z) - Active learning for medical code assignment [55.99831806138029]
We demonstrate the effectiveness of Active Learning (AL) in multi-label text classification in the clinical domain.
We apply a set of well-known AL methods to help automatically assign ICD-9 codes on the MIMIC-III dataset.
Our results show that selecting informative instances provides satisfactory classification with a significantly reduced training set (a toy uncertainty-sampling loop is sketched after the related-papers list below).
arXiv Detail & Related papers (2021-04-12T18:11:17Z) - OnRAMP for Regulating AI in Medical Products [0.0]
This Perspective proposes best-practice guidelines for development that are compatible with producing a regulatory package.
These guidelines will allow all parties to communicate more clearly in the development of a common Good Machine Learning Practice.
arXiv Detail & Related papers (2020-10-09T14:02:30Z)
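Picking up the perturbation-based explanation entry above: the following minimal sketch, built around a hypothetical toy model (not code from the cited paper), computes the Jacobian of a model's outputs with respect to an input and compares its spectral norm with the observed response to a small perturbation, using PyTorch.

```python
# Minimal sketch (illustrative assumptions, not the cited paper's code):
# estimating how stably a model responds to small input perturbations
# via the Jacobian of its outputs with respect to the input.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A stand-in classifier over 16 input features and 3 output classes.
model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 3))
model.eval()

x = torch.randn(16)  # one input sample

# Jacobian of the class scores with respect to the input: shape (3, 16).
jac = torch.autograd.functional.jacobian(lambda inp: model(inp), x)

# A simple stability proxy: the spectral norm bounds how much the output
# can change per unit of input perturbation (locally).
spectral_norm = torch.linalg.matrix_norm(jac, ord=2)
print(f"Spectral norm of Jacobian: {spectral_norm.item():.4f}")

# Cross-check with an explicit small random perturbation.
eps = 1e-3
delta = eps * torch.randn(16)
output_shift = (model(x + delta) - model(x)).norm() / delta.norm()
print(f"Observed output change per unit perturbation: {output_shift.item():.4f}")
```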
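For the active learning entry above, here is a toy pool-based uncertainty-sampling loop under simplifying assumptions: single-label classification on made-up clinical snippets rather than the paper's multi-label ICD-9 setting on MIMIC-III, which requires credentialed access.

```python
# Toy sketch (not the cited paper's code): pool-based active learning with
# uncertainty sampling, simplified to single-label classification on made-up data.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical clinical notes and their (single) diagnosis codes.
notes = [
    "chest pain and shortness of breath",
    "fracture of the left femur",
    "type 2 diabetes follow-up visit",
    "acute myocardial infarction",
] * 100
codes = np.array(["786", "821", "250", "410"] * 100)

X = TfidfVectorizer().fit_transform(notes)

labelled = list(range(4))          # seed set: one example per code
pool = list(range(4, len(notes)))

for round_ in range(5):            # labelling budget: 5 rounds of 8 queries
    clf = LogisticRegression(max_iter=1000).fit(X[labelled], codes[labelled])
    # Uncertainty sampling: query the pool items the model is least confident about.
    confidence = clf.predict_proba(X[pool]).max(axis=1)
    queries = [pool[i] for i in np.argsort(confidence)[:8]]
    labelled.extend(queries)
    pool = [i for i in pool if i not in queries]
    print(f"round {round_}: labelled={len(labelled)}  "
          f"pool accuracy={clf.score(X[pool], codes[pool]):.3f}")
```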
This list is automatically generated from the titles and abstracts of the papers on this site.