Organizational Governance of Emerging Technologies: AI Adoption in
Healthcare
- URL: http://arxiv.org/abs/2304.13081v2
- Date: Wed, 10 May 2023 20:27:23 GMT
- Title: Organizational Governance of Emerging Technologies: AI Adoption in
Healthcare
- Authors: Jee Young Kim, William Boag, Freya Gulamali, Alifia Hasan, Henry David
Jeffry Hogg, Mark Lifson, Deirdre Mulligan, Manesh Patel, Inioluwa Deborah
Raji, Ajai Sehgal, Keo Shaw, Danny Tobey, Alexandra Valladares, David Vidal,
Suresh Balu, Mark Sendak
- Abstract summary: The Health AI Partnership aims to better define the requirements for adequate organizational governance of AI systems in healthcare settings.
This is one of the most detailed qualitative analyses to date of the current governance structures and processes involved in AI adoption by health systems in the United States.
We hope these findings can inform future efforts to build capabilities to promote the safe, effective, and responsible adoption of emerging technologies in healthcare.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Private and public sector structures and norms refine how emerging technology
is used in practice. In healthcare, despite a proliferation of AI adoption, the
organizational governance surrounding its use and integration is often poorly
understood. Through this research, the Health AI Partnership (HAIP) aims to
better define the requirements for adequate organizational governance of AI
systems in healthcare settings and to support health system leaders in making
more informed decisions around AI adoption. To work towards this understanding,
we first identify how standards for AI adoption in healthcare can be designed
for easy and efficient use. Then, we map out the precise
decision points involved in the practical institutional adoption of AI
technology within specific health systems. Practically, we achieve this through
a multi-organizational collaboration with leaders from major health systems
across the United States and key informants from related fields. Working with
the consultancy IDEO.org, we conducted usability-testing sessions with
healthcare and AI ethics professionals. Usability analysis
revealed a prototype structured around mock key decision points that align with
how organizational leaders approach technology adoption. Concurrently, we
conducted semi-structured interviews with 89 professionals in healthcare and
other relevant fields. Using a modified grounded theory approach, we
identified 8 key decision points and comprehensive procedures throughout the
AI adoption lifecycle. This is one of the most detailed qualitative analyses to
date of the current governance structures and processes involved in AI adoption
by health systems in the United States. We hope these findings can inform
future efforts to build capabilities to promote the safe, effective, and
responsible adoption of emerging technologies in healthcare.
Related papers
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle (2024-05-28)
  This review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions.
  We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized.
  To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities.
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare (2023-08-11)
  Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
  This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
- The Design and Implementation of a National AI Platform for Public Healthcare in Italy: Implications for Semantics and Interoperability (2023-04-24)
  The Italian National Health Service is adopting Artificial Intelligence through its technical agencies.
  Such a vast programme requires special care in formalising the knowledge domain.
  Questions have been raised about the impact that AI could have on patients, practitioners, and health systems.
- Information Governance as a Socio-Technical Process in the Development of Trustworthy Healthcare AI (2023-01-04)
  Information Governance (IG) processes govern the use of personal confidential data.
  The legal basis for data sharing is explicit only for the purpose of delivering patient care.
  IG work should start early in the design life cycle and will likely continue throughout.
- Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare (2022-11-29)
  We examine the problem of trustworthy AI and explore what delivering this means in practice.
  We argue that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings.
- MedPerf: Open Benchmarking Platform for Medical Artificial Intelligence using Federated Evaluation (2021-09-29)
  We argue that unlocking this potential requires a systematic way to measure the performance of medical AI models on large-scale heterogeneous data.
  We are building MedPerf, an open framework for benchmarking machine learning in the medical domain.
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) (2021-05-07)
  This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
  The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
- Towards a framework for evaluating the safety, acceptability and efficacy of AI systems for health: an initial synthesis (2021-04-14)
  We aim to set out a minimally viable framework for evaluating the safety, acceptability and efficacy of AI systems for healthcare.
  We do this by conducting a systematic search across Scopus, PubMed and Google Scholar.
  The result is a framework to guide AI system developers, policymakers, and regulators through a sufficient evaluation of an AI system designed for use in healthcare.
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (2020-04-15)
  AI developers need to make verifiable claims to which they can be held accountable.
  This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
  We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.