The Risk to Population Health Equity Posed by Automated Decision
Systems: A Narrative Review
- URL: http://arxiv.org/abs/2001.06615v2
- Date: Thu, 20 Jan 2022 05:25:43 GMT
- Title: The Risk to Population Health Equity Posed by Automated Decision
Systems: A Narrative Review
- Authors: Mitchell Burger
- Abstract summary: Automated decisions being made have significant consequences for individual and population health.
Reports of issues arising from their use in health are already appearing.
There is a significant risk that use of automated decision systems in health will exacerbate existing population health inequities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Artificial intelligence is already ubiquitous, and is increasingly being used
to autonomously make ever more consequential decisions. However, there has been
relatively little research into the existing and possible consequences for
population health equity. A narrative review was undertaken using a hermeneutic
approach to explore current and future uses of narrow AI and automated decision
systems (ADS) in medicine and public health, issues that have emerged, and
implications for equity. Accounts reveal a tremendous expectation on AI to
transform medical and public health practices. Prominent demonstrations of AI
capability - particularly in diagnostic decision making, risk prediction, and
surveillance - are stimulating rapid adoption, spurred by COVID-19. Automated
decisions being made have significant consequences for individual and
population health and wellbeing. Meanwhile, it is evident that hazards
including bias, incontestability, and privacy erosion have emerged in sensitive
domains such as criminal justice where narrow AI and ADS are in common use.
Reports of issues arising from their use in health are already appearing. As
the use of ADS in health expands, it is probable that these hazards will
manifest more widely. Bias, incontestability, and privacy erosion give rise to
mechanisms by which existing social, economic and health disparities are
perpetuated and amplified. Consequently, there is a significant risk that use
of ADS in health will exacerbate existing population health inequities. The
industrial scale and rapidity with which ADS can be applied heightens the risk
to population health equity. It is therefore incumbent on health practitioners and
policy makers to explore the potential implications of using ADS, and to ensure that
the use of artificial intelligence promotes population health and equity.
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291] (2024-11-11)
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
- Artificial Intelligence for Public Health Surveillance in Africa: Applications and Opportunities [0.0] (2024-08-05)
This paper investigates the applications of AI in public health surveillance across the continent.
Our paper highlights AI's potential to enhance disease monitoring and health outcomes.
Key barriers to the widespread adoption of AI in African public health systems have been identified.
- Control Risk for Potential Misuse of Artificial Intelligence in Science [85.91232985405554] (2023-12-11)
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
- Managing extreme AI risks amid rapid progress [171.05448842016125] (2023-10-26)
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
- From Military to Healthcare: Adopting and Expanding Ethical Principles for Generative Artificial Intelligence [10.577932700903112] (2023-08-04)
Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise.
We propose GREAT PLEA ethical principles, encompassing governance, reliability, equity, accountability, traceability, privacy, lawfulness, empathy, and autonomy, for generative AI in healthcare.
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161] (2023-04-16)
We take a closer look at AI fairness and analyze how a lack of fairness can deepen biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If these issues persist, they could be reinforced by interactions with other risks and have severe implications for society in the form of social unrest.
- Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy [1.6114012813668934] (2022-08-29)
This work examines the intersection of algorithmic bias and consumer mobile health technologies (mHealth).
We explore to what extent current mechanisms - legal, technical, and/or normative - help mitigate potential risks associated with unwanted bias.
We provide additional guidance on the roles and responsibilities technologists and policymakers have in ensuring that such systems empower patients equitably.
- Fairness via AI: Bias Reduction in Medical Information [3.254836540242099] (2021-09-06)
We propose a novel framework of Fairness via AI, inspired by insights from medical education, sociology and antiracism.
We propose using AI to study, detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society.
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067] (2020-11-26)
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
- The Threats of Artificial Intelligence Scale (TAI). Development, Measurement and Test Over Three Application Domains [0.0] (2020-06-12)
Opinion polls frequently query the public's fear of autonomous robots and artificial intelligence (FARAI).
We propose a fine-grained scale to measure threat perceptions of AI that accounts for four functional classes of AI systems and is applicable to various domains of AI applications.
The data support the dimensional structure of the proposed Threats of AI (TAI) scale, as well as the internal consistency and factorial validity of the indicators.
- COVI White Paper [67.04578448931741] (2020-05-18)
Contact tracing is an essential tool to change the course of the COVID-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of COVI, a COVID-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.