Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy
- URL: http://arxiv.org/abs/2209.05440v1
- Date: Mon, 29 Aug 2022 00:15:45 GMT
- Title: Bias Impact Analysis of AI in Consumer Mobile Health Technologies: Legal, Technical, and Policy
- Authors: Kristine Gloria, Nidhi Rastogi, Stevie DeGroff
- Abstract summary: This work examines algorithmic bias in consumer mobile health technologies (mHealth).
We explore to what extent current mechanisms - legal, technical, and/or normative - help mitigate potential risks associated with unwanted bias.
We provide additional guidance on the roles and responsibilities technologists and policymakers have in ensuring that such systems empower patients equitably.
- Score: 1.6114012813668934
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Today's large-scale algorithmic and automated deployment of decision-making
systems threatens to exclude marginalized communities. The emergent danger lies
in the effectiveness and the propensity of such systems to replicate,
reinforce, or amplify existing harmful discriminatory acts.
Algorithmic bias exposes a deeply entrenched encoding of a range of unwanted
biases with profound real-world effects that manifest in domains from
employment, to housing, to healthcare. The last decade of research and examples
of these effects further underscores the need to examine any claim of a
value-neutral technology. This work examines algorithmic bias in consumer
mobile health technologies (mHealth), a term used to describe mobile technology
and associated sensors that provide healthcare solutions throughout patient
journeys. We also include mental and behavioral health as part of our study.
Furthermore,
we explore to what extent current mechanisms - legal, technical, and/or
normative - help mitigate potential risks associated with unwanted bias in
intelligent systems that make up the mHealth domain. We provide additional
guidance on the roles and responsibilities technologists and policymakers have
in ensuring that such systems empower patients equitably.
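As a concrete illustration of the kind of "unwanted bias" discussed above, the minimal sketch below (not taken from the paper; the group labels, synthetic records, and choice of metric are illustrative assumptions) audits a hypothetical mHealth triage classifier by comparing its true positive rate across demographic groups, one common equal-opportunity-style check that technical mitigation mechanisms build on.

```python
# Minimal sketch (illustrative, not the paper's method): measure disparity in
# an mHealth triage model's true positive rate (TPR) across demographic groups.
from collections import defaultdict

def true_positive_rate_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples with binary labels."""
    counts = defaultdict(lambda: {"tp": 0, "pos": 0})
    for group, y_true, y_pred in records:
        if y_true == 1:                      # count only actual positives
            counts[group]["pos"] += 1
            if y_pred == 1:                  # correctly flagged positives
                counts[group]["tp"] += 1
    return {g: c["tp"] / c["pos"] for g, c in counts.items() if c["pos"] > 0}

# Synthetic, purely illustrative predictions from a hypothetical triage model.
records = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

tpr = true_positive_rate_by_group(records)
gap = max(tpr.values()) - min(tpr.values())
print(tpr)                        # per-group true positive rates
print(f"TPR gap: {gap:.2f}")      # a large gap flags potential disparate impact
```

The legal and normative mechanisms the paper surveys operate on top of audits like this one, for example by determining what size of gap is tolerable, who must monitor it, and who is accountable for closing it.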
Related papers
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even super-human persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z) - SoK: Security and Privacy Risks of Medical AI [14.592921477833848]
The integration of technology and healthcare has ushered in a new era where software systems, powered by artificial intelligence and machine learning, have become essential components of medical products and services.
This paper explores the security and privacy threats posed by AI/ML applications in healthcare.
arXiv Detail & Related papers (2024-09-11T16:59:58Z) - AI-Driven Healthcare: A Survey on Ensuring Fairness and Mitigating Bias [2.398440840890111]
AI applications have significantly improved diagnostic accuracy, treatment personalization, and patient outcome predictions.
These advancements also introduce substantial ethical and fairness challenges: algorithmic biases can lead to disparities in healthcare delivery, affecting diagnostic accuracy and treatment outcomes across different demographic groups.
arXiv Detail & Related papers (2024-07-29T02:39:17Z) - Applications of Generative AI in Healthcare: algorithmic, ethical, legal and societal considerations [0.0]
Generative AI is rapidly transforming medical imaging and text analysis.
This paper explores issues of accuracy, informed consent, data privacy, and algorithmic limitations.
We aim to foster a roadmap for ethical and responsible implementation of generative AI in healthcare.
arXiv Detail & Related papers (2024-06-15T13:28:07Z) - Mitigating Covertly Unsafe Text within Natural Language Systems [55.26364166702625]
Uncontrolled systems may generate recommendations that lead to injury or life-threatening consequences.
In this paper, we distinguish types of text that can lead to physical harm and establish one particularly underexplored category: covertly unsafe text.
arXiv Detail & Related papers (2022-10-17T17:59:49Z) - The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Edge Intelligence for Empowering IoT-based Healthcare Systems [42.909808437026136]
This article highlights the benefits of edge intelligent technology, along with AI in smart healthcare systems.
A novel smart healthcare model is proposed to boost the utilization of AI and edge technology in smart healthcare systems.
arXiv Detail & Related papers (2021-03-22T19:35:06Z) - Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z) - COVI White Paper [67.04578448931741]
Contact tracing is an essential tool to change the course of the Covid-19 pandemic.
We present an overview of the rationale, design, ethical considerations and privacy strategy of COVI, a Covid-19 public peer-to-peer contact tracing and risk awareness mobile application developed in Canada.
arXiv Detail & Related papers (2020-05-18T07:40:49Z) - The Risk to Population Health Equity Posed by Automated Decision Systems: A Narrative Review [0.0]
Automated decisions are being made that have significant consequences for individual and population health.
Reports of issues arising from their use in health are already appearing.
There is a significant risk that use of automated decision systems in health will exacerbate existing population health inequities.
arXiv Detail & Related papers (2020-01-18T06:52:47Z)