Tackling COVID-19 through Responsible AI Innovation: Five Steps in the
Right Direction
- URL: http://arxiv.org/abs/2008.06755v1
- Date: Sat, 15 Aug 2020 17:26:48 GMT
- Authors: David Leslie
- Abstract summary: Innovations in data science and AI/ML have a central role to play in supporting global efforts to combat COVID-19.
To address these concerns, I offer five steps that need to be taken to encourage responsible research and innovation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Innovations in data science and AI/ML have a central role to play in
supporting global efforts to combat COVID-19. The versatility of AI/ML
technologies enables scientists and technologists to address an impressively
broad range of biomedical, epidemiological, and socioeconomic challenges. This
wide-reaching scientific capacity, however, also raises a diverse array of
ethical challenges. The need for researchers to act quickly and globally in
tackling SARS-CoV-2 demands unprecedented practices of open research and
responsible data sharing at a time when innovation ecosystems are hobbled by
proprietary protectionism, inequality, and a lack of public trust. Moreover,
societally impactful interventions like digital contact tracing are raising
fears of surveillance creep and are challenging widely held commitments to
privacy, autonomy, and civil liberties. Prepandemic concerns that data-driven
innovations may function to reinforce entrenched dynamics of societal inequity
have likewise intensified given the disparate impact of the virus on vulnerable
social groups and the life-and-death consequences of biased and discriminatory
public health outcomes. To address these concerns, I offer five steps that need
to be taken to encourage responsible research and innovation. These provide a
practice-based path to responsible AI/ML design and discovery centered on open,
accountable, equitable, and democratically governed processes and products.
When taken from the start, these steps will not only enhance the capacity of
innovators to tackle COVID-19 responsibly, they will, more broadly, help to
better equip the data science and AI/ML community to cope with future pandemics
and to support a more humane, rational, and just society.
Related papers
- Persuasion with Large Language Models: a Survey
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM-based systems have already achieved human-level or even superhuman persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- AI in Action: Accelerating Progress Towards the Sustainable Development Goals
We draw on Google's internal and collaborative research, technical work, and social impact initiatives to show AI's potential to accelerate action on the UN's Sustainable Development Goals.
The paper highlights AI capabilities (including computer vision, generative AI, natural language processing, and multimodal AI) and showcases how AI is altering how we approach problem-solving across all 17 SDGs.
We then offer insights on AI development and deployment to drive bold and responsible innovation, enhance impact, close the accessibility gap, and ensure that everyone, everywhere, can benefit from AI.
arXiv Detail & Related papers (2024-07-02T23:25:27Z)
- Control Risk for Potential Misuse of Artificial Intelligence in Science
We aim to raise awareness of the dangers of AI misuse in science.
We highlight real-world examples of misuse in chemical science.
We propose a system called SciGuard to control misuse risks for AI models in science.
arXiv Detail & Related papers (2023-12-11T18:50:57Z)
- Managing extreme AI risks amid rapid progress
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence
Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole.
AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access.
Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
arXiv Detail & Related papers (2023-07-09T21:16:56Z)
- Proceedings of KDD 2020 Workshop on Data-driven Humanitarian Mapping: Harnessing Human-Machine Intelligence for High-Stake Public Policy and Resilience Planning
Humanitarian challenges disproportionately impact vulnerable communities worldwide.
Despite these growing perils, there remains a notable paucity of data science research to scientifically inform equitable public policy decisions.
We propose the Data-driven Humanitarian Mapping Research Program to help fill this gap.
arXiv Detail & Related papers (2021-09-01T15:30:25Z)
- Proceedings of KDD 2021 Workshop on Data-driven Humanitarian Mapping: Harnessing Human-Machine Intelligence for High-Stake Public Policy and Resilience Planning
Humanitarian challenges disproportionately impact vulnerable communities worldwide.
Despite these growing perils, there remains a notable paucity of data science research to scientifically inform equitable public policy decisions.
We propose the Data-driven Humanitarian Mapping Research Program to help fill this gap.
arXiv Detail & Related papers (2021-08-31T22:41:14Z)
- Learnings from Frontier Development Lab and SpaceML -- AI Accelerators for NASA and ESA
Research with AI and ML technologies lives in a variety of settings with often asynchronous goals and timelines.
We perform a case study of the Frontier Development Lab (FDL), an AI accelerator under a public-private partnership from NASA and ESA.
FDL research follows principled practices that are grounded in responsible development, conduct, and dissemination of AI research.
arXiv Detail & Related papers (2020-11-09T21:23:03Z)
- A Survey on Applications of Artificial Intelligence in Fighting Against COVID-19
The COVID-19 pandemic, caused by the SARS-CoV-2 virus, has spread rapidly worldwide.
As a powerful tool against COVID-19, artificial intelligence (AI) technologies are widely used in combating this pandemic.
This survey presents medical and AI researchers with a comprehensive view of the existing and potential applications of AI technology in combating COVID-19.
arXiv Detail & Related papers (2020-07-04T22:48:15Z)
- The challenges of deploying artificial intelligence models in a rapidly evolving pandemic
We argue that both basic and applied research are essential to accelerate the potential of AI models.
This perspective may provide a glimpse into how the global scientific community should react to combat future disease outbreaks more effectively.
arXiv Detail & Related papers (2020-05-19T21:11:48Z)
- Towards a framework for understanding societal and ethical implications of Artificial Intelligence
The objective of this paper is to identify the main societal and ethical challenges implied by a massive uptake of AI.
We have surveyed the literature for the most common challenges and classified them in seven groups: 1) Non-desired effects, 2) Liability, 3) Unknown consequences, 4) Relation people-robots, 5) Concentration of power and wealth, 6) Intentional bad uses, and 7) AI for weapons and warfare.
arXiv Detail & Related papers (2020-01-03T17:55:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.