The whack-a-mole governance challenge for AI-enabled synthetic biology:
literature review and emerging frameworks
- URL: http://arxiv.org/abs/2402.00312v1
- Date: Thu, 1 Feb 2024 03:53:13 GMT
- Title: The whack-a-mole governance challenge for AI-enabled synthetic biology:
literature review and emerging frameworks
- Authors: Trond Arne Undheim
- Abstract summary: AI-enabled synthetic biology has tremendous potential but also significantly increases biorisks.
How to achieve early warning systems that enable prevention and mitigation of future AI-enabled biohazards from the lab will constantly need to evolve.
Recent advances in chatbots enabled by generative AI have revived fears that advanced biological insight can more easily get into the hands of malignant individuals or organizations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI-enabled synthetic biology has tremendous potential but also significantly
increases biorisks and brings about a new set of dual-use concerns. The picture
is complicated by the vast innovations envisioned to emerge from combining
emerging technologies, as AI-enabled synthetic biology potentially scales up
bioengineering into industrial biomanufacturing. However, the literature review
indicates that goals such as maintaining a reasonable scope for innovation, or,
more ambitiously, fostering a huge bioeconomy, do not necessarily conflict with
biosafety; rather, the two need to go hand in hand. This paper presents a
literature review of the issues and describes emerging frameworks for policy
and practice that traverse the options of command-and-control, stewardship,
bottom-up, and laissez-faire governance. How to achieve early warning systems
that enable prevention and mitigation of future AI-enabled biohazards from the
lab, from deliberate misuse, or from the public realm will constantly need to
evolve, and adaptive, interactive approaches should emerge. Although biorisk is
subject to an established governance regime, and scientists generally adhere to
biosafety protocols, even experimental but legitimate use by scientists could
lead to unexpected developments. Recent advances in chatbots enabled by
generative AI have revived fears that advanced biological insight can more
easily get into the hands of malignant individuals or organizations. Given
these sets of issues, society needs to rethink how AI-enabled synthetic biology
should be governed. The suggested way to visualize the challenge at hand is
whack-a-mole governance, although the emerging solutions are perhaps not so
different from one another.
Related papers
- Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models (arXiv, 2024-05-25)
  We argue that evaluations of AI models should prioritize addressing high-consequence risks. These risks could cause large-scale harm to the public, such as pandemics. Scientists' experience with identifying and mitigating dual-use biological risks can help inform new approaches to evaluating biological AI models.
- Biospheric AI (arXiv, 2024-01-31)
  We propose a new paradigm, Biospheric AI, that assumes an ecocentric perspective. This work attempts to take first steps towards a comprehensive program of research that focuses on the interactions between AI and the biosphere.
- Towards Risk Analysis of the Impact of AI on the Deliberate Biological Threat Landscape (arXiv, 2024-01-23)
  The perception that the convergence of biological engineering and artificial intelligence could enable increased biorisk has drawn attention to the governance of biotechnology and artificial intelligence. The 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence requires an assessment of how artificial intelligence can increase biorisk. The perspective concludes by noting that assessment and evaluation methodologies must keep pace with advances of AI in the life sciences.
- Control Risk for Potential Misuse of Artificial Intelligence in Science (arXiv, 2023-12-11)
  We aim to raise awareness of the dangers of AI misuse in science. We highlight real-world examples of misuse in chemical science and propose a system called SciGuard to control misuse risks for AI models in science.
- Managing extreme AI risks amid rapid progress (arXiv, 2023-10-26)
  We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems. There is a lack of consensus about how exactly such risks arise and how to manage them. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward (arXiv, 2023-08-28)
  This paper reviews emerging issues with opaque and uncontrollable AI systems and proposes an integrative framework called violet teaming to develop reliable and responsible AI. The approach emerged from AI safety research as a way to manage risks proactively by design.
- The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence (arXiv, 2023-07-09)
  Recent advances in machine learning and AI are disrupting technological innovation, product development, and society as a whole. AI has contributed less to fundamental science, in part because large, high-quality data sets for scientific practice and model discovery are more difficult to access. Here we explore and investigate aspects of an AI-driven, automated, closed-loop approach to scientific discovery.
- Developing an NLP-based Recommender System for the Ethical, Legal, and Social Implications of Synthetic Biology (arXiv, 2022-07-10)
  Synthetic biology involves the engineering and re-design of organisms for purposes such as food security, health, and environmental protection. It poses numerous ethical, legal, and social implications (ELSI) for researchers and policy makers. Various efforts have sought to embed social scientists and ethicists in synthetic biology projects. This text proposes a different approach, asking whether it is possible to develop a well-performing recommender model based on natural language processing (NLP) to connect synthetic biologists with information on the ELSI of their specific research.
- Seeing biodiversity: perspectives in machine learning for wildlife conservation (arXiv, 2021-10-25)
  We argue that machine learning can meet this analytic challenge to enhance our understanding, monitoring capacity, and conservation of wildlife species. In essence, by combining new machine learning approaches with ecological domain knowledge, animal ecologists can capitalize on the abundance of data generated by modern sensor technologies.
- The Short Anthropological Guide to the Study of Ethical AI (arXiv, 2020-10-07)
  This short guide serves as both an introduction to AI ethics and an overview of social science and anthropological perspectives on the development of AI. It aims to provide those unfamiliar with the field with insight into the societal impact of AI systems and how, in turn, these systems can lead us to rethink how our world operates.
This list is automatically generated from the titles and abstracts of the papers on this site.