Towards Responsible Governance of Biological Design Tools
- URL: http://arxiv.org/abs/2311.15936v3
- Date: Thu, 30 Nov 2023 11:54:38 GMT
- Title: Towards Responsible Governance of Biological Design Tools
- Authors: Richard Moulange, Max Langenkamp, Tessa Alexanian, Samuel Curtis,
Morgan Livingston
- Abstract summary: Recent advancements in generative machine learning have enabled rapid progress in biological design tools (BDTs).
The unprecedented predictive accuracy and novel design capabilities of BDTs present new and significant dual-use risks.
Similar to other dual-use AI systems, BDTs present a wicked problem: how can regulators uphold public safety without stifling innovation?
We propose a range of measures to mitigate the risk that BDTs are misused, across the areas of responsible development, risk assessment, transparency, access management, cybersecurity, and investing in resilience.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in generative machine learning have enabled rapid
progress in biological design tools (BDTs) such as protein structure and
sequence prediction models. The unprecedented predictive accuracy and novel
design capabilities of BDTs present new and significant dual-use risks. For
example, their predictive accuracy allows biological agents, whether vaccines
or pathogens, to be developed more quickly, while the design capabilities could
be used to discover drugs or evade DNA screening techniques. Similar to other
dual-use AI systems, BDTs present a wicked problem: how can regulators uphold
public safety without stifling innovation? We highlight how current regulatory
proposals that are primarily tailored toward large language models may be less
effective for BDTs, which require fewer computational resources to train and
are often developed in an open-source manner. We propose a range of measures to
mitigate the risk that BDTs are misused, across the areas of responsible
development, risk assessment, transparency, access management, cybersecurity,
and investing in resilience. Implementing such measures will require close
coordination between developers and governments.
Related papers
- Best Practices for Biorisk Evaluations on Open-Weight Bio-Foundation Models [24.414900360499548]
Open-weight bio-foundation models could enable bad actors to develop more deadly bioweapons. Current approaches focus on filtering biohazardous data during pre-training. BioRiskEval is a framework to evaluate the robustness of procedures intended to reduce the dual-use capabilities of bio-foundation models.
arXiv Detail & Related papers (2025-10-31T17:00:20Z)
- Generative AI for Biosciences: Emerging Threats and Roadmap to Biosecurity [56.331312963880215]
Generative artificial intelligence (GenAI) in the biosciences is transforming biotechnology, medicine, and synthetic biology. This Perspective outlines the current state of GenAI in the biosciences and emerging threat vectors, ranging from jailbreak attacks and privacy risks to the dual-use challenges posed by autonomous AI agents. We advocate a multi-layered approach to GenAI safety, including rigorous data filtering, alignment with ethical principles during development, and real-time monitoring to block harmful requests.
arXiv Detail & Related papers (2025-10-13T00:24:41Z)
- Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents [58.69865074060139]
We study the case where an agent's self-evolution deviates in unintended ways, leading to undesirable or even harmful outcomes. Our empirical findings reveal that misevolution is a widespread risk, affecting agents built even on top-tier LLMs. We discuss potential mitigation strategies to inspire further research on building safer and more trustworthy self-evolving agents.
arXiv Detail & Related papers (2025-09-30T14:55:55Z)
- Resilient Biosecurity in the Era of AI-Enabled Bioweapons [0.0]
Existing biosafety measures rely on sequence alignment and protein-protein interaction (PPI) prediction to detect dangerous outputs. We evaluate the performance of three leading PPI prediction tools: AlphaFold 3, AF3Complex, and SpatialPPIv2. None of the tools successfully identify any of the four experimentally validated SARS-CoV-2 mutants with confirmed binding.
arXiv Detail & Related papers (2025-08-30T18:09:04Z)
- An Approach to Technical AGI Safety and Security [72.83728459135101]
We develop an approach to address the risk of harms consequential enough to significantly harm humanity.
We focus on technical approaches to misuse and misalignment.
We briefly outline how these ingredients could be combined to produce safety cases for AGI systems.
arXiv Detail & Related papers (2025-04-02T15:59:31Z)
- Generative Diffusion-based Contract Design for Efficient AI Twins Migration in Vehicular Embodied AI Networks [55.15079732226397]
Embodied AI is a rapidly advancing field that bridges the gap between cyberspace and physical space.
In vehicular embodied AI networks (VEANETs), embodied AI twins act as in-vehicle AI assistants to perform diverse tasks supporting autonomous driving.
arXiv Detail & Related papers (2024-10-02T02:20:42Z)
- Sustainable Diffusion-based Incentive Mechanism for Generative AI-driven Digital Twins in Industrial Cyber-Physical Systems [65.22300383287904]
Industrial Cyber-Physical Systems (ICPSs) are an integral component of modern manufacturing and industries.
By digitizing data throughout the product life cycle, Digital Twins (DTs) in ICPSs enable a shift from current industrial infrastructures to intelligent and adaptive infrastructures.
However, mechanisms that leverage sensing Industrial Internet of Things (IIoT) devices to share data for the construction of DTs are susceptible to adverse selection problems.
arXiv Detail & Related papers (2024-08-02T10:47:10Z)
- BioDiscoveryAgent: An AI Agent for Designing Genetic Perturbation Experiments [112.25067497985447]
We introduce BioDiscoveryAgent, an agent that designs new experiments, reasons about their outcomes, and efficiently navigates the hypothesis space to reach desired solutions.
BioDiscoveryAgent can uniquely design new experiments without the need to train a machine learning model.
It achieves an average of 21% improvement in predicting relevant genetic perturbations across six datasets.
arXiv Detail & Related papers (2024-05-27T19:57:17Z)
- Prioritizing High-Consequence Biological Capabilities in Evaluations of Artificial Intelligence Models [0.0]
We argue that evaluations of AI models should prioritize addressing high-consequence risks.
These risks could cause large-scale harm to the public, such as pandemics.
Scientists' experience with identifying and mitigating dual-use biological risks can help inform new approaches to evaluating biological AI models.
arXiv Detail & Related papers (2024-05-25T16:29:17Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology, and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z)
- Generative AI in Cybersecurity [0.0]
Generative Artificial Intelligence (GAI) has been pivotal in reshaping the field of data analysis, pattern recognition, and decision-making processes.
As GAI rapidly progresses, it outstrips the current pace of cybersecurity protocols and regulatory frameworks.
The study highlights the critical need for organizations to proactively identify and develop more complex defensive strategies to counter the sophisticated employment of GAI in malware creation.
arXiv Detail & Related papers (2024-05-02T19:03:11Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Artificial intelligence and biological misuse: Differentiating risks of language models and biological design tools [0.0]
This article differentiates two classes of AI tools that could pose such biosecurity risks: large language models (LLMs) and biological design tools (BDTs).
arXiv Detail & Related papers (2023-06-24T12:48:49Z)
- Safe AI for health and beyond -- Monitoring to transform a health service [51.8524501805308]
We will assess the infrastructure required to monitor the outputs of a machine learning algorithm.
We will present two scenarios with examples of monitoring and updates of models.
arXiv Detail & Related papers (2023-03-02T17:27:45Z)
- Liability regimes in the age of AI: a use-case driven analysis of the burden of proof [1.7510020208193926]
New emerging technologies powered by Artificial Intelligence (AI) have the potential to disruptively transform our societies for the better.
But there are growing concerns that certain intrinsic characteristics of these methodologies carry potential risks to both safety and fundamental rights.
This paper presents three case studies, as well as the methodology to reach them, that illustrate these difficulties.
arXiv Detail & Related papers (2022-11-03T13:55:36Z)
- AIRSENSE-TO-ACT: A Concept Paper for COVID-19 Countermeasures based on Artificial Intelligence algorithms and multi-sources Data Processing [0.0]
This paper describes a new tool to support institutions in the implementation of targeted countermeasures, based on quantitative and multi-scale elements, for the fight and prevention of emergencies, such as the current COVID-19 pandemic.
The tool is a centralized system (a web application): a single multi-user platform that relies on Artificial Intelligence (AI) algorithms to process heterogeneous data and produce an output risk level.
The model includes a specific neural network, which will first be trained to learn the correlation between selected inputs related to the case of interest: environmental variables (chemical-physical, such as meteorological), human activity
arXiv Detail & Related papers (2020-11-07T17:50:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.