The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence
- URL: http://arxiv.org/abs/2403.13784v3
- Date: Mon, 3 Jun 2024 16:44:31 GMT
- Authors: Matt White, Ibrahim Haddad, Cailean Osborne, Xiao-Yang Liu Yanglet, Ahmed Abdelmonsef, Sachin Varghese
- Abstract summary: We propose the Model Openness Framework (MOF), a ranked classification system that rates machine learning models based on their completeness and openness.
This framework aims to prevent misrepresentation of models claiming to be open, guide researchers and developers in providing all model components under permissive licenses, and help individuals and organizations identify models that can be safely adopted without restrictions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI (GAI) offers unprecedented opportunities for research and innovation, but its commercialization has raised concerns about transparency, reproducibility, and safety. Many open GAI models lack the necessary components for full understanding and reproducibility, and some use restrictive licenses whilst claiming to be "open-source". To address these concerns, we propose the Model Openness Framework (MOF), a ranked classification system that rates machine learning models based on their completeness and openness, following principles of open science, open source, open data, and open access. The MOF requires specific components of the model development lifecycle to be included and released under appropriate open licenses. This framework aims to prevent misrepresentation of models claiming to be open, guide researchers and developers in providing all model components under permissive licenses, and help individuals and organizations identify models that can be safely adopted without restrictions. By promoting transparency and reproducibility, the MOF combats "openwashing" practices and establishes completeness and openness as primary criteria alongside the core tenets of responsible AI. Wide adoption of the MOF will foster a more open AI ecosystem, benefiting research, innovation, and adoption of state-of-the-art models.
Related papers
- PRISM: A Design Framework for Open-Source Foundation Model Safety [0.0]
This paper addresses the question of how open foundation model developers should approach model safety.
We introduce PRISM, a design framework for open-source foundation model safety that emphasizes Private, Robust, Independent Safety measures.
PRISM aims to create a safer open-source ecosystem that maximizes the potential of these powerful technologies while minimizing the risks to individuals and society as a whole.
arXiv Detail & Related papers (2024-06-14T21:26:15Z)
- Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence [18.130525337375985]
This paper presents a framework for grappling with openness across the AI stack.
It summarizes previous work on this topic and analyzes the various potential reasons to pursue openness.
It outlines how openness varies in different parts of the AI stack, both at the model and at the system level.
arXiv Detail & Related papers (2024-05-17T20:35:39Z)
- Open-world Machine Learning: A Review and New Outlooks [83.6401132743407]
This paper aims to provide a comprehensive introduction to the emerging open-world machine learning paradigm.
It aims to help researchers build more powerful AI systems in their respective fields, and to promote the development of artificial general intelligence.
arXiv Detail & Related papers (2024-03-04T06:25:26Z)
- OLMo: Accelerating the Science of Language Models [165.16277690540363]
Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings.
As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces.
We believe it is essential for the research community to have access to powerful, truly open LMs.
We have built OLMo, a competitive, truly Open Language Model, to enable the scientific study of language models.
arXiv Detail & Related papers (2024-02-01T18:28:55Z)
- Forging Vision Foundation Models for Autonomous Driving: Challenges, Methodologies, and Opportunities [59.02391344178202]
Vision foundation models (VFMs) serve as potent building blocks for a wide range of AI applications.
The scarcity of comprehensive training data, the need for multi-sensor integration, and the diverse task-specific architectures pose significant obstacles to the development of VFMs.
This paper delves into the critical challenge of forging VFMs tailored specifically for autonomous driving, while also outlining future directions.
arXiv Detail & Related papers (2024-01-16T01:57:24Z)
- NovaCOMET: Open Commonsense Foundation Models with Symbolic Knowledge Distillation [82.85412355714898]
We present NovaCOMET, an open commonsense knowledge model, that combines the best aspects of knowledge and general task models.
Compared to previous knowledge models, NovaCOMET allows open-format relations enabling direct application to reasoning tasks.
It explicitly centers knowledge, enabling superior performance for commonsense reasoning.
arXiv Detail & Related papers (2023-12-10T19:45:24Z)
- Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives [6.575445633821399]
Recent decisions by leading AI labs either to open-source their models or to restrict access to them have sparked debate.
This paper offers an examination of the risks and benefits of open-sourcing highly capable foundation models.
arXiv Detail & Related papers (2023-09-29T17:03:45Z)
- Self-Destructing Models: Increasing the Costs of Harmful Dual Uses of Foundation Models [103.71308117592963]
We present an algorithm for training self-destructing models that leverages techniques from meta-learning and adversarial learning.
In a small-scale experiment, we show that MLAC can largely prevent a BERT-style model from being re-purposed to perform gender identification.
arXiv Detail & Related papers (2022-11-27T21:43:45Z)
- OpenFed: A Comprehensive and Versatile Open-Source Federated Learning Framework [5.893286029670115]
We propose OpenFed, an open-source software framework for end-to-end Federated Learning.
For researchers, OpenFed provides a framework wherein new methods can be easily implemented and fairly evaluated.
For downstream users, OpenFed allows Federated Learning to be used plug-and-play within different subject-matter contexts.
arXiv Detail & Related papers (2021-09-16T10:31:59Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm in which a source-trained model is used to facilitate adaptation in the absence of the source dataset in the future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.