Opening the Scope of Openness in AI
- URL: http://arxiv.org/abs/2505.06464v1
- Date: Fri, 09 May 2025 23:16:44 GMT
- Title: Opening the Scope of Openness in AI
- Authors: Tamara Paris, AJung Moon, Jin Guo
- Abstract summary: The concept of openness in AI has so far been heavily inspired by the definition and community practice of open source software. We argue that considering the fundamental scope of openness in different disciplines will broaden discussions. Our work contributes to the recent efforts in framing openness in AI by reflecting principles and practices of openness beyond open source software.
- Score: 1.2894076331861155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The concept of openness in AI has so far been heavily inspired by the definition and community practice of open source software. This positions openness in AI as having positive connotations; it introduces assumptions of certain advantages, such as collaborative innovation and transparency. However, the practices and benefits of open source software are not fully transferable to AI, which has its own challenges. Framing a notion of openness tailored to AI is crucial to addressing its growing societal implications, risks, and capabilities. We argue that considering the fundamental scope of openness in different disciplines will broaden discussions, introduce important perspectives, and reflect on what openness in AI should mean. Toward this goal, we qualitatively analyze 98 concepts of openness discovered from topic modeling, through which we develop a taxonomy of openness. Using this taxonomy as an instrument, we situate the current discussion on AI openness, identify gaps and highlight links with other disciplines. Our work contributes to the recent efforts in framing openness in AI by reflecting principles and practices of openness beyond open source software and calls for a more holistic view of openness in terms of actions, system properties, and ethical objectives.
Related papers
- A Community-driven vision for a new Knowledge Resource for AI [59.29703403953085]
Despite the success of knowledge resources like WordNet, verifiable, general-purpose, widely available sources of knowledge remain a critical deficiency in AI infrastructure. This paper synthesizes our findings and outlines a community-driven vision for a new knowledge infrastructure.
arXiv Detail & Related papers (2025-06-19T20:51:28Z)
- Toward a Public and Secure Generative AI: A Comparative Analysis of Open and Closed LLMs [0.0]
This study aims to critically evaluate and compare the characteristics, opportunities, and challenges of open and closed generative AI models. The proposed framework outlines openness, public governance, and security as key dimensions and essential pillars for shaping the future of trustworthy and inclusive Gen AI.
arXiv Detail & Related papers (2025-05-15T15:21:09Z)
- Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models [2.6900047294457683]
Despite increasing discussions on open-source Artificial Intelligence (AI), existing research lacks a discussion of the transparency and accessibility of state-of-the-art (SoTA) Large Language Models (LLMs). This study critically analyzes SoTA LLMs from the last five years, including ChatGPT, DeepSeek, LLaMA, and others, to assess their adherence to transparency standards and the implications of partial openness. Our findings reveal that while some models are labeled as open-source, this does not necessarily mean they are fully open-sourced.
arXiv Detail & Related papers (2025-02-21T23:53:13Z)
- Safety is Essential for Responsible Open-Ended Systems [47.172735322186]
Open-Endedness is the ability of AI systems to continuously and autonomously generate novel and diverse artifacts or solutions. This position paper argues that the inherently dynamic and self-propagating nature of Open-Ended AI introduces significant, underexplored risks.
arXiv Detail & Related papers (2025-02-06T21:32:07Z)
- Open Problems in Mechanistic Interpretability [61.44773053835185]
Mechanistic interpretability aims to understand the computational mechanisms underlying neural networks' capabilities. Despite recent progress toward these goals, there are many open problems in the field that require solutions.
arXiv Detail & Related papers (2025-01-27T20:57:18Z)
- Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence [18.130525337375985]
This paper presents a framework for grappling with openness across the AI stack. It summarizes previous work on this topic and analyzes the various potential reasons to pursue openness. It outlines how openness varies in different parts of the AI stack, both at the model and at the system level.
arXiv Detail & Related papers (2024-05-17T20:35:39Z)
- Risks and Opportunities of Open-Source Generative AI [64.86989162783648]
Applications of Generative AI (Gen AI) are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about the potential risks of the technology and has resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source generative AI.
arXiv Detail & Related papers (2024-05-14T13:37:36Z) - Near to Mid-term Risks and Opportunities of Open-Source Generative AI [94.06233419171016]
Applications of Generative AI are expected to revolutionize a number of different areas, ranging from science & medicine to education.
The potential for these seismic changes has triggered a lively debate about potential risks and resulted in calls for tighter regulation.
This regulation is likely to put at risk the budding field of open-source Generative AI.
arXiv Detail & Related papers (2024-04-25T21:14:24Z) - The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence [0.0]
We introduce the Model Openness Framework (MOF), a three-tiered ranked classification system that rates machine learning models based on their completeness and openness.
For each MOF class, we specify code, data, and documentation components of the model development lifecycle that must be released and under which open licenses.
In addition, the Model Openness Tool (MOT) provides a user-friendly reference implementation to evaluate the openness and completeness of models against the MOF classification system.
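The tiered evaluation the MOF abstract describes can be illustrated with a minimal sketch. This is not the official MOT implementation; the component names and tier contents below are hypothetical assumptions chosen only to show the shape of a cumulative, ranked classification.

```python
# Minimal sketch (not the official Model Openness Tool): rating a model by
# checking its released components against cumulative, tiered requirements.
# The component names and tier contents here are illustrative assumptions,
# not the actual MOF specification.

REQUIRED = {
    1: {"model_weights", "inference_code", "model_card"},   # lowest tier
    2: {"training_code", "evaluation_code", "data_card"},   # adds tooling
    3: {"training_data", "research_paper"},                 # adds science artifacts
}

def mof_tier(released: set[str]) -> int:
    """Return the highest tier whose cumulative requirements are all met (0 if none)."""
    cumulative: set[str] = set()
    tier = 0
    for level in sorted(REQUIRED):
        cumulative |= REQUIRED[level]
        if cumulative <= released:   # every required component so far is released
            tier = level
        else:
            break
    return tier

# A model releasing weights, inference code, and a model card, plus training
# code but no evaluation code, satisfies tier 1 but only part of tier 2.
released = {"model_weights", "inference_code", "model_card", "training_code"}
print(mof_tier(released))  # → 1
```

The key design choice is that tiers are cumulative: a higher rating requires everything demanded by the tiers below it, mirroring the "ranked classification" the framework describes.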
arXiv Detail & Related papers (2024-03-20T17:47:08Z)
- A Survey on Neural Open Information Extraction: Current Status and Future Directions [87.30702606041407]
Open Information Extraction (OpenIE) facilitates domain-independent discovery of relational facts from large corpora.
We provide an overview of state-of-the-art neural OpenIE models, their key design decisions, strengths, and weaknesses.
arXiv Detail & Related papers (2022-05-24T02:24:55Z)
- Open Questions in Creating Safe Open-ended AI: Tensions Between Control and Creativity [15.60659580411643]
Open-ended evolution and artificial life have much to contribute toward the understanding of open-ended AI. This paper argues that open-ended AI has its own safety challenges, in particular whether the creativity of open-ended systems can be productively and predictably controlled.
arXiv Detail & Related papers (2020-06-12T22:28:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.