A Formal Model of the Economic Impacts of AI Openness Regulation
- URL: http://arxiv.org/abs/2507.14193v1
- Date: Mon, 14 Jul 2025 07:08:31 GMT
- Title: A Formal Model of the Economic Impacts of AI Openness Regulation
- Authors: Tori Qiu, Benjamin Laufer, Jon Kleinberg, Hoda Heidari,
- Abstract summary: This paper models the strategic interactions among the creator of a general-purpose model (the generalist) and the entity that fine-tunes the general-purpose model to a specialized domain or task. We present a stylized model of the regulator's choice of an open-source definition to evaluate which AI openness standards will establish appropriate economic incentives for developers.
- Score: 8.438080379702125
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Regulatory frameworks, such as the EU AI Act, encourage openness of general-purpose AI models by offering legal exemptions for "open-source" models. Despite this legislative attention on openness, the definition of open-source foundation models remains ambiguous. This paper models the strategic interactions among the creator of a general-purpose model (the generalist) and the entity that fine-tunes the general-purpose model to a specialized domain or task (the specialist), in response to regulatory requirements on model openness. We present a stylized model of the regulator's choice of an open-source definition to evaluate which AI openness standards will establish appropriate economic incentives for developers. Our results characterize market equilibria -- specifically, upstream model release decisions and downstream fine-tuning efforts -- under various openness regulations and present a range of effective regulatory penalties and open-source thresholds. Overall, we find the model's baseline performance determines when increasing the regulatory penalty vs. the open-source threshold will significantly alter the generalist's release strategy. Our model provides a theoretical foundation for AI governance decisions around openness and enables evaluation and refinement of practical open-source policies.
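The abstract describes a stylized interaction in which a regulator sets an open-source threshold and a penalty, and the generalist chooses how open a release to make. The paper's actual functional forms are not given here, so the following is only an illustrative sketch under assumed linear payoffs: openness erodes revenue at a fixed rate, and a flat penalty applies whenever the release falls below the regulator's threshold.

```python
# Illustrative sketch only -- not the paper's actual model specification.
# Assumptions: openness level w in [0, 1]; linear revenue loss from
# openness; a flat penalty p whenever the release is below the
# regulator's open-source threshold t.

def generalist_payoff(w, t, p, base_revenue=1.0, openness_cost=0.6):
    """Generalist's payoff from releasing at openness w under regulation (t, p)."""
    revenue = base_revenue - openness_cost * w   # assumed: openness erodes revenue
    penalty = p if w < t else 0.0                # penalty applies only below threshold
    return revenue - penalty

def best_release(t, p, grid=101):
    """Grid-search the generalist's payoff-maximizing openness level."""
    candidates = [i / (grid - 1) for i in range(grid)]
    return max(candidates, key=lambda w: generalist_payoff(w, t, p))

# A weak penalty leaves the generalist fully closed; once the penalty
# exceeds the cost of opening up to the threshold, the release strategy
# flips to meeting the threshold exactly.
print(best_release(t=0.5, p=0.1))  # 0.0 -- stays closed, penalty is cheaper
print(best_release(t=0.5, p=0.5))  # 0.5 -- opens exactly to the threshold
```

The flip point illustrates the abstract's qualitative finding: whether tightening the penalty or the threshold changes the release strategy depends on how costly openness is relative to the penalty, which in the paper is governed by the model's baseline performance.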
Related papers
- ORPR: An OR-Guided Pretrain-then-Reinforce Learning Model for Inventory Management [9.138155308817215]
The "Pretrain-then-Reinforce" approach reconciles AI's adaptive perception with Operations Research's structural rigor. We show that a lightweight, domain-informed model can deliver state-of-the-art performance and robust transferability when guided by structured OR logic.
arXiv Detail & Related papers (2025-12-22T03:39:43Z) - Bridging VLMs and Embodied Intelligence with Deliberate Practice Policy Optimization [72.20212909644017]
Deliberate Practice Policy Optimization (DPPO) is a metacognitive "Metaloop" training framework. DPPO alternates between supervised fine-tuning (competence expansion) and reinforcement learning (skill refinement). Empirically, training a vision-language embodied model with DPPO, referred to as Pelican-VL 1.0, yields a 20.3% performance improvement over the base model. We are open-sourcing both the models and code, providing the first systematic framework that alleviates the data and resource bottleneck.
arXiv Detail & Related papers (2025-11-20T17:58:04Z) - Open Shouldn't Mean Exempt: Open-Source Exceptionalism and Generative AI [1.8256490853231881]
The paper critically examines prevalent justifications for "open-source exceptionalism". The conclusion is that open-source developers must be held to the same legal and ethical standards as all other actors in the technological ecosystem.
arXiv Detail & Related papers (2025-10-16T18:21:06Z) - Synergistic Weak-Strong Collaboration by Aligning Preferences [53.47675666475273]
Current Large Language Models (LLMs) excel in general reasoning yet struggle with specialized tasks requiring proprietary or domain-specific knowledge. We propose a collaborative framework that pairs a specialized weak model with a general strong model. We find that the collaboration significantly outperforms each model alone by leveraging complementary strengths.
arXiv Detail & Related papers (2025-04-21T15:57:33Z) - Is Open Source the Future of AI? A Data-Driven Approach [41.94295877935867]
Large Language Models (LLMs) have become central in academia and industry. A key issue is the trustworthiness of proprietary models, with open-sourcing often proposed as a solution. Open-sourcing presents challenges, including potential misuse, financial disincentives, and intellectual property concerns.
arXiv Detail & Related papers (2025-01-27T09:03:49Z) - The Open Source Advantage in Large Language Models (LLMs) [0.0]
Large language models (LLMs) have rapidly advanced natural language processing, driving significant breakthroughs in tasks such as text generation, machine translation, and domain-specific reasoning. The field now faces a critical dilemma in its approach: closed-source models like GPT-4 deliver state-of-the-art performance but restrict accessibility and external oversight. Open-source frameworks like LLaMA and Mixtral democratize access, foster collaboration, and support diverse applications, achieving competitive results through techniques like instruction tuning and LoRA.
arXiv Detail & Related papers (2024-12-16T17:32:11Z) - OML: A Primitive for Reconciling Open Access with Owner Control in AI Model Distribution [35.68672391812135]
We introduce OML, a primitive that enables a new distribution paradigm for AI models. OML can be freely distributed for local execution while maintaining cryptographically enforced usage authorization. This work opens a new research direction at the intersection of cryptography, machine learning, and mechanism design.
arXiv Detail & Related papers (2024-11-01T18:46:03Z) - PRISM: A Design Framework for Open-Source Foundation Model Safety [0.0]
This paper addresses the question of how open foundation model developers should approach model safety.
We introduce PRISM, a design framework for open-source foundation model safety that emphasizes Private, Robust, Independent Safety measures.
PRISM aims to create a safer open-source ecosystem that maximizes the potential of these powerful technologies while minimizing the risks to individuals and society as a whole.
arXiv Detail & Related papers (2024-06-14T21:26:15Z) - Towards a Framework for Openness in Foundation Models: Proceedings from the Columbia Convening on Openness in Artificial Intelligence [18.130525337375985]
This paper presents a framework for grappling with openness across the AI stack.
It summarizes previous work on this topic, analyzes the various potential reasons to pursue openness.
It outlines how openness varies in different parts of the AI stack, both at the model and at the system level.
arXiv Detail & Related papers (2024-05-17T20:35:39Z) - The Model Openness Framework: Promoting Completeness and Openness for Reproducibility, Transparency, and Usability in Artificial Intelligence [0.0]
We introduce the Model Openness Framework (MOF), a three-tiered ranked classification system that rates machine learning models based on their completeness and openness.
For each MOF class, we specify code, data, and documentation components of the model development lifecycle that must be released and under which open licenses.
In addition, the Model Openness Tool (MOT) provides a user-friendly reference implementation to evaluate the openness and completeness of models against the MOF classification system.
arXiv Detail & Related papers (2024-03-20T17:47:08Z) - On the Societal Impact of Open Foundation Models [93.67389739906561]
We focus on open foundation models, defined here as those with broadly available model weights.
We identify five distinctive properties of open foundation models that lead to both their benefits and risks.
arXiv Detail & Related papers (2024-02-27T16:49:53Z) - Open-Sourcing Highly Capable Foundation Models: An evaluation of risks, benefits, and alternative methods for pursuing open-source objectives [6.575445633821399]
Recent decisions by leading AI labs either to open-source their models or to restrict access to their models have sparked debate.
This paper offers an examination of the risks and benefits of open-sourcing highly capable foundation models.
arXiv Detail & Related papers (2023-09-29T17:03:45Z) - Dual policy as self-model for planning [71.73710074424511]
We refer to the model used to simulate one's decisions as the agent's self-model.
Inspired by current reinforcement learning approaches and neuroscience, we explore the benefits and limitations of using a distilled policy network as the self-model.
arXiv Detail & Related papers (2023-06-07T13:58:45Z) - When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have value in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z) - Control as Hybrid Inference [62.997667081978825]
We present an implementation of CHI which naturally mediates the balance between iterative and amortised inference.
We verify the scalability of our algorithm on a continuous control benchmark, demonstrating that it outperforms strong model-free and model-based baselines.
arXiv Detail & Related papers (2020-07-11T19:44:09Z) - Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.