Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries
- URL: http://arxiv.org/abs/2311.12573v3
- Date: Wed, 11 Sep 2024 16:52:44 GMT
- Title: Moderating Model Marketplaces: Platform Governance Puzzles for AI Intermediaries
- Authors: Robert Gorwa, Michael Veale
- Abstract summary: Hosting intermediaries such as Hugging Face provide easy access to user-uploaded models and training data.
These model marketplaces lower technical deployment barriers for hundreds of thousands of users, yet can be used in numerous potentially harmful and illegal ways.
- Score: 1.5346678870160886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The AI development community is increasingly making use of hosting intermediaries such as Hugging Face, which provide easy access to user-uploaded models and training data. These model marketplaces lower technical deployment barriers for hundreds of thousands of users, yet can be used in numerous potentially harmful and illegal ways. In this article, we explain ways in which AI systems, which can both 'contain' content and be open-ended tools, present one of the trickiest platform governance challenges seen to date. We provide case studies of several incidents across three illustrative platforms (Hugging Face, GitHub and Civitai) to examine how model marketplaces moderate models. Building on this analysis, we outline important (yet still limited) practices that industry has been developing to respond to moderation demands: licensing, access and use restrictions, automated content moderation, and open policy development. While the policy challenge at hand is considerable, we conclude with some ideas as to how platforms could better mobilize resources to act as a careful, fair, and proportionate regulatory access point.
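The practices named in the abstract can be pictured as an upload-time pipeline. Below is a minimal sketch assuming a hypothetical marketplace API: every name in it (ModelUpload, LICENSE_ALLOWLIST, BANNED_TERMS, moderate) is invented for illustration, and the checks merely stand in for the licensing, access-restriction, and automated-moderation practices the article discusses.

```python
from dataclasses import dataclass, field

# Hypothetical policy data; real platforms maintain far richer taxonomies.
LICENSE_ALLOWLIST = {"apache-2.0", "mit", "openrail"}
BANNED_TERMS = {"non-consensual", "undress"}  # model-card phrases that trigger review

@dataclass
class ModelUpload:
    repo_id: str
    license: str | None
    model_card: str
    gated: bool = False                        # access restriction: approval required
    flags: list[str] = field(default_factory=list)

def moderate(upload: ModelUpload) -> str:
    """Route a new upload to 'approve' or 'review' (humans make the final call)."""
    if upload.license is None or upload.license.lower() not in LICENSE_ALLOWLIST:
        upload.flags.append("license-check")
    for term in BANNED_TERMS:
        if term in upload.model_card.lower():
            upload.flags.append(f"term:{term}")
    if upload.flags and not upload.gated:
        return "review"                        # flagged and ungated: escalate
    return "approve"

# An unlicensed, ungated upload is escalated rather than silently published.
print(moderate(ModelUpload("alice/unet-v1", None, "A general diffusion model.")))
```

This sketches only the plumbing; the article's point is that the hard part is the policy layer that decides what counts as a flag and who reviews it.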
Related papers
- Governing AI Agents [0.2913760942403036]
The article looks at the economic theory of principal-agent problems and the common law doctrine of agency relationships.
It identifies problems arising from AI agents, including issues of information asymmetry, discretionary authority, and loyalty.
It argues that new technical and legal infrastructure is needed to support governance principles of inclusivity, visibility, and liability.
arXiv Detail & Related papers (2025-01-14T07:55:18Z)
- Fundamental Risks in the Current Deployment of General-Purpose AI Models: What Have We (Not) Learnt From Cybersecurity? [60.629883024152576]
Large Language Models (LLMs) have seen rapid deployment in a wide range of use cases.
Systems such as those from OpenAI and Altera are just a few examples of increased autonomy, data access, and execution capabilities.
These capabilities come with a range of cybersecurity challenges.
arXiv Detail & Related papers (2024-12-19T14:44:41Z)
- Protocol Learning, Decentralized Frontier Risk and the No-Off Problem [56.74434512241989]
We identify a third paradigm, Protocol Learning, in which models are trained across decentralized networks of incentivized participants.
This approach has the potential to aggregate orders of magnitude more computational resources than any single centralized entity.
It also introduces novel challenges: heterogeneous and unreliable nodes, malicious participants, the need for unextractable models to preserve incentives, and complex governance dynamics.
arXiv Detail & Related papers (2024-12-10T19:53:50Z)
- Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy, Research, and Practice [186.055899073629]
Unlearning is often invoked as a solution for removing the effects of targeted information from a generative-AI model.
Unlearning is also proposed as a way to prevent a model from generating targeted types of information in its outputs.
Both of these goals, the targeted removal of information from a model and the targeted suppression of information in a model's outputs, present various technical and substantive challenges.
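The gap between the two goals is visible even in a toy setting. The following sketch is invented for illustration (not from the paper): the "model" is just a lookup table, an output filter achieves the suppression goal, and the removal goal is untouched because the information still sits inside the model.

```python
# Toy contrast: suppressing information in outputs vs. removing it from the model.
model_memory = {"prompt-1": "Alice's phone number is 555-0199"}  # memorized data
FORGET = "555-0199"

def generate(prompt: str) -> str:
    return model_memory.get(prompt, "I don't know.")

def generate_with_filter(prompt: str) -> str:
    out = generate(prompt)                      # goal 2: filter the output only
    return "[redacted]" if FORGET in out else out

print(generate_with_filter("prompt-1"))         # -> [redacted]
print(FORGET in model_memory["prompt-1"])       # -> True: goal 1 is not achieved
```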
arXiv Detail & Related papers (2024-12-09T20:18:43Z)
- A Novel Access Control and Privacy-Enhancing Approach for Models in Edge Computing [0.26107298043931193]
We propose a novel model access control method tailored for edge computing environments.
This method leverages image style as a licensing mechanism, embedding style recognition into the model's operational framework.
By restricting the input data to the edge model, this approach not only prevents attackers from gaining unauthorized access to the model but also enhances the privacy of data on terminal devices.
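As a rough illustration of the mechanism (invented here, not the authors' code), an edge deployment could refuse inference unless a lightweight style check recognizes the licensed input style; the embedding, the licensed-style vector, and the 0.9 threshold below are all assumptions.

```python
import numpy as np

LICENSED_STYLE = np.array([0.2, 0.7, 0.1])    # stand-in embedding of the licensed style

def style_embedding(image: np.ndarray) -> np.ndarray:
    # Stand-in for a small style-recognition network: a normalized intensity histogram.
    hist, _ = np.histogram(image, bins=3, range=(0.0, 1.0), density=True)
    return hist / hist.sum()

def gated_predict(image: np.ndarray) -> str:
    emb = style_embedding(image)
    sim = float(np.dot(emb, LICENSED_STYLE) /
                (np.linalg.norm(emb) * np.linalg.norm(LICENSED_STYLE)))
    if sim < 0.9:
        return "access denied"                # unlicensed input style: refuse to serve
    return "prediction"                       # stand-in for the protected model's output

print(gated_predict(np.random.rand(32, 32)))  # random input: almost surely denied
```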
arXiv Detail & Related papers (2024-11-06T11:37:30Z)
- KModels: Unlocking AI for Business Applications [10.833754921830154]
This paper presents the architecture of KModels and the key decisions that shape it.
KModels lets AI consumers adopt models without needing a dedicated data scientist.
It is highly suited for on-premise deployment but can also be used in cloud environments.
arXiv Detail & Related papers (2024-09-08T13:19:12Z)
- Knowledge-Aware Parsimony Learning: A Perspective from Relational Graphs [47.6830995661091]
We develop next-generation models in a parsimonious manner, achieving greater potential with simpler models.
The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, rather than relying on scaling laws.
This approach allows us to build a framework that uses this knowledge as "building blocks" to achieve parsimony in model design, training, and interpretation.
arXiv Detail & Related papers (2024-06-29T15:52:37Z)
- On the Challenges and Opportunities in Generative AI [135.2754367149689]
We argue that current large-scale generative AI models do not sufficiently address several fundamental issues that hinder their widespread adoption across domains.
In this work, we aim to identify key unresolved challenges in modern generative AI paradigms that should be tackled to further enhance their capabilities, versatility, and reliability.
arXiv Detail & Related papers (2024-02-28T15:19:33Z)
- Foundation Models for Decision Making: Problems, Methods, and Opportunities [124.79381732197649]
Foundation models pretrained on diverse data at scale have demonstrated extraordinary capabilities in a wide range of vision and language tasks.
New paradigms are emerging for training foundation models to interact with other agents and perform long-term reasoning.
Research at the intersection of foundation models and decision making holds tremendous promise for creating powerful new systems.
arXiv Detail & Related papers (2023-03-07T18:44:07Z)
- Envisioning a Human-AI collaborative system to transform policies into decision models [7.9231719294492065]
We explore the enormous potential of AI to assist government agencies and policy experts in scaling the production of both human-readable and machine-executable policy rules.
Despite the many open domain challenges, we present an initial emerging approach to shorten the route from policy documents to executable, interpretable and standardised decision models using AI, NLP and Knowledge Graphs.
arXiv Detail & Related papers (2022-11-01T18:29:48Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- INTERN: A New Learning Paradigm Towards General Vision [117.3343347061931]
We develop a new learning paradigm named INTERN.
By learning with supervisory signals from multiple sources in multiple stages, the model being trained will develop strong generalizability.
In most cases, our models, adapted with only 10% of the training data in the target domain, outperform the counterparts trained with the full set of data.
arXiv Detail & Related papers (2021-11-16T18:42:50Z)
- AI in Smart Cities: Challenges and approaches to enable road vehicle automation and smart traffic control [56.73750387509709]
The smart city concept (SCC) envisions a data-centered society that aims to improve efficiency by automating and optimizing activities and utilities.
This paper describes AI perspectives in SCC and gives an overview of AI-based technologies used in traffic to enable road vehicle automation and smart traffic control.
arXiv Detail & Related papers (2021-04-07T14:31:08Z)
- An Automatic Attribute Based Access Control Policy Extraction from Access Logs [5.142415132534397]
An attribute-based access control (ABAC) model provides a more flexible approach for addressing the authorization needs of complex and dynamic systems.
We present a methodology for automatically learning ABAC policy rules from access logs of a system to simplify the policy development process.
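A toy version of the idea (invented here; the paper's actual mining algorithm is more involved): group log entries by their attribute values and keep only the attribute patterns that are always permitted, turning them into candidate ABAC rules.

```python
from collections import defaultdict

# Toy ABAC rule mining from an access log.
access_log = [
    {"role": "nurse",  "resource": "chart",   "decision": "permit"},
    {"role": "nurse",  "resource": "chart",   "decision": "permit"},
    {"role": "nurse",  "resource": "billing", "decision": "deny"},
    {"role": "doctor", "resource": "chart",   "decision": "permit"},
]

# Collect every decision observed for each (role, resource) attribute pattern.
outcomes = defaultdict(set)
for entry in access_log:
    outcomes[(entry["role"], entry["resource"])].add(entry["decision"])

# Keep patterns that were permitted and never denied as candidate rules.
for (role, resource), decisions in outcomes.items():
    if decisions == {"permit"}:
        print(f"PERMIT if role == {role!r} and resource == {resource!r}")
# -> PERMIT if role == 'nurse' and resource == 'chart'
#    PERMIT if role == 'doctor' and resource == 'chart'
```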
arXiv Detail & Related papers (2020-03-16T15:08:54Z)
- Marketplace for AI Models [20.986472832797777]
We sketch guidelines for a new AI diffusion method based on a decentralized online marketplace.
We consider the technical, economic, and regulatory aspects of such a marketplace.
We find that most existing marketplaces are centralized commercial platforms with relatively few models.
arXiv Detail & Related papers (2020-03-03T15:27:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.