The 2025 Foundation Model Transparency Index
- URL: http://arxiv.org/abs/2512.10169v1
- Date: Thu, 11 Dec 2025 00:01:53 GMT
- Title: The 2025 Foundation Model Transparency Index
- Authors: Alexander Wan, Kevin Klyman, Sayash Kapoor, Nestor Maslej, Shayne Longpre, Betty Xiong, Percy Liang, Rishi Bommasani
- Abstract summary: Foundation model developers are among the world's most important companies. As these companies become increasingly consequential, how do their transparency practices evolve? The 2025 Foundation Model Transparency Index is the third edition of an annual effort to characterize and quantify the transparency of foundation model developers.
- Score: 85.01250666533294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation model developers are among the world's most important companies. As these companies become increasingly consequential, how do their transparency practices evolve? The 2025 Foundation Model Transparency Index is the third edition of an annual effort to characterize and quantify the transparency of foundation model developers. The 2025 FMTI introduces new indicators related to data acquisition, usage data, and monitoring and evaluates companies like Alibaba, DeepSeek, and xAI for the first time. The 2024 FMTI reported that transparency was improving, but the 2025 FMTI finds this progress has deteriorated: the average score out of 100 fell from 58 in 2024 to 40 in 2025. Companies are most opaque about their training data and training compute as well as the post-deployment usage and impact of their flagship models. In spite of this general trend, IBM stands out as a positive outlier, scoring 95, in contrast to the lowest scorers, xAI and Midjourney, at just 14. The five members of the Frontier Model Forum we score end up in the middle of the Index: we posit that these companies avoid reputational harms from low scores but lack incentives to be transparency leaders. As policymakers around the world increasingly mandate certain types of transparency, this work reveals the current state of transparency for foundation model developers, how it may change given newly enacted policy, and where more aggressive policy interventions are necessary to address critical information deficits.
Related papers
- Economies of Open Intelligence: Tracing Power & Participation in the Model Ecosystem [21.595922367237815]
Hugging Face Model Hub has been the primary global platform for sharing open-weight AI models. Our analysis spans 851,000 models, over 200 aggregated attributes per model, and 2.2B downloads.
arXiv Detail & Related papers (2025-11-27T12:50:25Z)
- The 2024 Foundation Model Transparency Index [54.78174872757794]
The Foundation Model Transparency Index (FMTI) was launched in October 2023 to measure the transparency of leading foundation model developers.
FMTI 2023 assessed 10 major foundation model developers on 100 transparency indicators.
We conduct a follow-up study after 6 months: we score 14 developers against the same 100 indicators.
While in FMTI 2023 we searched for publicly available information, in FMTI 2024 developers submit reports on the 100 transparency indicators.
We find that developers now score 58 out of 100 on average, a 21-point improvement over FMTI 2023.
arXiv Detail & Related papers (2024-07-17T18:03:37Z)
- Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database [6.070078201123852]
The Digital Services Act (DSA) was adopted on 1 November 2022 with the ambition to set a global example in terms of accountability and transparency.
The DSA emphasizes the need for online platforms to report on their content moderation decisions ('statements of reasons' - SoRs).
SoRs are currently made available in the DSA Transparency Database, launched by the European Commission in September 2023.
This study aims to understand whether the Transparency Database helps the DSA to live up to its transparency promises.
arXiv Detail & Related papers (2024-04-03T17:51:20Z)
- Foundation Model Transparency Reports [61.313836337206894]
We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
arXiv Detail & Related papers (2024-02-26T03:09:06Z)
- The Foundation Model Transparency Index [55.862805799199194]
The Foundation Model Transparency Index specifies 100 indicators that codify transparency for foundation models.
We score developers in relation to their practices for their flagship foundation model.
Overall, the Index establishes the level of transparency today to drive progress on foundation model governance.
arXiv Detail & Related papers (2023-10-19T17:39:02Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.