The 2024 Foundation Model Transparency Index
- URL: http://arxiv.org/abs/2407.12929v2
- Date: Tue, 04 Mar 2025 21:07:55 GMT
- Title: The 2024 Foundation Model Transparency Index
- Authors: Rishi Bommasani, Kevin Klyman, Sayash Kapoor, Shayne Longpre, Betty Xiong, Nestor Maslej, Percy Liang
- Abstract summary: The Foundation Model Transparency Index (FMTI) was launched in October 2023 to measure the transparency of leading foundation model developers. FMTI 2023 assessed 10 major foundation model developers on 100 transparency indicators. We conduct a follow-up study after 6 months: we score 14 developers against the same 100 indicators. While in FMTI 2023 we searched for publicly available information, in FMTI 2024 developers submit reports on the 100 transparency indicators. We find that developers now score 58 out of 100 on average, a 21 point improvement over FMTI 2023.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation models are increasingly consequential yet extremely opaque. To characterize the status quo, the Foundation Model Transparency Index (FMTI) was launched in October 2023 to measure the transparency of leading foundation model developers. FMTI 2023 assessed 10 major foundation model developers (e.g. OpenAI, Google) on 100 transparency indicators (e.g. does the developer disclose the wages it pays for data labor?). At the time, developers publicly disclosed very limited information with the average score being 37 out of 100. To understand how the status quo has changed, we conduct a follow-up study after 6 months: we score 14 developers against the same 100 indicators. While in FMTI 2023 we searched for publicly available information, in FMTI 2024 developers submit reports on the 100 transparency indicators, potentially including information that was not previously public. We find that developers now score 58 out of 100 on average, a 21 point improvement over FMTI 2023. Much of this increase is driven by developers disclosing information during the FMTI 2024 process: on average, developers disclosed information related to 16.6 indicators that was not previously public. We observe regions of sustained (i.e. across 2023 and 2024) and systemic (i.e. across most or all developers) opacity such as on copyright status, data access, data labor, and downstream impact. We publish transparency reports for each developer that consolidate information disclosures: these reports are based on the information disclosed to us via developers. Our findings demonstrate that transparency can be improved in this nascent ecosystem, the Foundation Model Transparency Index likely contributes to these improvements, and policymakers should consider interventions in areas where transparency has not improved.
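The scoring scheme the abstract describes (100 transparency indicators per developer, with the headline figure being the mean score across developers) can be sketched in a few lines. The developer names and indicator values below are illustrative assumptions, not the actual FMTI assessments.

```python
# Illustrative sketch of FMTI-style scoring: each developer is assessed on
# 100 binary transparency indicators; a developer's score is the number of
# indicators satisfied, and the headline figure is the mean across developers.
# All assessments below are made up for illustration.

NUM_INDICATORS = 100

def score_developer(indicators: list[bool]) -> int:
    """Count satisfied indicators (one point each, max 100)."""
    assert len(indicators) == NUM_INDICATORS
    return sum(indicators)

def average_score(scores: dict[str, int]) -> float:
    """Mean score across all assessed developers."""
    return sum(scores.values()) / len(scores)

# Hypothetical assessments: True means the indicator is satisfied.
assessments = {
    "DeveloperA": [True] * 58 + [False] * 42,
    "DeveloperB": [True] * 64 + [False] * 36,
    "DeveloperC": [True] * 52 + [False] * 48,
}
scores = {name: score_developer(ind) for name, ind in assessments.items()}
print(scores)                 # {'DeveloperA': 58, 'DeveloperB': 64, 'DeveloperC': 52}
print(average_score(scores))  # 58.0
```

With these made-up inputs the mean works out to 58.0, matching the 58/100 average the paper reports for FMTI 2024, though the real index weights nothing beyond a one-point-per-indicator count as far as the abstract states.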
Related papers
- The 2025 Foundation Model Transparency Index [85.01250666533294]
Foundation model developers are among the world's most important companies. As these companies become increasingly consequential, how do their transparency practices evolve? The 2025 Foundation Model Transparency Index is the third edition of an annual effort to characterize and quantify the transparency of foundation model developers.
arXiv Detail & Related papers (2025-12-11T00:01:53Z)
- The AI Attribution Paradox: Transparency as Social Strategy in Open-Source Software Development [0.0]
We analyze 14,300 GitHub commits across 7,393 repositories from 2023-2025. We investigated attribution strategies and community responses across eight major AI tools. We find developers strategically balance acknowledging AI assistance with managing community scrutiny.
arXiv Detail & Related papers (2025-11-30T12:30:55Z)
- Seeking and Updating with Live Visual Knowledge [75.25025869244837]
We introduce LiveVQA, a first-of-its-kind dataset featuring 107,143 samples across 12 categories. LiveVQA enables evaluation of how models handle the latest visual information beyond their knowledge boundaries. Our comprehensive benchmarking of 17 state-of-the-art MLLMs reveals significant performance gaps on content beyond the knowledge cutoff.
arXiv Detail & Related papers (2025-04-07T17:39:31Z)
- AI data transparency: an exploration through the lens of AI incidents [2.255682336735152]
This research explores the status of public documentation about data practices within AI systems generating public concern.
We highlight a need to develop systematic ways of monitoring AI data transparency that account for the diversity of AI system types.
arXiv Detail & Related papers (2024-09-05T07:23:30Z)
- Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database [6.070078201123852]
The Digital Services Act (DSA) was adopted on 1 November 2022 with the ambition to set a global example in terms of accountability and transparency.
The DSA emphasizes the need for online platforms to report on their content moderation decisions ('statements of reasons', SoRs).
SoRs are currently made available in the DSA Transparency Database, launched by the European Commission in September 2023.
This study aims to understand whether the Transparency Database helps the DSA to live up to its transparency promises.
arXiv Detail & Related papers (2024-04-03T17:51:20Z)
- Foundation Model Transparency Reports [61.313836337206894]
We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
arXiv Detail & Related papers (2024-02-26T03:09:06Z)
- OWSM v3.1: Better and Faster Open Whisper-Style Speech Models based on E-Branchformer [67.75820725013372]
The Open Whisper-style Speech Model (OWSM) is an initial step towards reproducing OpenAI Whisper using public data and open-source toolkits.
We present a series of E-Branchformer-based models named OWSM v3.1, ranging from 100M to 1B parameters.
OWSM v3.1 outperforms its predecessor, OWSM v3, in most evaluation benchmarks, while showing an improved inference speed of up to 25%.
arXiv Detail & Related papers (2024-01-30T01:22:18Z)
- The Foundation Model Transparency Index [55.862805799199194]
The Foundation Model Transparency Index specifies 100 indicators that codify transparency for foundation models.
We score developers in relation to their practices for their flagship foundation model.
Overall, the Index establishes the level of transparency today to drive progress on foundation model governance.
arXiv Detail & Related papers (2023-10-19T17:39:02Z)
- Rethinking People Analytics With Inverse Transparency by Design [57.67333075002697]
We propose a new design approach for workforce analytics we refer to as inverse transparency by design.
We find that architectural changes are made without inhibiting core functionality.
We conclude that inverse transparency by design is a promising approach to realize accepted and responsible people analytics.
arXiv Detail & Related papers (2023-05-16T21:37:35Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Out-distribution aware Self-training in an Open World Setting [62.19882458285749]
We leverage unlabeled data in an open world setting to further improve prediction performance.
We introduce out-distribution aware self-training, which includes a careful sample selection strategy.
Our classifiers are by design out-distribution aware and can thus distinguish task-related inputs from unrelated ones.
arXiv Detail & Related papers (2020-12-21T12:25:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.