Foundation Model Transparency Reports
- URL: http://arxiv.org/abs/2402.16268v1
- Date: Mon, 26 Feb 2024 03:09:06 GMT
- Title: Foundation Model Transparency Reports
- Authors: Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash
Kapoor, Nestor Maslej, Arvind Narayanan, Percy Liang
- Abstract summary: We propose Foundation Model Transparency Reports, drawing upon the transparency reporting practices in social media.
We identify 6 design principles given the successes and shortcomings of social media transparency reporting.
Well-designed transparency reports could reduce compliance costs, in part due to overlapping regulatory requirements across different jurisdictions.
- Score: 61.313836337206894
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Foundation models are critical digital technologies with sweeping societal
impact that necessitates transparency. To codify how foundation model
developers should provide transparency about the development and deployment of
their models, we propose Foundation Model Transparency Reports, drawing upon
the transparency reporting practices in social media. While external
documentation of societal harms prompted social media transparency reports, our
objective is to institutionalize transparency reporting for foundation models
while the industry is still nascent. To design our reports, we identify 6
design principles given the successes and shortcomings of social media
transparency reporting. To further schematize our reports, we draw upon the 100
transparency indicators from the Foundation Model Transparency Index. Given
these indicators, we measure the extent to which they overlap with the
transparency requirements included in six prominent government policies (e.g.,
the EU AI Act, the US Executive Order on Safe, Secure, and Trustworthy AI).
Well-designed transparency reports could reduce compliance costs, in part due
to overlapping regulatory requirements across different jurisdictions. We
encourage foundation model developers to regularly publish transparency
reports, building upon recommendations from the G7 and the White House.
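The abstract's overlap measurement is, at heart, a set computation: each policy's transparency requirements map onto some subset of the Index's 100 indicators. The sketch below illustrates that idea with hypothetical indicator names and policy-to-indicator mappings (placeholders, not the paper's actual data):

```python
# Minimal sketch of the overlap measurement described in the abstract.
# Indicator names and policy-to-indicator mappings are hypothetical
# placeholders, not the paper's actual data.

ALL_INDICATORS = {f"indicator_{i:03d}" for i in range(1, 101)}  # the 100 FMTI indicators

# Hypothetical mapping: each policy -> the indicators its transparency
# requirements correspond to.
policy_coverage = {
    "EU AI Act": {"indicator_001", "indicator_004", "indicator_017"},
    "US Executive Order": {"indicator_004", "indicator_023"},
    # ...the remaining four policies would be mapped the same way.
}

# Per-policy overlap with the full indicator set.
for policy, covered in policy_coverage.items():
    print(f"{policy}: {len(covered)}/{len(ALL_INDICATORS)} indicators overlap")

# Indicators required by two or more policies mark where a single
# transparency report could satisfy multiple jurisdictions at once
# (the compliance-cost argument in the abstract).
union = set().union(*policy_coverage.values())
multi = {i for i in union if sum(i in c for c in policy_coverage.values()) > 1}
print(f"Covered by any policy: {len(union)}; by 2+ policies: {len(multi)}")
```

Indicators that recur across policies are exactly where one well-designed report amortizes compliance effort across jurisdictions.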
Related papers
- The Foundation Model Transparency Index v1.1: May 2024 [54.78174872757794]
The October 2023 Index assessed 10 major foundation model developers on 100 transparency indicators.
At the time, developers publicly disclosed very limited information, with the average score being 37 out of 100.
We find that developers now score 58 out of 100 on average, a 21-point improvement over v1.0.
arXiv Detail & Related papers (2024-07-17T18:03:37Z)
- Automated Transparency: A Legal and Empirical Analysis of the Digital Services Act Transparency Database [6.070078201123852]
The Digital Services Act (DSA) was adopted on 1 November 2022 with the ambition to set a global example in terms of accountability and transparency.
The DSA emphasizes the need for online platforms to report on their content moderation decisions ('statements of reasons', SoRs).
SoRs are currently made available in the DSA Transparency Database, launched by the European Commission in September 2023.
This study aims to understand whether the Transparency Database helps the DSA to live up to its transparency promises.
arXiv Detail & Related papers (2024-04-03T17:51:20Z)
- The Foundation Model Transparency Index [55.862805799199194]
The Foundation Model Transparency Index specifies 100 indicators that codify transparency for foundation models.
We score developers on their transparency practices for their flagship foundation model.
Overall, the Index establishes the level of transparency today to drive progress on foundation model governance.
arXiv Detail & Related papers (2023-10-19T17:39:02Z)
- A design theory for transparency of information privacy practices [0.0]
The rising diffusion of information systems poses an increasingly serious threat to privacy as a social value.
One approach to alleviating this threat is to establish transparency of information privacy practices (TIPP) so that consumers can better understand how their information is processed.
We develop a theoretical foundation (TIPP theory) for transparency artifact designs useful for establishing TIPP from the perspective of privacy as a social value.
arXiv Detail & Related papers (2023-07-05T21:39:38Z)
- Users are the North Star for AI Transparency [111.5679109784322]
Despite widespread calls for transparent artificial intelligence systems, the term is too overburdened with disparate meanings to express precise policy aims or to orient concrete lines of research.
Part of why this happens is that a clear ideal of AI transparency goes unsaid in this body of work.
We explicitly name such a north star -- transparency that is user-centered, user-appropriate, and honest.
arXiv Detail & Related papers (2023-03-09T18:53:29Z)
- Bridging the Transparency Gap: What Can Explainable AI Learn From the AI Act? [0.8287206589886881]
The European Union has introduced detailed transparency requirements for AI systems.
There is a fundamental difference between XAI and the Act regarding what transparency is.
By comparing the disparate views of XAI and regulation, we arrive at four axes where practical work could bridge the transparency gap.
arXiv Detail & Related papers (2023-02-21T16:06:48Z)
- Trustworthy Transparency by Design [57.67333075002697]
We propose a transparency framework for software design, incorporating research on user trust and experience.
Our framework enables developing software that incorporates transparency in its design.
arXiv Detail & Related papers (2021-03-19T12:34:01Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.