On the relevance of APIs facing fairwashed audits
- URL: http://arxiv.org/abs/2305.13883v1
- Date: Tue, 23 May 2023 10:06:22 GMT
- Title: On the relevance of APIs facing fairwashed audits
- Authors: Jade Garcia Bourrée, Erwan Le Merrer, Gilles Tredan and Benoît
Rottembourg
- Abstract summary: Recent legislation required AI platforms to provide APIs for regulators to assess their compliance with the law.
Research has shown that platforms can manipulate their API answers through fairwashing.
This paper studies the benefits of the joint use of platform scraping and APIs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent legislation required AI platforms to provide APIs for regulators to
assess their compliance with the law. Research has nevertheless shown that
platforms can manipulate their API answers through fairwashing. Facing this
threat to reliable auditing, this paper studies the benefits of the joint use
of platform scraping and APIs. In this setup, we elaborate on the use of
scraping to detect manipulated answers: since fairwashing only manipulates API
answers, comparing them against scraped data may reveal a manipulation. To
abstract the wide range of specific API-scraping situations, we introduce a
notion of proxy that
captures the consistency an auditor might expect between both data sources. If
the regulator has a good proxy of the consistency, then she can easily detect
manipulation and even bypass the API to conduct her audit. On the other hand,
without a good proxy, relying on the API is necessary, and the auditor cannot
defend against fairwashing.
We then simulate practical scenarios in which the auditor may mostly rely on
the API to conveniently conduct the audit task, while maintaining her chances
to detect a potential manipulation. To highlight the tension between the audit
task and the API fairwashing detection task, we identify Pareto-optimal
strategies in a practical audit scenario.
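The Pareto-optimal strategies mentioned above can be illustrated with a minimal sketch. The function, strategy names, and the two objective scores (audit accuracy and fairwashing-detection probability) below are illustrative assumptions, not taken from the paper:

```python
# Illustrative sketch: keep only auditor strategies that are not dominated
# on the two competing objectives (audit accuracy, detection probability).
def pareto_front(strategies):
    """Return names of strategies for which no other strategy is at least
    as good on both objectives and strictly better on one."""
    front = []
    for name, acc, det in strategies:
        dominated = any(
            (a >= acc and d >= det) and (a > acc or d > det)
            for _, a, d in strategies
        )
        if not dominated:
            front.append(name)
    return front

strategies = [
    ("api_only",    0.95, 0.10),  # convenient audit, weak detection
    ("scrape_only", 0.60, 0.90),  # costly audit, strong detection
    ("mixed",       0.85, 0.70),  # trades a little accuracy for detection
    ("naive",       0.50, 0.05),  # dominated by every other strategy
]
print(pareto_front(strategies))  # -> ['api_only', 'scrape_only', 'mixed']
```

The tension described in the abstract shows up directly: no single surviving strategy maximizes both objectives at once.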
We believe this research sets the stage for reliable audits in practical and
manipulation-prone setups.
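The manipulation-detection idea in the abstract, cross-checking API answers against scraped observations through a proxy, can be sketched as follows. All names (`detect_manipulation`, `proxy`, the query dictionaries) and the numeric tolerance are hypothetical, introduced here only for illustration:

```python
# Hypothetical sketch of a proxy-based consistency check between the two
# data sources: the platform's API answers and scraped observations.
def detect_manipulation(api_answers, scraped_answers, proxy, tolerance=0.0):
    """Flag queries where the API answer is inconsistent with what the
    proxy predicts from the scraped observation."""
    flagged = []
    for query, answer in api_answers.items():
        if query not in scraped_answers:
            continue  # no scraped counterpart: cannot cross-check
        expected = proxy(scraped_answers[query])
        if abs(expected - answer) > tolerance:
            flagged.append(query)
    return flagged

# Toy usage: with an identity proxy (perfect expected consistency), any
# API answer diverging from the scraped value is flagged as suspicious.
api = {"q1": 1.0, "q2": 0.2, "q3": 0.9}
scraped = {"q1": 1.0, "q2": 0.8, "q3": 0.9}
print(detect_manipulation(api, scraped, proxy=lambda x: x, tolerance=0.05))
# -> ['q2']
```

This mirrors the paper's framing: the quality of the proxy determines whether such divergences are detectable at all; with a poor proxy the auditor cannot distinguish manipulation from expected inconsistency.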
Related papers
- Mining REST APIs for Potential Mass Assignment Vulnerabilities [1.0377683220196872]
We propose a lightweight approach to mine the REST API specifications and identify operations and attributes that are prone to mass assignment.
We conducted a preliminary study on 100 APIs and found 25 prone to this vulnerability.
We confirmed nine real vulnerable operations in six APIs.
arXiv Detail & Related papers (2024-05-02T09:19:32Z)
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Under manipulations, are some AI models harder to audit? [2.699900017799093]
We study the feasibility of robust audits in realistic settings, in which models exhibit large capacities.
We first prove a constraining result: if a web platform uses models that may fit any data, no audit strategy can outperform random sampling.
We then relate the manipulability of audits to the capacity of the targeted models, using the Rademacher complexity.
arXiv Detail & Related papers (2024-02-14T09:38:09Z)
- On the Detection of Reviewer-Author Collusion Rings From Paper Bidding [71.43634536456844]
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solve this problem would be to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z)
- Exploring Behaviours of RESTful APIs in an Industrial Setting [0.43012765978447565]
We propose a set of behavioural properties, common to REST APIs, which are used to generate examples of behaviours that these APIs exhibit.
These examples can be used both (i) to further the understanding of the API and (ii) as a source of automatic test cases.
Our approach can generate examples deemed relevant for understanding the system and usable as a source of test generation by practitioners.
arXiv Detail & Related papers (2023-10-26T11:33:11Z)
- Exploring API Behaviours Through Generated Examples [0.768721532845575]
We present an approach to automatically generate relevant examples of behaviours of an API.
Our method can produce small and relevant examples that can help engineers to understand the system under exploration.
arXiv Detail & Related papers (2023-08-29T11:05:52Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms only give tight privacy estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service [67.0982378001551]
We show how a service provider pre-trains an encoder and then deploys it as a cloud service API.
A client queries the cloud service API to obtain feature vectors for its training/testing inputs.
We show that the cloud service only needs to provide two APIs to enable a client to certify the robustness of its downstream classifier.
arXiv Detail & Related papers (2023-01-07T17:40:11Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Algorithmic audits of algorithms, and the law [3.9103337761169943]
We focus on external audits that are conducted by interacting with the user side of the target algorithm.
The legal framework in which these audits take place is mostly ambiguous to researchers developing them.
This article highlights the relation of current audits with law, in order to structure the growing field of algorithm auditing.
arXiv Detail & Related papers (2022-02-15T14:20:53Z)
- Auditing AI models for Verified Deployment under Semantic Specifications [65.12401653917838]
AuditAI bridges the gap between interpretable formal verification and scalability.
We show how AuditAI allows us to obtain controlled variations for verification and certified training while addressing the limitations of verifying using only pixel-space perturbations.
arXiv Detail & Related papers (2021-09-25T22:53:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.