On the relevance of APIs facing fairwashed audits
- URL: http://arxiv.org/abs/2305.13883v1
- Date: Tue, 23 May 2023 10:06:22 GMT
- Title: On the relevance of APIs facing fairwashed audits
- Authors: Jade Garcia Bourrée, Erwan Le Merrer, Gilles Tredan and Benoît Rottembourg
- Abstract summary: Recent legislation requires AI platforms to provide APIs for regulators to assess their compliance with the law.
Research has shown that platforms can manipulate their API answers through fairwashing.
This paper studies the benefits of the joint use of platform scraping and APIs.
- Score: 3.479254848034425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent legislation requires AI platforms to provide APIs for
regulators to assess their compliance with the law. Research has nevertheless
shown that platforms can manipulate their API answers through fairwashing.
Facing this threat to reliable auditing, this paper studies the benefits of
jointly using platform scraping and APIs. In this setup, we elaborate on the
use of scraping to detect manipulated answers: since fairwashing only
manipulates API answers, exploiting scraped data may reveal a manipulation. To
abstract the wide range of specific API-scraping situations, we introduce a
notion of proxy that captures the consistency an auditor might expect between
both data sources. If the regulator has a good proxy of the consistency, then
she can easily detect manipulation and even bypass the API to conduct her
audit. Without a good proxy, however, relying on the API is necessary, and the
auditor cannot defend against fairwashing.
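To make the proxy idea concrete, here is a minimal sketch (not the paper's actual algorithm) of a consistency check between the two data sources: a proxy maps each scraped observation to the API answer the auditor would expect, and a disagreement rate above some tolerance flags a possible manipulation. All names and the tolerance value are illustrative.

```python
import numpy as np

def detect_fairwashing(api_answers, scraped_answers, proxy, tolerance=0.05):
    """Flag a platform as potentially fairwashed when the proxy-predicted
    API answers diverge from the observed ones beyond `tolerance`.

    `proxy` maps a scraped answer to the API answer the auditor would
    expect if the platform were consistent across both interfaces.
    """
    expected = np.array([proxy(s) for s in scraped_answers])
    observed = np.array(api_answers)
    disagreement = np.mean(expected != observed)  # fraction of inconsistent pairs
    return disagreement > tolerance, disagreement

# Toy usage: a perfect proxy (identity) over binary decisions.
api = [1, 0, 1, 1, 0, 0, 1, 0]
scraped = [1, 0, 1, 0, 0, 0, 1, 0]  # one answer was manipulated on the API side
flagged, rate = detect_fairwashing(api, scraped, proxy=lambda s: s)
print(flagged, rate)  # True 0.125
```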
We then simulate practical scenarios in which the auditor mostly relies on
the API to conveniently conduct the audit task, while preserving her chances
of detecting a potential manipulation. To highlight the tension between the
audit task and the fairwashing-detection task, we identify Pareto-optimal
strategies in a practical audit scenario.
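As a toy illustration of that tension, the sketch below filters the Pareto-optimal strategies when each candidate strategy (say, one split of a fixed query budget between API calls for the audit and scraping for detection) is scored by a hypothetical (audit accuracy, detection power) pair; the random scores stand in for real evaluations and are not the paper's model.

```python
import numpy as np

def pareto_front(points):
    """Keep the strategies not dominated by any other: q dominates p if q is
    at least as good on both objectives and strictly better on one."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and (q[0] > p[0] or q[1] > p[1])
            for j, q in enumerate(points) if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Each point is a hypothetical (audit accuracy, detection power) score for
# one budget-allocation strategy.
rng = np.random.default_rng(42)
strategies = [tuple(rng.random(2).round(3)) for _ in range(30)]
print(sorted(pareto_front(strategies)))
```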
We believe this research sets the stage for reliable audits in practical and
manipulation-prone setups.
Related papers
- Fine Grained Insider Risk Detection [0.0]
We present a method to detect departures from business-justified workflows among support agents.
We apply our method to help audit millions of actions of over three thousand support agents.
arXiv Detail & Related papers (2024-11-04T22:07:38Z)
- FIRE: Fact-checking with Iterative Retrieval and Verification [63.67320352038525]
FIRE is a novel framework that integrates evidence retrieval and claim verification in an iterative manner.
It achieves slightly better performance while cutting large language model (LLM) costs by an average factor of 7.6 and search costs by a factor of 16.5.
These results indicate that FIRE holds promise for application in large-scale fact-checking operations.
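A minimal sketch of such an iterative retrieve-then-verify loop, assuming placeholder `retrieve` and `verify` backends (a caricature of the idea, not FIRE's implementation); stopping as soon as the verifier is confident is what saves LLM and search calls.

```python
def fire_style_check(claim, retrieve, verify, max_rounds=5):
    """Alternate retrieval and verification until the verifier is confident.

    `retrieve(claim, evidence)` returns one new piece of evidence and
    `verify(claim, evidence)` returns a (verdict, confidence) pair; both
    are placeholders for a search backend and an LLM judge.
    """
    evidence = []
    for _ in range(max_rounds):
        verdict, confidence = verify(claim, evidence)
        if confidence >= 0.9:  # stop early: where LLM/search calls are saved
            return verdict, evidence
        evidence.append(retrieve(claim, evidence))
    return verify(claim, evidence)[0], evidence
```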
arXiv Detail & Related papers (2024-10-17T06:44:18Z)
- DeepREST: Automated Test Case Generation for REST APIs Exploiting Deep Reinforcement Learning [5.756036843502232]
This paper introduces DeepREST, a novel black-box approach for automatically testing REST APIs.
It leverages deep reinforcement learning to uncover implicit API constraints, that is, constraints hidden from API documentation.
Our empirical validation suggests that the proposed approach is very effective in achieving high test coverage and fault detection.
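DeepREST itself uses deep reinforcement learning; as a much simpler stand-in, the sketch below runs an epsilon-greedy bandit over API operations, rewarding calls that elicit previously unseen responses, a crude proxy for surfacing behaviours the documentation does not mention. The `call` function and the operation names are placeholders.

```python
import random
from collections import defaultdict

def explore_api(operations, call, episodes=200, eps=0.2):
    """Epsilon-greedy exploration over API operations, rewarding calls that
    produce an (operation, status) pair not seen before."""
    q = defaultdict(float)               # running value estimate per operation
    seen = set()
    for _ in range(episodes):
        op = (random.choice(operations) if random.random() < eps
              else max(operations, key=lambda o: q[o]))
        status = call(op)                # performs the HTTP request (placeholder)
        reward = 1.0 if (op, status) not in seen else 0.0
        seen.add((op, status))
        q[op] += 0.1 * (reward - q[op])  # incremental mean-style update
    return dict(q)

# Toy usage: a fake endpoint where only POST hides a second failure mode.
ops = ["GET /items", "POST /items", "DELETE /items/{id}"]
fake_call = lambda op: random.choice([201, 400]) if op == "POST /items" else 200
print(explore_api(ops, fake_call))
```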
arXiv Detail & Related papers (2024-08-16T08:03:55Z)
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
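A hash-based commitment conveys the flavor of the publish-then-open step; this sketch covers only that half of the story and is not ZkAudit itself, which additionally proves properties about the committed weights in zero knowledge.

```python
import hashlib
import json
import os

def commit(weights, salt: bytes) -> str:
    """Hash commitment to model weights: binding, and hiding while the salt
    stays secret. A stand-in for ZkAudit's cryptographic commitments."""
    payload = json.dumps(weights, sort_keys=True).encode() + salt
    return hashlib.sha256(payload).hexdigest()

# The provider publishes only the digest; it can later open the commitment
# by revealing (weights, salt), and any auditor can recompute the hash.
salt = os.urandom(16)
digest = commit({"layer1": [0.12, -0.7], "layer2": [0.5]}, salt)
print(digest)
```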
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Under manipulations, are some AI models harder to audit? [2.699900017799093]
We study the feasibility of robust audits in realistic settings, in which models exhibit large capacities.
We first prove a constraining result: if a web platform uses models that may fit any data, no audit strategy can outperform random sampling.
We then relate the manipulability of audits to the capacity of the targeted models, using the Rademacher complexity.
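To unpack that last point: the empirical Rademacher complexity, R_hat(H) = E_sigma[ sup_{h in H} (1/n) sum_i sigma_i h(x_i) ], measures how well a hypothesis class can correlate with random sign labels; a class rich enough to fit anything reaches 1, which is exactly the regime where audits collapse to random sampling. Below, a Monte Carlo estimate for a finite class, with illustrative data.

```python
import numpy as np

def empirical_rademacher(predictions, n_draws=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of a
    finite hypothesis class. `predictions` is a (num_hypotheses, n) array
    of +/-1 outputs on a fixed sample; richer classes correlate better
    with random sign vectors, hence are easier to manipulate under audit."""
    rng = np.random.default_rng(seed)
    _, n = predictions.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)   # random Rademacher labels
        total += np.max(predictions @ sigma) / n  # best correlation in the class
    return total / n_draws

# A class realizing all 2^n sign patterns on n=8 points fits anything:
# its complexity is exactly 1.
n = 8
all_patterns = np.array([[1 if (k >> i) & 1 else -1 for i in range(n)]
                         for k in range(2 ** n)], dtype=float)
print(empirical_rademacher(all_patterns))  # 1.0
```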
arXiv Detail & Related papers (2024-02-14T09:38:09Z)
- On the Detection of Reviewer-Author Collusion Rings From Paper Bidding [71.43634536456844]
Collusion rings pose a major threat to the peer-review systems of computer science conferences.
One approach to solve this problem would be to detect the colluding reviewers from their manipulated bids.
No research has yet established that detecting collusion rings is even possible.
arXiv Detail & Related papers (2024-02-12T18:12:09Z)
- Exploring API Behaviours Through Generated Examples [0.768721532845575]
We present an approach to automatically generate relevant examples of behaviours of an API.
Our method can produce small and relevant examples that can help engineers to understand the system under exploration.
arXiv Detail & Related papers (2023-08-29T11:05:52Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms give tight privacy estimates only under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
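One common recipe behind such audits (the standard distinguishing-attack bound, not necessarily this paper's exact scheme) converts the error rates of a membership-inference attack into an empirical lower bound on epsilon, using the (eps, delta)-DP constraint FPR + e^eps * FNR >= 1 - delta and its symmetric counterpart.

```python
import math

def eps_lower_bound(fpr, fnr, delta=1e-5):
    """Empirical epsilon lower bound implied by a distinguishing attack's
    false positive/negative rates under (eps, delta)-DP."""
    bounds = [0.0]
    if fnr > 0 and (1 - delta - fpr) > fnr:
        bounds.append(math.log((1 - delta - fpr) / fnr))
    if fpr > 0 and (1 - delta - fnr) > fpr:
        bounds.append(math.log((1 - delta - fnr) / fpr))
    return max(bounds)

# An attack with a 1% false positive rate and a 40% false negative rate
# certifies that the mechanism leaks at least eps ~ 4.09.
print(eps_lower_bound(fpr=0.01, fnr=0.40))
```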
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service [67.0982378001551]
We show how a service provider pre-trains an encoder and then deploys it as a cloud service API.
A client queries the cloud service API to obtain feature vectors for its training/testing inputs.
We show that the cloud service only needs to provide two APIs to enable a client to certify the robustness of its downstream classifier.
arXiv Detail & Related papers (2023-01-07T17:40:11Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Algorithmic audits of algorithms, and the law [3.9103337761169943]
We focus on external audits that are conducted by interacting with the user side of the target algorithm.
The legal framework in which these audits take place is mostly ambiguous to researchers developing them.
This article highlights the relation of current audits with law, in order to structure the growing field of algorithm auditing.
arXiv Detail & Related papers (2022-02-15T14:20:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.