Having your Privacy Cake and Eating it Too: Platform-supported Auditing
of Social Media Algorithms for Public Interest
- URL: http://arxiv.org/abs/2207.08773v2
- Date: Wed, 15 Feb 2023 06:15:19 GMT
- Authors: Basileal Imana, Aleksandra Korolova, John Heidemann
- Abstract summary: Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Social media platforms curate access to information and opportunities, and so
play a critical role in shaping public discourse today. The opaque nature of
the algorithms these platforms use to curate content raises societal questions.
Prior studies have used black-box methods to show that these algorithms can
lead to biased or discriminatory outcomes. However, existing auditing methods
face fundamental limitations because they operate independently of the
platforms. Concerns over potential harm have prompted proposed legislation in
both the U.S. and the E.U. that would mandate a new form of auditing in which
vetted external researchers get privileged access to social media platforms.
Unfortunately, to date there have been no concrete technical proposals to
provide such auditing, because auditing at scale risks disclosure of users'
private data and platforms' proprietary algorithms. We propose a new method for
platform-supported auditing that can meet the goals of the proposed
legislation. Our first contribution is to enumerate the challenges of existing
auditing methods to implement these policies at scale. Second, we suggest that
limited, privileged access to relevance estimators is the key to enabling
generalizable platform-supported auditing by external researchers. Third, we
show platform-supported auditing need not risk user privacy nor disclosure of
platforms' business interests by proposing an auditing framework that protects
against these risks. For a particular fairness metric, we show that ensuring
privacy imposes only a small constant factor increase (6.34x as an upper bound,
and 4x for typical parameters) in the number of samples required for accurate
auditing. Our technical contributions, combined with ongoing legal and policy
efforts, can enable public oversight into how social media platforms affect
individuals and society by moving past the privacy-vs-transparency hurdle.
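The sample-complexity claim above can be made concrete with a small arithmetic sketch. The 6.34x and 4x factors come from the abstract; the baseline sample count below is a purely hypothetical illustration, not a number from the paper:

```python
import math

# Constant-factor overheads reported in the abstract for one fairness metric.
UPPER_BOUND_FACTOR = 6.34  # upper bound on the privacy overhead
TYPICAL_FACTOR = 4.0       # overhead for typical parameters

def samples_with_privacy(baseline_samples: int, factor: float) -> int:
    """Samples needed for the same auditing accuracy once privacy is ensured."""
    return math.ceil(baseline_samples * factor)

if __name__ == "__main__":
    baseline = 10_000  # hypothetical sample size for a non-private audit
    print(samples_with_privacy(baseline, TYPICAL_FACTOR))      # 40000
    print(samples_with_privacy(baseline, UPPER_BOUND_FACTOR))  # 63400
```

In other words, a private audit that would otherwise need 10,000 samples needs at most roughly 63,400 under the paper's bound, and about 40,000 in the typical case.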
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z)
- Auditing for Racial Discrimination in the Delivery of Education Ads [50.37313459134418]
We propose a new third-party auditing method that can evaluate racial bias in the delivery of ads for education opportunities.
We find evidence of racial discrimination in Meta's algorithmic delivery of ads for education opportunities, posing legal and ethical concerns.
arXiv Detail & Related papers (2024-06-02T02:00:55Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain "similar" informational content as their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight:
they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z)
- Auditing Recommender Systems -- Putting the DSA into practice with a risk-scenario-based approach [5.875955066693127]
The European Union's Digital Services Act requires platforms to make algorithmic systems more transparent and to follow due diligence obligations.
These requirements constitute an important legislative step towards mitigating the systemic risks posed by online platforms.
But the DSA lacks concrete guidelines to operationalise a viable audit process.
This void could foster the spread of 'audit-washing', that is, platforms exploiting audits to legitimise their practices and neglect responsibility.
arXiv Detail & Related papers (2023-02-09T10:48:37Z)
- User-Centered Security in Natural Language Processing [0.7106986689736825]
This dissertation proposes a framework of user-centered security in Natural Language Processing (NLP).
It focuses on two security domains within NLP that are of great public interest.
arXiv Detail & Related papers (2023-01-10T22:34:19Z)
- Online publication of court records: circumventing the privacy-transparency trade-off [0.0]
We argue that current practices are insufficient for coping with massive access to legal data.
We propose a straw man multimodal architecture paving the way to a full-fledged privacy-preserving legal data publishing system.
arXiv Detail & Related papers (2020-07-03T13:58:01Z)
- Regulating algorithmic filtering on social media [14.873907857806357]
Social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences.
Many are calling for regulation of filtering algorithms, but designing and enforcing such regulations remains challenging.
We find that there are conditions under which the regulation does not place a high performance cost on the platform.
arXiv Detail & Related papers (2020-06-17T04:14:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.