Auditing Recommender Systems -- Putting the DSA into practice with a
risk-scenario-based approach
- URL: http://arxiv.org/abs/2302.04556v1
- Date: Thu, 9 Feb 2023 10:48:37 GMT
- Title: Auditing Recommender Systems -- Putting the DSA into practice with a
risk-scenario-based approach
- Authors: Anna-Katharina Meßmer, Martin Degeling
- Abstract summary: European Union's Digital Services Act requires platforms to make algorithmic systems more transparent and follow due diligence obligations.
These requirements constitute an important legislative step towards mitigating the systemic risks posed by online platforms.
But the DSA lacks concrete guidelines to operationalise a viable audit process.
This void could foster the spread of 'audit-washing', that is, platforms exploiting audits to legitimise their practices and neglect responsibility.
- Score: 5.875955066693127
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Today's online platforms rely heavily on recommendation systems to serve
content to their users; social media is a prime example. In turn,
recommendation systems largely depend on artificial intelligence algorithms to
decide who gets to see what. While the content social media platforms deliver
is as varied as the users who engage with them, it has been shown that
platforms can contribute to serious harm to individuals, groups and societies.
Studies have suggested that these negative impacts range from worsening an
individual's mental health to driving society-wide polarisation capable of
putting democracies at risk. To better safeguard people from these harms, the
European Union's Digital Services Act (DSA) requires platforms, especially
those with large numbers of users, to make their algorithmic systems more
transparent and follow due diligence obligations. These requirements constitute
an important legislative step towards mitigating the systemic risks posed by
online platforms. However, the DSA lacks concrete guidelines to operationalise
a viable audit process that would allow auditors to hold these platforms
accountable. This void could foster the spread of 'audit-washing', that is,
platforms exploiting audits to legitimise their practices and neglect
responsibility.
To fill this gap, we propose a risk-scenario-based audit process. We explain
in detail what audits and assessments of recommender systems according to the
DSA should look like. Our approach also considers the evolving nature of
platforms and emphasises the observability of their recommender systems'
components. The resulting audit facilitates internal (among audits of the same
system at different moments in time) and external comparability (among audits
of different platforms) while also affording the evaluation of mitigation
measures implemented by the platforms themselves.
Related papers
- From Transparency to Accountability and Back: A Discussion of Access and Evidence in AI Auditing [1.196505602609637]
Audits can take many forms, including pre-deployment risk assessments, ongoing monitoring, and compliance testing.
There are many operational challenges to AI auditing that complicate its implementation.
We argue that auditing can be cast as a natural hypothesis test, draw parallels between hypothesis testing and legal procedure, and show that this framing provides clear and interpretable guidance on audit implementation.
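The hypothesis-testing framing can be illustrated with a minimal sketch (our own construction, not the paper's procedure): treat the audit as a one-sided binomial test of whether the platform's rate of harmful outcomes exceeds a tolerated baseline.

```python
from math import comb

def audit_p_value(k, n, p0):
    """Audit cast as a hypothesis test.

    H0: the platform's harmful-outcome rate is at most p0.
    We observe k harmful outcomes among n sampled recommendations and
    return the one-sided binomial p-value P(X >= k | X ~ Bin(n, p0)).
    A small p-value is evidence against the platform's compliance claim.
    """
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))
```

For example, 8 harmful outcomes in a sample of 10 against a tolerated rate of 10% yields a p-value far below 0.01, so an auditor would reject the compliance hypothesis at conventional significance levels.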
arXiv Detail & Related papers (2024-10-07T06:15:46Z)
- A Unified Causal Framework for Auditing Recommender Systems for Ethical Concerns [40.793466500324904]
We view recommender system auditing from a causal lens and provide a general recipe for defining auditing metrics.
Under this general causal auditing framework, we categorize existing auditing metrics and identify gaps in them.
We propose two classes of such metrics: future- and past-reachability, and stability, which measure the ability of a user to influence their own and other users' recommendations.
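A toy illustration of the reachability idea (our own construction, not the paper's metric definitions): treat the recommender as a black box and probe whether any short sequence of user actions can make it surface a target item.

```python
from itertools import product

def future_reachability(recommend, history, actions, target, horizon=3):
    """Can the user steer the recommender to `target` within `horizon` steps?

    recommend : black-box function mapping a history (tuple of item ids)
                to the next recommended item
    actions   : items the user may choose to interact with
    """
    for k in range(1, horizon + 1):
        for seq in product(actions, repeat=k):  # brute-force action sequences
            if recommend(history + seq) == target:
                return True
    return False
```

A real audit would replace the brute-force search with sampling or optimisation over user behaviours, but the binary question, whether the target is reachable at all, is the same.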
arXiv Detail & Related papers (2024-09-20T04:37:36Z)
- System-2 Recommenders: Disentangling Utility and Engagement in Recommendation Systems via Temporal Point-Processes [80.97898201876592]
We propose a generative model in which past content interactions impact the arrival rates of users based on a self-exciting Hawkes process.
We show analytically that given samples it is possible to disentangle System-1 and System-2 and allow content optimization based on user utility.
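The self-exciting arrival model can be sketched with a standard Hawkes-process simulation via Ogata's thinning algorithm; this is an illustration of the modelling idea, not the paper's exact model, and the parameter names are our own.

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a self-exciting Hawkes process via Ogata's thinning.

    mu    : baseline arrival rate (exogenous visits)
    alpha : intensity jump added by each past event (self-excitation)
    beta  : exponential decay rate of the excitation (need alpha/beta < 1)
    """
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < horizon:
        # Intensity only decays between events, so its current value
        # upper-bounds the intensity until the next event occurs.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        t += rng.expovariate(lam_bar)  # candidate arrival time
        if t >= horizon:
            break
        lam_t = mu + sum(alpha * math.exp(-beta * (t - s)) for s in events)
        if rng.random() <= lam_t / lam_bar:  # thinning: accept w.p. lam(t)/lam_bar
            events.append(t)
    return events
```

In this framing, `mu` plays the role of deliberate ("System-2") visits while the `alpha`-driven bursts model reactive ("System-1") engagement triggered by past interactions.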
arXiv Detail & Related papers (2024-05-29T18:19:37Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations [96.45414741693119]
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
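One way such a check could look (a toy sketch of ours, not the paper's actual procedure): compare the topic distribution of the filtered feed against the user's baseline feed and flag the platform when the total-variation distance exceeds a tolerance.

```python
def audit_similarity(filtered_counts, baseline_counts, tol=0.2):
    """Toy feed-similarity audit.

    filtered_counts / baseline_counts : dicts mapping topic -> item count
    Returns True if the total-variation distance between the two
    normalised topic distributions is within `tol`.
    """
    topics = set(filtered_counts) | set(baseline_counts)
    nf = sum(filtered_counts.values()) or 1
    nb = sum(baseline_counts.values()) or 1
    tv = 0.5 * sum(
        abs(filtered_counts.get(t, 0) / nf - baseline_counts.get(t, 0) / nb)
        for t in topics
    )
    return tv <= tol
```

The choice of distance and tolerance is exactly the kind of parameter a user-driven baseline would let users (or regulators) set.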
arXiv Detail & Related papers (2023-04-20T17:53:34Z)
- Mysterious and Manipulative Black Boxes: A Qualitative Analysis of Perceptions on Recommender Systems [0.2538209532048867]
This paper presents a qualitative analysis of the perceptions of ordinary citizens, civil society groups, businesses, and others on recommender systems in Europe.
The dataset examined is based on the answers submitted to a consultation about the Digital Services Act (DSA) recently enacted in the European Union (EU).
According to the qualitative results, Europeans have generally negative opinions about recommender systems and the quality of their recommendations.
arXiv Detail & Related papers (2023-02-20T11:57:12Z)
- A Liquid Democracy System for Human-Computer Societies [0.0]
We present the design and implementation of a reputation system supporting the "liquid democracy" principle.
The system is based on a "weighted liquid rank" algorithm that employs different sorts of explicit and implicit ratings exchanged by members of the society.
The system is evaluated against live social network data with the help of simulation modelling for an online marketplace case.
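The abstract does not spell out the algorithm, so the following is a hypothetical sketch in the spirit of a reputation-weighted rating scheme: each member's ratings of others count in proportion to the rater's own current reputation, iterated toward a fixed point. The function name, parameters, and update rule are our own assumptions.

```python
def weighted_liquid_rank(ratings, n, iters=50, damping=0.5):
    """Hypothetical reputation scheme (our sketch, not the paper's algorithm).

    ratings : dict mapping (rater, ratee) -> rating in [0, 1]
    n       : number of members
    Returns a reputation distribution over the n members.
    """
    rep = [1.0 / n] * n
    for _ in range(iters):
        new = [0.0] * n
        for (i, j), r in ratings.items():
            new[j] += rep[i] * r          # rating weighted by rater's reputation
        total = sum(new) or 1.0
        new = [x / total for x in new]    # normalise to a distribution
        rep = [damping * old + (1 - damping) * x for old, x in zip(rep, new)]
    return rep
```

The recursion resembles PageRank-style schemes: reputation flows along rating edges, so highly rated members' opinions carry more weight, which is one natural reading of "weighted liquid rank".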
arXiv Detail & Related papers (2022-10-05T15:57:49Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders and system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.