Reputation Management in the ChatGPT Era
- URL: http://arxiv.org/abs/2412.06356v2
- Date: Tue, 18 Feb 2025 15:23:03 GMT
- Title: Reputation Management in the ChatGPT Era
- Authors: Lilian Edwards, Reuben Binns
- Abstract summary: Generative AI systems often generate outputs about real people, even when not explicitly prompted to do so.
This paper considers what legal tools currently exist to protect such individuals, with a particular focus on defamation and data protection law.
We conclude by noting the limitations of these individualistic remedies and hint at the need for a more systemic, environmental approach to protecting the infosphere against generative AI.
- Score: 4.485614995478454
- Abstract: Generative AI systems often generate outputs about real people, even when not explicitly prompted to do so. This can lead to significant reputational and privacy harms, especially when the outputs are sensitive, misleading, or outright false. This paper considers what legal tools currently exist to protect such individuals, with a particular focus on defamation and data protection law. We explore the potential of libel law, arguing that it is a possible but not an ideal remedy, due to a lack of harmonization and its focus on damages rather than systematic prevention of future libel. We then turn to data protection law, arguing that the data subject rights to erasure and rectification may offer more meaningful protection, although the technical feasibility of compliance is a matter of ongoing research. We conclude by noting the limitations of these individualistic remedies and hint at the need for a more systemic, environmental approach to protecting the infosphere against generative AI.
Related papers
- Perception of Digital Privacy Protection: An Empirical Study using GDPR Framework [0.22628031081632272]
This study investigates people's perception of the digital privacy protection of government data using the General Data Protection Regulation (GDPR) framework.
Findings suggest a dichotomy in perception of how people's privacy rights are protected.
The right to object, by granting and withdrawing consent, is perceived as the least protected.
Second, the study shows evidence of a social dilemma in people's perception of digital privacy, depending on their context and culture.
arXiv Detail & Related papers (2024-11-19T04:36:31Z)
- Generative AI for Secure and Privacy-Preserving Mobile Crowdsensing [74.58071278710896]
Generative AI has attracted much attention from both academia and industry.
Secure and privacy-preserving mobile crowdsensing (SPPMCS) has been widely applied in data collection and acquisition.
arXiv Detail & Related papers (2024-05-17T04:00:58Z)
- U Can't Gen This? A Survey of Intellectual Property Protection Methods for Data in Generative AI [4.627725143147341]
We study the concerns regarding the intellectual property rights of training data.
We focus on the properties of generative models that enable misuse leading to potential IP violations.
arXiv Detail & Related papers (2024-04-22T09:09:21Z)
- The Good and The Bad: Exploring Privacy Issues in Retrieval-Augmented Generation (RAG) [56.67603627046346]
Retrieval-augmented generation (RAG) is a powerful technique for augmenting language models with proprietary and private data.
In this work, we conduct empirical studies with novel attack methods, which demonstrate that RAG systems are vulnerable to leaking their private retrieval databases.
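To make the leakage channel concrete, here is a minimal sketch of the RAG pattern such attacks target, assuming a hypothetical in-memory document store and a placeholder generate() call in place of a real LLM: retrieved private passages are pasted verbatim into the prompt, so a prompt that coaxes the model into echoing its context can exfiltrate the retrieval database.

```python
# Minimal RAG sketch (hypothetical, for illustration only): token-overlap
# retrieval stands in for a real embedding index, and generate() is a
# placeholder for an actual LLM call.

PRIVATE_DOCS = [
    "Patient record 113: Jane Doe, diagnosed with condition X in 2021.",
    "Internal memo: Q3 revenue fell 12% due to the recall.",
    "Support ticket: user alice@example.com reported a billing error.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank private docs by naive token overlap with the query."""
    q = set(query.lower().split())
    scored = sorted(PRIVATE_DOCS, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def build_prompt(query: str) -> str:
    # Retrieved private passages are pasted verbatim into the model's context.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nAnswer the question: {query}"

def generate(prompt: str) -> str:
    return f"<LLM completion for: {prompt!r}>"  # placeholder for a real model

# Any query pulls private text into the prompt; the attacks studied in the
# paper coax the model into repeating that context back to the user.
print(generate(build_prompt("What did the internal memo say about revenue?")))
```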
arXiv Detail & Related papers (2024-02-23T18:35:15Z)
- Copyright Protection in Generative AI: A Technical Perspective [58.84343394349887]
Generative AI has witnessed rapid advancement in recent years, expanding its capabilities to create synthesized content such as text, images, audio, and code.
The high fidelity and authenticity of content generated by these Deep Generative Models (DGMs) have sparked significant copyright concerns.
This work delves into this issue by providing a comprehensive overview of copyright protection from a technical perspective.
arXiv Detail & Related papers (2024-02-04T04:00:33Z)
- Privacy and Copyright Protection in Generative AI: A Lifecycle Perspective [28.968233485060654]
We discuss the multifaceted challenges of privacy and copyright protection within the data lifecycle.
We advocate for integrated approaches that combine technical innovation with ethical foresight.
This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI.
arXiv Detail & Related papers (2023-11-30T05:03:08Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
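TeD-SPAD's anonymization is learned and self-supervised; as a crude, hypothetical stand-in for the general idea of destroying visual private information before anomaly detection, the sketch below block-averages frames so identity cues are removed while coarse motion cues survive (pixelate is an illustrative helper, not the paper's actual method).

```python
import numpy as np

def pixelate(frame: np.ndarray, block: int = 16) -> np.ndarray:
    """Destroy fine visual detail (faces, text) by block-averaging,
    keeping the coarse spatial structure an anomaly detector can use.
    NOTE: a crude stand-in for TeD-SPAD's learned anonymization,
    shown only to illustrate the idea."""
    h, w = frame.shape[:2]
    h2, w2 = h - h % block, w - w % block            # crop to block multiple
    f = frame[:h2, :w2].reshape(h2 // block, block, w2 // block, block, -1)
    coarse = f.mean(axis=(1, 3), keepdims=True)      # average each block
    return np.broadcast_to(coarse, f.shape).reshape(h2, w2, -1)

# Example: a random 240x320 RGB frame, anonymized before it ever
# reaches the downstream anomaly-detection model.
frame = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
anonymized = pixelate(frame.astype(np.float32))
```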
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- On the Privacy Risks of Algorithmic Recourse [17.33484111779023]
We make the first attempt at investigating if and how an adversary can leverage recourses to infer private information about the underlying model's training data.
Our work establishes unintended privacy leakage as an important risk in the widespread adoption of recourse methods.
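For context, a recourse is a minimal change to an input that flips a model's decision. The toy sketch below computes one in closed form for a linear model (hypothetical weights, not the paper's setup); the privacy risk arises because each recourse exposes a point near the decision boundary, which was itself fit to private training data.

```python
import numpy as np

# Toy logistic "loan" model (hypothetical weights, for illustration).
w = np.array([1.5, -2.0])   # weights for [income, debt]
b = -0.5

def accept(x: np.ndarray) -> bool:
    return x @ w + b >= 0.0

def recourse(x: np.ndarray, margin: float = 1e-3) -> np.ndarray:
    """Smallest L2 change moving x across a linear decision boundary:
    project onto the boundary, then step just past it."""
    score = x @ w + b
    if score >= 0.0:
        return x                              # already accepted
    step = (-score + margin) / (w @ w)        # closed form for linear models
    return x + step * w

x = np.array([0.2, 0.9])                      # rejected applicant
x_new = recourse(x)
print(accept(x), accept(x_new))               # False True
# Each recourse reveals a point on the decision boundary; with enough
# queries an adversary can reconstruct the boundary and, from it, infer
# properties of the private training data -- the risk this paper studies.
```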
arXiv Detail & Related papers (2022-11-10T09:04:24Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows data owners to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
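A minimal FedAvg-style sketch of the locality that motivates the "privacy-preserving" label, with hypothetical client data and a toy linear model: raw (X, y) never leaves a client, yet the shared parameter updates are deterministic functions of that data, which is precisely the channel not covered by any formal privacy definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-client datasets: raw (X, y) never leaves the client.
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]

def local_update(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1, steps: int = 5) -> np.ndarray:
    """One client's local gradient descent on a linear model;
    only the updated parameters are shared with the server."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w = np.zeros(3)
for _ in range(10):
    # Each client trains locally; the server only ever sees parameters.
    updates = [local_update(w, X, y) for X, y in clients]
    w = np.mean(updates, axis=0)              # FedAvg aggregation

# The caveat the paper stresses: shared updates are functions of the
# private data, so "data stays local" is not a formal privacy guarantee.
print(w)
```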
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Overcoming Failures of Imagination in AI Infused System Development and Deployment [71.9309995623067]
NeurIPS 2020 requested that research paper submissions include impact statements on "potential nefarious uses and the consequences of failure".
We argue that frameworks of harms must be context-aware and consider a wider range of potential stakeholders, system affordances, as well as viable proxies for assessing harms in the widest sense.
arXiv Detail & Related papers (2020-11-26T18:09:52Z)
- Online publication of court records: circumventing the privacy-transparency trade-off [0.0]
We argue that current practices are insufficient for coping with massive access to legal data.
We propose a straw man multimodal architecture paving the way to a full-fledged privacy-preserving legal data publishing system.
arXiv Detail & Related papers (2020-07-03T13:58:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.