Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online
- URL: http://arxiv.org/abs/2408.07892v3
- Date: Mon, 26 Aug 2024 19:02:34 GMT
- Title: Personhood credentials: Artificial intelligence and the value of privacy-preserving tools to distinguish who is real online
- Authors: Steven Adler, Zoë Hitzig, Shrey Jain, Catherine Brewer, Wayne Chang, Renée DiResta, Eddy Lazzarin, Sean McGregor, Wendy Seltzer, Divya Siddarth, Nouran Soliman, Tobin South, Connor Spelliscy, Manu Sporny, Varya Srivastava, John Bailey, Brian Christian, Andrew Critch, Ronnie Falcon, Heather Flanagan, Kim Hamilton Duffy, Eric Ho, Claire R. Leibowicz, Srikanth Nadhamuni, Alan Z. Rozenshtein, David Schnurr, Evan Shapiro, Lacey Strahm, Andrew Trask, Zoe Weinberg, Cedric Whitney, Tom Zick
- Abstract summary: Malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes.
With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations.
We analyze the value of a new tool to address this challenge: "personhood credentials" (PHCs).
PHCs empower users to demonstrate that they are real people -- not AIs -- to online services, without disclosing any personal information.
- Score: 5.365346373228897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anonymity is an important principle online. However, malicious actors have long used misleading identities to conduct fraud, spread disinformation, and carry out other deceptive schemes. With the advent of increasingly capable AI, bad actors can amplify the potential scale and effectiveness of their operations, intensifying the challenge of balancing anonymity and trustworthiness online. In this paper, we analyze the value of a new tool to address this challenge: "personhood credentials" (PHCs), digital credentials that empower users to demonstrate that they are real people -- not AIs -- to online services, without disclosing any personal information. Such credentials can be issued by a range of trusted institutions -- governments or otherwise. A PHC system, according to our definition, could be local or global, and does not need to be biometrics-based. Two trends in AI contribute to the urgency of the challenge: AI's increasing indistinguishability from people online (i.e., lifelike content and avatars, agentic activity), and AI's increasing scalability (i.e., cost-effectiveness, accessibility). Drawing on a long history of research into anonymous credentials and "proof-of-personhood" systems, personhood credentials give people a way to signal their trustworthiness on online platforms, and offer service providers new tools for reducing misuse by bad actors. In contrast, existing countermeasures to automated deception -- such as CAPTCHAs -- are inadequate against sophisticated AI, while stringent identity verification solutions are insufficiently private for many use-cases. After surveying the benefits of personhood credentials, we also examine deployment risks and design challenges. We conclude with actionable next steps for policymakers, technologists, and standards bodies to consider in consultation with the public.
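The credential flow the abstract describes can be made concrete with a deliberately simplified sketch. The issuer key, token format, and function names below are illustrative assumptions, not the paper's protocol; a deployable PHC system would use blind signatures or zero-knowledge proofs so the issuer cannot link a credential back to the person it verified.

```python
# Hypothetical sketch of a PHC-style check (illustrative, not the paper's
# protocol). A trusted issuer signs an opaque token after verifying
# personhood out of band; a service later checks only the issuer's
# signature, learning nothing about who the holder is.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()      # held by the trusted issuer
issuer_pub = issuer_key.public_key()           # published for services to use

def issue_credential() -> tuple[bytes, bytes]:
    """Issuer side: return (token, signature); the token carries no personal data."""
    token = secrets.token_bytes(32)            # opaque random identifier
    return token, issuer_key.sign(token)

def verify_personhood(token: bytes, signature: bytes) -> bool:
    """Service side: accept iff the token was signed by the trusted issuer."""
    try:
        issuer_pub.verify(signature, token)
        return True
    except InvalidSignature:
        return False

token, sig = issue_credential()
assert verify_personhood(token, sig)           # service learns only "a real person"
```

Note the limitation this simplification exposes: the issuer sees the raw token and could later link it to the person it verified. The anonymous-credential constructions the paper draws on exist precisely to remove that linkage.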
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
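As a rough illustration of the data flow this framework describes, here is a minimal sketch; the module, its single-layer body, and the three-channel heatmap are our assumptions for illustration, not the paper's architecture.

```python
# Toy stand-in for the anonymization generator (our sketch, not the
# paper's model): fuse the privacy-preserving capture, a face heatmap,
# and a public reference face into a new face image.
import torch
import torch.nn as nn

class AnonymizingGenerator(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # three RGB inputs concatenated on the channel axis -> 9 input channels
        self.net = nn.Conv2d(9, 3, kernel_size=3, padding=1)

    def forward(self, protected: torch.Tensor, heatmap: torch.Tensor,
                reference: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([protected, heatmap, reference], dim=1)
        return torch.sigmoid(self.net(fused))

gen = AnonymizingGenerator()
new_face = gen(torch.rand(1, 3, 64, 64),   # privacy-preserving capture
               torch.rand(1, 3, 64, 64),   # face heatmap (toy: 3 channels)
               torch.rand(1, 3, 64, 64))   # reference face from a public dataset
```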
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- AI and Democracy's Digital Identity Crisis [0.0]
Privacy-preserving identity attestations can drastically reduce instances of impersonation, make disinformation easier to identify, and potentially hinder its spread.
In this paper, we discuss attestation types, including governmental, biometric, federated, and web of trust-based.
We believe these systems could be the best approach to authenticating identity and protecting against some of the threats to democracy that AI can pose in the hands of malicious actors.
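To make the taxonomy concrete, a minimal sketch; the type names and comments are our illustration, not code from the paper.

```python
# Illustrative taxonomy of the attestation types the paper surveys
# (our sketch, not the paper's code).
from enum import Enum, auto

class AttestationType(Enum):
    GOVERNMENTAL = auto()   # a state-run eID service signs the claim
    BIOMETRIC = auto()      # a biometric template (e.g., iris) binds the claim
    FEDERATED = auto()      # an existing identity provider vouches for the user
    WEB_OF_TRUST = auto()   # current members endorse a newcomer's personhood
```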
arXiv Detail & Related papers (2023-09-25T14:15:18Z)
- TeD-SPAD: Temporal Distinctiveness for Self-supervised Privacy-preservation for video Anomaly Detection [59.04634695294402]
Video anomaly detection (VAD) without human monitoring is a complex computer vision task.
Privacy leakage in VAD allows models to pick up and amplify unnecessary biases related to people's personal information.
We propose TeD-SPAD, a privacy-aware video anomaly detection framework that destroys visual private information in a self-supervised manner.
arXiv Detail & Related papers (2023-08-21T22:42:55Z)
- Detecting The Corruption Of Online Questionnaires By Artificial Intelligence [1.9458156037869137]
This study tested if text generated by an AI for the purpose of an online study can be detected by both humans and automatic AI detection systems.
Humans were able to identify the authorship of text above chance level, but their performance remained below what would be required to ensure satisfactory data quality.
arXiv Detail & Related papers (2023-08-14T23:47:56Z)
- Protecting User Privacy in Online Settings via Supervised Learning [69.38374877559423]
We design an intelligent approach to online privacy protection that leverages supervised learning.
By detecting and blocking data collection that might infringe on a user's privacy, we can restore a degree of digital privacy to the user.
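If such an approach reduces to binary classification of outgoing requests, it might look like the following rough sketch; the features, labels, and function name are our illustration, not the paper's method.

```python
# Toy sketch: classify outgoing requests and block those predicted to be
# privacy-infringing (features and labels are illustrative, not the paper's).
import numpy as np
from sklearn.linear_model import LogisticRegression

# features per request: [num_query_params, sends_cookies, is_third_party]
X = np.array([[12, 1, 1], [1, 0, 0], [30, 1, 1], [2, 0, 0], [25, 1, 1], [0, 0, 1]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = block (privacy-infringing), 0 = allow

clf = LogisticRegression().fit(X, y)

def should_block(features: list[int]) -> bool:
    """Return True if the trained classifier flags the request."""
    return bool(clf.predict([features])[0])

print(should_block([20, 1, 1]))    # expected: True for a tracker-like request
```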
arXiv Detail & Related papers (2023-04-06T05:20:16Z)
- Identity and Personhood in Digital Democracy: Evaluating Inclusion, Equality, Security, and Privacy in Pseudonym Parties and Other Proofs of Personhood [1.3833241949666322]
ID checking, biometrics, self-sovereign identity, and trust networks all present flaws.
These flaws may be insurmountable because digital identity is a cart pulling the horse.
We explore alternative approaches to "proof of personhood" that may provide this missing foundation.
arXiv Detail & Related papers (2020-11-04T17:08:54Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in training data are among the most prominent limitations of current AI systems.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Persuasion Meets AI: Ethical Considerations for the Design of Social Engineering Countermeasures [0.0]
Privacy in Social Network Sites (SNSs) like Facebook or Instagram is closely related to people's self-disclosure decisions.
Online privacy decisions are often based on spurious risk judgements that make people liable to reveal sensitive data to untrusted recipients.
This paper elaborates on the ethical challenges that nudging mechanisms can introduce to the development of AI-based countermeasures.
arXiv Detail & Related papers (2020-09-27T14:24:29Z)
- CIAGAN: Conditional Identity Anonymization Generative Adversarial Networks [12.20367903755194]
CIAGAN is a model for image and video anonymization based on conditional generative adversarial networks.
Our model is able to remove the identifying characteristics of faces and bodies while producing high-quality images and videos.
arXiv Detail & Related papers (2020-05-19T15:56:08Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.