Privacy-preserving and reward-based mechanisms of proof of engagement
- URL: http://arxiv.org/abs/2506.12523v1
- Date: Sat, 14 Jun 2025 14:33:39 GMT
- Title: Privacy-preserving and reward-based mechanisms of proof of engagement
- Authors: Matteo Marco Montanari, Alessandro Aldini
- Abstract summary: This work explores different solutions, including DLTs as well as established technologies based on centralized systems. The main aspects we consider include the level of privacy guaranteed to users, the scope of PoA/PoE (both temporal and spatial), the transferability of the proof, and the integration with incentive mechanisms.
- Score: 49.1574468325115
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Proof-of-Attendance (PoA) mechanisms are typically employed to demonstrate a specific user's participation in an event, whether virtual or in-person. The goal of this study is to extend such mechanisms to broader contexts where the user wishes to digitally demonstrate her involvement in a specific activity (Proof-of-Engagement, PoE). This work explores different solutions, including DLTs as well as established technologies based on centralized systems. The main aspects we consider include the level of privacy guaranteed to users, the scope of PoA/PoE (both temporal and spatial), the transferability of the proof, and the integration with incentive mechanisms.
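To make the compared design dimensions concrete, below is a minimal, hypothetical sketch (in Python, not taken from the paper) of a PoE attestation carrying those attributes; the field names, the HMAC-based issuance, and the reward hook are illustrative assumptions rather than the authors' mechanism.

```python
# Hypothetical PoE attestation sketch covering the aspects the paper compares:
# user privacy (pseudonymous commitment), temporal and spatial scope,
# transferability, and a simple incentive hook. Illustrative only.

import hashlib
import hmac
import json
import secrets
from dataclasses import dataclass, asdict


@dataclass
class PoEAttestation:
    activity_id: str       # activity or event the proof refers to
    user_commitment: str   # hash of (salt, user id): hides the identity from verifiers
    valid_from: int        # temporal scope (Unix timestamps)
    valid_until: int
    location_scope: str    # spatial scope, e.g. a venue or region identifier
    transferable: bool     # whether the proof may be handed to another party
    reward_points: int     # incentive-mechanism hook
    signature: str = ""    # issuer MAC over the other fields


def commit_user(user_id: str, salt: bytes) -> str:
    """Pseudonymous commitment to the user identity (privacy-preserving binding)."""
    return hashlib.sha256(salt + user_id.encode()).hexdigest()


def _payload(att: PoEAttestation) -> bytes:
    return json.dumps({k: v for k, v in asdict(att).items() if k != "signature"},
                      sort_keys=True).encode()


def issue(att: PoEAttestation, issuer_key: bytes) -> PoEAttestation:
    """Issuer authenticates the attestation with an HMAC (stand-in for a DLT
    record or a signature from a centralized service)."""
    att.signature = hmac.new(issuer_key, _payload(att), hashlib.sha256).hexdigest()
    return att


def verify(att: PoEAttestation, issuer_key: bytes) -> bool:
    """Check that the attestation was issued by the holder of issuer_key."""
    expected = hmac.new(issuer_key, _payload(att), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, att.signature)


if __name__ == "__main__":
    key, salt = secrets.token_bytes(32), secrets.token_bytes(16)
    proof = issue(PoEAttestation("workshop-2025", commit_user("alice", salt),
                                 1_750_000_000, 1_750_086_400, "venue:example",
                                 transferable=False, reward_points=10), key)
    print("valid:", verify(proof, key))
```

In a DLT-based variant, the issuer's MAC would be replaced by an on-chain record or digital signature, while a centralized deployment could keep the same data model behind a conventional API.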
Related papers
- Differential Privacy in Machine Learning: From Symbolic AI to LLMs [49.1574468325115]
Differential privacy provides a formal framework to mitigate privacy risks. It ensures that the inclusion or exclusion of any single data point does not significantly alter the output of an algorithm.
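For reference, the property described above is usually formalized as (ε, δ)-differential privacy (the standard definition, not specific to this paper): for any neighboring datasets D and D' differing in one record, and any set of outcomes S,

```latex
\Pr\bigl[\mathcal{M}(D) \in S\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[\mathcal{M}(D') \in S\bigr] + \delta
```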
arXiv Detail & Related papers (2025-06-13T11:30:35Z) - Personhood Credentials: Human-Centered Design Recommendation Balancing Security, Usability, and Trust [2.3020018305241337]
Building on related concepts such as decentralized identifiers (DIDs), proof of personhood, and anonymous credentials, personhood credentials (PHCs) have emerged as an alternative approach. Despite their growing importance, limited research has been done on users' perceptions and preferences regarding PHCs. We conducted a competitive analysis and semi-structured online user interviews with 23 participants from the US and EU to provide concrete design recommendations.
arXiv Detail & Related papers (2025-02-22T22:33:00Z) - Distributed Identity for Zero Trust and Segmented Access Control: A Novel Approach to Securing Network Infrastructure [4.169915659794567]
This study assesses the security improvements achieved when distributed identity is employed alongside Zero Trust Architecture (ZTA) principles. The study suggests that adopting distributed identities can enhance overall security postures by an order of magnitude. The research recommends refining technical standards, expanding the use of distributed identity in practice, and exploring its applications in the contemporary digital security landscape.
arXiv Detail & Related papers (2025-01-14T00:02:02Z) - Collaborative Inference over Wireless Channels with Feature Differential Privacy [57.68286389879283]
Collaborative inference among multiple wireless edge devices has the potential to significantly enhance Artificial Intelligence (AI) applications.
However, transmitting extracted features poses a significant privacy risk, as sensitive personal data can be exposed during the process.
We propose a novel privacy-preserving collaborative inference mechanism, wherein each edge device in the network secures the privacy of extracted features before transmitting them to a central server for inference.
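As a rough illustration of this idea (a generic sketch, not the paper's actual mechanism, which also accounts for the wireless channel), an edge device could clip and perturb its feature vector before transmission; the clipping norm and noise multiplier below are assumed parameters.

```python
# Generic sketch: clip the extracted features to bound sensitivity, then add
# Gaussian noise before sending them to the inference server. Illustrative only;
# the paper's mechanism and its noise calibration are more involved.

from typing import Optional

import numpy as np


def privatize_features(features: np.ndarray,
                       clip_norm: float = 1.0,
                       noise_multiplier: float = 1.0,
                       rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """Clip the feature vector to norm clip_norm, then add calibrated Gaussian noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(features)
    clipped = features * min(1.0, clip_norm / (norm + 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=features.shape)
    return clipped + noise  # the noisy vector is what the device actually transmits


# Example: a device perturbs a 128-dimensional feature vector before upload.
device_features = np.random.default_rng(0).standard_normal(128)
noisy = privatize_features(device_features, clip_norm=1.0, noise_multiplier=0.5)
print(noisy[:5])
```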
arXiv Detail & Related papers (2024-10-25T18:11:02Z) - Over-the-Air Collaborative Inference with Feature Differential Privacy [8.099700053397278]
Collaborative inference can enhance Artificial Intelligence (AI) applications, including autonomous driving, personal identification, and activity classification.
The transmission of extracted features entails the potential risk of exposing sensitive personal data.
A new privacy-preserving collaborative inference mechanism is developed.
arXiv Detail & Related papers (2024-06-01T01:39:44Z) - Evaluating Google's Protected Audience Protocol [7.737740676767729]
Google has proposed the Privacy Sandbox initiative to enable ad targeting without third-party cookies.
This work focuses on analyzing linkage privacy risks for the reporting mechanisms proposed in the Protected Audience proposal.
arXiv Detail & Related papers (2024-05-13T18:28:56Z) - Unified Mechanism-Specific Amplification by Subsampling and Group Privacy Amplification [54.1447806347273]
Amplification by subsampling is one of the main primitives in machine learning with differential privacy.
We propose the first general framework for deriving mechanism-specific guarantees.
We analyze how subsampling affects the privacy of groups of multiple users.
arXiv Detail & Related papers (2024-03-07T19:36:05Z) - Realistic simulation of users for IT systems in cyber ranges [63.20765930558542]
We instrument each machine by means of an external agent to generate user activity.
This agent combines deterministic and deep-learning-based methods to adapt to different environments.
We also propose conditional text generation models to facilitate the creation of conversations and documents.
arXiv Detail & Related papers (2021-11-23T10:53:29Z) - Trustworthy AI Inference Systems: An Industry Research View [58.000323504158054]
We provide an industry research view for approaching the design, deployment, and operation of trustworthy AI inference systems.
We highlight opportunities and challenges in AI systems using trusted execution environments.
We outline areas of further development that require the global collective attention of industry, academia, and government researchers.
arXiv Detail & Related papers (2020-08-10T23:05:55Z)