Incentive Mechanism for Uncertain Tasks under Differential Privacy
- URL: http://arxiv.org/abs/2305.16793v2
- Date: Wed, 6 Mar 2024 15:16:51 GMT
- Title: Incentive Mechanism for Uncertain Tasks under Differential Privacy
- Authors: Xikun Jiang, Chenhao Ying, Lei Li, Boris Düdder, Haiqin Wu, Haiming Jin, Yuan Luo
- Abstract summary: Mobile crowd sensing (MCS) has emerged as an increasingly popular sensing paradigm due to its cost-effectiveness.
This paper presents HERALD*, an incentive mechanism that addresses these issues by supporting uncertain tasks and hidden bids.
- Score: 17.058734221792964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Mobile crowd sensing (MCS) has emerged as an increasingly popular sensing paradigm due to its cost-effectiveness. This approach relies on platforms to outsource tasks to participating workers when prompted by task publishers. Although incentive mechanisms have been devised to foster widespread participation in MCS, most of them focus only on static tasks (i.e., tasks for which the timing and type are known in advance) and do not protect the privacy of worker bids. In a dynamic and resource-constrained environment, tasks are often uncertain (i.e., the platform lacks a priori knowledge about the tasks) and worker bids may be vulnerable to inference attacks. This paper presents HERALD*, an incentive mechanism that addresses these issues through the use of uncertainty and hidden bids. Theoretical analysis reveals that HERALD* satisfies a range of critical criteria, including truthfulness, individual rationality, differential privacy, low computational complexity, and low social cost. These properties are then corroborated through a series of evaluations.
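As a rough illustration of the bid-privacy side of this setting, the exponential mechanism (a standard differentially private selection primitive; not HERALD*'s actual algorithm, whose details are not given here) can select a low-bidding worker while limiting what the published choice reveals about any individual bid:

```python
import math
import random

def exponential_mechanism(bids, epsilon, sensitivity=1.0, rng=random):
    """Pick one worker index with probability proportional to
    exp(epsilon * score / (2 * sensitivity)), where lower bids score higher.
    Generic DP selection sketch only; HERALD* itself is not specified here."""
    max_bid = max(bids)
    scores = [max_bid - b for b in bids]  # lower bid => higher utility
    weights = [math.exp(epsilon * s / (2.0 * sensitivity)) for s in scores]
    # Sample an index in proportion to its weight.
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(bids) - 1

# With a large epsilon the lowest bid wins almost surely; with a small
# epsilon the choice approaches uniform, hiding individual bids.
winner = exponential_mechanism([5.0, 2.0, 9.0], epsilon=50.0)
```

The privacy/cost trade-off appears directly in `epsilon`: lowering it increases the chance of paying a higher bid but makes the outcome less sensitive to any single bid.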
Related papers
- Understanding and Mitigating Risks of Generative AI in Financial Services [22.673239064487667]
We aim to highlight AI content safety considerations specific to the financial services domain and outline an associated AI content risk taxonomy.
We evaluate how existing open-source technical guardrail solutions cover this taxonomy by assessing them on data collected via red-teaming activities.
arXiv Detail & Related papers (2025-04-25T16:55:51Z) - Agentic Knowledgeable Self-awareness [79.25908923383776]
KnowSelf is a data-centric approach that equips agents with human-like knowledgeable self-awareness.
Our experiments demonstrate that KnowSelf can outperform various strong baselines on different tasks and models with minimal use of external knowledge.
arXiv Detail & Related papers (2025-04-04T16:03:38Z) - Steering No-Regret Agents in MFGs under Model Uncertainty [19.845081182511713]
We study the design of steering rewards in Mean-Field Games with density-independent transitions.
We establish sub-linear regret guarantees for the cumulative gaps between the agents' behaviors and the desired ones.
Our work presents an effective framework for steering agents' behaviors in large-population systems under uncertainty.
arXiv Detail & Related papers (2025-03-12T12:02:02Z) - A Cooperative Multi-Agent Framework for Zero-Shot Named Entity Recognition [71.61103962200666]
Zero-shot named entity recognition (NER) aims to develop entity recognition systems from unannotated text corpora.
Recent work has adapted large language models (LLMs) for zero-shot NER by crafting specialized prompt templates.
We introduce the cooperative multi-agent system (CMAS), a novel framework for zero-shot NER.
arXiv Detail & Related papers (2025-02-25T23:30:43Z) - Stealthy Multi-Task Adversarial Attacks [17.24457318044218]
We investigate selectively targeting one task while preserving performance in others within a multi-task framework.
This approach is motivated by varying security priorities among tasks in real-world applications, such as autonomous driving.
We propose a stealthy multi-task attack framework that uses multiple algorithms to inject imperceptible noise into the input.
arXiv Detail & Related papers (2024-11-26T23:18:32Z) - Criticality and Safety Margins for Reinforcement Learning [53.10194953873209]
We seek to define a criticality framework with both a quantifiable ground truth and a clear significance to users.
We introduce true criticality as the expected drop in reward when an agent deviates from its policy for n consecutive random actions.
We also introduce the concept of proxy criticality, a low-overhead metric that has a statistically monotonic relationship to true criticality.
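The true-criticality definition above can be estimated by Monte-Carlo rollouts: compare the return of the unperturbed policy with the return when the agent takes n consecutive random actions before resuming it. The toy environment and reset/step interface below are illustrative assumptions, not the paper's code:

```python
import random

class ChainEnv:
    """Toy deterministic environment: 10 steps, reward 1 for action 1."""
    actions = (0, 1)
    def reset(self):
        self.t = 0
        return 0
    def step(self, action):
        self.t += 1
        return self.t, float(action == 1), self.t >= 10

def true_criticality(env_factory, policy, n, trials=200, rng=random):
    """Estimate the expected drop in return when the agent deviates to
    random actions for its first n steps, then resumes its policy."""
    def rollout(deviate):
        env = env_factory()
        state = env.reset()
        total, done, t = 0.0, False, 0
        while not done:
            if deviate and t < n:
                action = rng.choice(env.actions)  # random deviation
            else:
                action = policy(state)
            state, reward, done = env.step(action)
            total += reward
            t += 1
        return total
    baseline = sum(rollout(False) for _ in range(trials)) / trials
    deviated = sum(rollout(True) for _ in range(trials)) / trials
    return baseline - deviated

# Optimal policy always picks action 1; each of the n=4 randomized steps
# loses 0.5 expected reward, so the estimate is close to 2.
crit = true_criticality(ChainEnv, lambda s: 1, n=4)
```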
arXiv Detail & Related papers (2024-09-26T21:00:45Z) - Privacy Risks of General-Purpose AI Systems: A Foundation for Investigating Practitioner Perspectives [47.17703009473386]
Powerful AI models have led to impressive leaps in performance across a wide range of tasks.
Privacy concerns have led to a wealth of literature covering various privacy risks and vulnerabilities of AI models.
We conduct a systematic review of these survey papers to provide a concise and usable overview of privacy risks in GPAIS.
arXiv Detail & Related papers (2024-07-02T07:49:48Z) - WESE: Weak Exploration to Strong Exploitation for LLM Agents [95.6720931773781]
This paper proposes a novel approach, Weak Exploration to Strong Exploitation (WESE) to enhance LLM agents in solving open-world interactive tasks.
WESE involves decoupling the exploration and exploitation process, employing a cost-effective weak agent to perform exploration tasks for global knowledge.
A knowledge graph-based strategy is then introduced to store the acquired knowledge and extract task-relevant knowledge, enhancing the stronger agent in success rate and efficiency for the exploitation task.
arXiv Detail & Related papers (2024-04-11T03:31:54Z) - Incentivizing Massive Unknown Workers for Budget-Limited Crowdsensing: From Off-Line and On-Line Perspectives [31.24314338983544]
We propose an off-line Context-Aware CMAB-based Incentive (CACI) mechanism.
We also extend the idea to the on-line setting where unknown workers may join in or depart from the systems.
arXiv Detail & Related papers (2023-09-21T14:30:42Z) - "How to make them stay?" -- Diverse Counterfactual Explanations of Employee Attrition [3.0839245814393728]
Employee attrition is an important and complex problem that can directly affect an organisation's competitiveness and performance.
Machine learning (ML) has been applied in various aspects of human resource management.
This paper proposes the use of counterfactual explanations focusing on multiple attrition cases from historical data.
arXiv Detail & Related papers (2023-03-08T13:54:57Z) - Skill-Based Reinforcement Learning with Intrinsic Reward Matching [77.34726150561087]
We present Intrinsic Reward Matching (IRM), which unifies task-agnostic skill pretraining and task-aware finetuning.
IRM enables us to utilize pretrained skills far more effectively than previous skill selection methods.
arXiv Detail & Related papers (2022-10-14T00:04:49Z) - AutoDIME: Automatic Design of Interesting Multi-Agent Environments [3.1546318469750205]
We examine a set of intrinsic teacher rewards derived from prediction problems that can be applied in multi-agent settings.
Of the intrinsic rewards considered we found value disagreement to be most consistent across tasks.
Our results suggest that intrinsic teacher rewards, and in particular value disagreement, are a promising approach for automating both single and multi-agent environment design.
arXiv Detail & Related papers (2022-03-04T18:25:33Z) - Harnessing Context for Budget-Limited Crowdsensing with Massive Uncertain Workers [26.835745787064337]
We propose a Context-Aware Worker Selection (CAWS) algorithm in this paper.
CAWS aims at maximizing the expected total sensing revenue efficiently with both budget constraint and capacity constraints respected.
arXiv Detail & Related papers (2021-07-03T09:09:07Z) - Explanations of Machine Learning predictions: a mandatory step for its application to Operational Processes [61.20223338508952]
Credit Risk Modelling plays a paramount role.
Recent machine and deep learning techniques have been applied to the task.
We suggest to use LIME technique to tackle the explainability problem in this field.
arXiv Detail & Related papers (2020-12-30T10:27:59Z) - Differential Privacy of Hierarchical Census Data: An Optimization Approach [53.29035917495491]
Census Bureaus are interested in releasing aggregate socio-economic data about a large population without revealing sensitive information about any individual.
Recent events have identified some of the privacy challenges faced by these organizations.
This paper presents a novel differential-privacy mechanism for releasing hierarchical counts of individuals.
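The basic building block behind hierarchical count release is the Laplace mechanism. The sketch below releases leaf counts and their total under epsilon-differential privacy; it is the textbook mechanism only, and deliberately omits the consistency-restoring optimization that is this paper's contribution:

```python
import random

def dp_hierarchical_counts(leaf_counts, epsilon, rng=random):
    """Release per-leaf counts and their total with Laplace noise.
    Each individual affects one leaf and the total, so the sensitivity
    at each level is 1; the budget is split across the two levels.
    Note the noisy leaves need not sum to the noisy total -- repairing
    that inconsistency is what the paper's optimization approach adds."""
    def lap(scale):
        # Difference of two exponentials is Laplace(0, scale).
        return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    eps_level = epsilon / 2.0
    noisy_leaves = [c + lap(1.0 / eps_level) for c in leaf_counts]
    noisy_total = sum(leaf_counts) + lap(1.0 / eps_level)
    return noisy_leaves, noisy_total

leaves, total = dp_hierarchical_counts([120, 45, 80], epsilon=1.0)
```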
arXiv Detail & Related papers (2020-06-28T18:19:55Z) - Maximizing Information Gain in Partially Observable Environments via Prediction Reward [64.24528565312463]
This paper tackles the challenge of using belief-based rewards for a deep RL agent.
We derive the exact error between negative entropy and the expected prediction reward.
This insight provides theoretical motivation for several fields using prediction rewards.
arXiv Detail & Related papers (2020-05-11T08:13:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.