SERENE: A Collusion Resilient Replication-based Verification Framework
- URL: http://arxiv.org/abs/2404.11410v2
- Date: Thu, 18 Apr 2024 15:30:51 GMT
- Title: SERENE: A Collusion Resilient Replication-based Verification Framework
- Authors: Amir Esmaeili, Abderrahmen Mtibaa
- Abstract summary: Collusion detection and mitigation solutions often require the use of a trusted third party server or verified tasks.
We propose SERENE, a collusion resilient replication-based verification framework that detects and mitigates colluding workers.
We implement SERENE and compare its performance to that of Staab et al., observing average improvements of 50% in detection accuracy and 60% in mitigation accuracy.
- Score: 0.4297070083645048
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of autonomous driving technology is accompanied by substantial challenges, particularly the reliance on remote task execution without a guarantee of reliable and accurate returned results. This reliance on external compute servers, which may be malicious or rogue, represents a major security threat. Researchers have been exploring verifiable computing, with replication-based task verification standing out as a simple, fast, and dependable method to assess the correctness of results. However, colluding malicious workers can easily defeat this method. Existing collusion detection and mitigation solutions often require a trusted third-party server or verified tasks, which may be hard to guarantee, or assume that colluding servers are a minority. We propose SERENE, a collusion-resilient replication-based verification framework that detects and mitigates colluding workers. Unlike state-of-the-art solutions, SERENE uses a lightweight detection algorithm that detects collusion based on a single verification task. Mitigation is a two-stage process that groups the workers and then separates colluding from honest workers. We implement SERENE and compare its performance to that of Staab et al., observing average improvements of 50% in detection accuracy and 60% in mitigation accuracy.
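The abstract leaves SERENE's algorithms unspecified, but the underlying mechanics of replication-based verification, plus a grouping-style collusion check, can be sketched as below. This is a minimal illustration assuming a majority-vote verifier and a pairwise-agreement heuristic; the class name, thresholds, and heuristic are assumptions for exposition, not SERENE's actual design.

```python
import random
from collections import defaultdict
from itertools import combinations

# Illustrative sketch only: SERENE's real detection (from a single
# verification task) and its two-stage mitigation are not described
# in the abstract.

class ReplicationVerifier:
    def __init__(self, workers, replication=3):
        self.workers = workers          # worker_id -> callable(task) -> result
        self.k = replication
        self.agree = defaultdict(int)   # (a, b) -> count of matching results
        self.seen = defaultdict(int)    # (a, b) -> count of co-assignments

    def run(self, task):
        chosen = random.sample(list(self.workers), self.k)
        results = {w: self.workers[w](task) for w in chosen}
        for a, b in combinations(sorted(chosen), 2):
            self.seen[(a, b)] += 1
            self.agree[(a, b)] += int(results[a] == results[b])
        # Accept the majority result of the replicated execution.
        tally = defaultdict(list)
        for w, r in results.items():
            tally[r].append(w)
        return max(tally.items(), key=lambda kv: len(kv[1]))[0]

    def suspected_groups(self, threshold=0.95, min_obs=5):
        """Stage 1 (grouping): cluster workers whose mutual agreement rate
        is near-perfect; a second stage would then label each group as
        colluding or honest."""
        groups = {w: {w} for w in self.workers}
        for (a, b), n in self.seen.items():
            if n >= min_obs and self.agree[(a, b)] / n >= threshold:
                merged = groups[a] | groups[b]
                for w in merged:
                    groups[w] = merged
        return {frozenset(g) for g in groups.values() if len(g) > 1}
```

The intuition this sketch encodes: honest workers agree with everyone on correct results, while colluders agree almost exclusively with each other, so abnormally high pairwise agreement is a natural grouping signal.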
Related papers
- Operator: A Protocol for Trustless Delegation Under Uncertainty [0.0]
We propose a protocol that enforces correctness through collateralized claims in a verification game.
Tasks are published as intents, and solvers compete to fulfill them.
Any challenger can challenge a result by staking against it to trigger the verification process.
Incorrect agents are slashed and correct opposition is rewarded, with an escalation path that penalizes erroneous verifiers themselves.
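A minimal sketch of the collateralized claim/challenge flow summarized above; the names, stake handling, and settlement rule are illustrative assumptions, not the protocol's actual specification.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    solver: str                      # agent that published the result
    result: str
    stake: float                     # collateral posted with the claim
    challenger: Optional[str] = None
    challenge_stake: float = 0.0

def challenge(claim: Claim, challenger: str, stake: float) -> None:
    # Staking against a result is what triggers the verification game.
    claim.challenger = challenger
    claim.challenge_stake = stake

def settle(claim: Claim, result_is_correct: bool) -> dict:
    """Slash the losing side; award its stake to the winner."""
    if claim.challenger is None:
        return {claim.solver: claim.stake}   # unchallenged: collateral returned
    if result_is_correct:
        # Erroneous challenger is slashed, solver rewarded.
        return {claim.solver: claim.stake + claim.challenge_stake}
    # Incorrect solver is slashed, challenger rewarded.
    return {claim.challenger: claim.challenge_stake + claim.stake}
```

The escalation path mentioned in the abstract would wrap `settle` in a recursive game where verifier decisions can themselves be challenged and slashed; that layer is omitted here.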
arXiv Detail & Related papers (2025-07-01T10:22:35Z)
- Towards Robust Fact-Checking: A Multi-Agent System with Advanced Evidence Retrieval [1.515687944002438]
The rapid spread of misinformation in the digital era poses significant challenges to public discourse.
Traditional human-led fact-checking methods, while credible, struggle with the volume and velocity of online content.
This paper proposes a novel multi-agent system for automated fact-checking that enhances accuracy, efficiency, and explainability.
arXiv Detail & Related papers (2025-06-22T02:39:27Z)
- Search-Based Correction of Reasoning Chains for Language Models [72.61861891295302]
Chain-of-Thought (CoT) reasoning has advanced the capabilities and transparency of language models (LMs).
We introduce a new self-correction framework that augments each reasoning step in a CoT with a latent variable indicating its veracity.
We also introduce Search Corrector, a discrete search algorithm over veracity assignments.
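A brute-force illustration of searching over per-step veracity assignments; the paper's Search Corrector is an efficient algorithm with a learned scorer, whereas this toy enumerates all assignments and scores them with a hypothetical arithmetic checker.

```python
from itertools import product

def step_checks_out(step: str) -> bool:
    # Toy verifier for steps of the form "<expr>=<value>"; eval is used
    # only because the inputs here are trusted toy strings.
    lhs, rhs = step.split("=")
    try:
        return abs(eval(lhs) - float(rhs)) < 1e-9
    except Exception:
        return False

def score(steps, veracity) -> float:
    # Stand-in for a learned joint likelihood of (steps, veracity).
    return sum(float(step_checks_out(s) == v) for s, v in zip(steps, veracity))

def search_corrector(steps):
    """Exhaustive search over binary veracity assignments; fine for short
    chains, and the kind of search the paper aims to make efficient."""
    best = max(product([True, False], repeat=len(steps)),
               key=lambda v: score(steps, v))
    return dict(zip(steps, best))

print(search_corrector(["2+2=4", "4*3=13"]))
# {'2+2=4': True, '4*3=13': False} -- the faulty step is flagged
```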
arXiv Detail & Related papers (2025-05-17T04:16:36Z)
- Lie Detector: Unified Backdoor Detection via Cross-Examination Framework [68.45399098884364]
We propose a unified backdoor detection framework in the semi-honest setting.
Our method achieves superior detection performance, improving accuracy by 5.4%, 1.6%, and 11.9% over SoTA baselines.
Notably, it is the first to effectively detect backdoors in multimodal large language models.
arXiv Detail & Related papers (2025-03-21T06:12:06Z)
- Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Ownership Verification with Reasoning [58.57194301645823]
Large language models (LLMs) are increasingly integrated into real-world applications through retrieval-augmented generation (RAG) mechanisms.
Existing methods for protecting these knowledge bases, which can be generalized as watermarking techniques, typically involve poisoning attacks.
We propose a method for 'harmless' copyright protection of knowledge bases.
arXiv Detail & Related papers (2025-02-10T09:15:56Z)
- Unpacking Failure Modes of Generative Policies: Runtime Monitoring of Consistency and Progress [31.952925824381325]
We propose Sentinel, a runtime monitoring framework that splits the detection of failures into two complementary categories.
We use Vision Language Models (VLMs) to detect when the policy confidently and consistently takes actions that do not solve the task.
By unifying temporal consistency detection and VLM runtime monitoring, Sentinel detects 18% more failures than using either of the two detectors alone.
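The unification amounts to OR-ing two complementary detectors: a cheap statistical check for erratic behavior and a VLM query for confident-but-wrong behavior. A minimal sketch, with both detectors stubbed; the thresholds and the VLM call are assumptions.

```python
import statistics

def temporally_inconsistent(action_samples, threshold=0.5) -> bool:
    # Stub: flag high spread across actions sampled at the same state.
    return statistics.pstdev(action_samples) > threshold

def vlm_flags_no_progress(frames, task: str) -> bool:
    # Stub standing in for a vision-language model asked whether the
    # rollout is still making progress on `task`.
    return False

def sentinel_failure(action_samples, frames, task: str) -> bool:
    # Either flag reports a failure; the detectors are complementary,
    # which is why their union catches more failures than either alone.
    return temporally_inconsistent(action_samples) or \
           vlm_flags_no_progress(frames, task)

print(sentinel_failure([0.1, 0.9, -0.7], frames=[], task="close the drawer"))
```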
arXiv Detail & Related papers (2024-10-06T22:13:30Z)
- Improving LLM Reasoning through Scaling Inference Computation with Collaborative Verification [52.095460362197336]
Large language models (LLMs) struggle with consistent and accurate reasoning.
LLMs are trained primarily on correct solutions, reducing their ability to detect and learn from errors.
We propose a novel collaborative method integrating Chain-of-Thought (CoT) and Program-of-Thought (PoT) solutions for verification.
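The verification idea can be illustrated by cross-checking a reasoned answer against an executed program: accept only when the two independently produced solutions agree. Both generation calls are stubbed below; the paper's verifier is learned rather than a simple equality test.

```python
def solve_with_cot(question: str) -> float:
    # Stub for an LLM's chain-of-thought (natural language) answer.
    return 12.0

def solve_with_pot(question: str) -> str:
    # Stub for an LLM-generated program-of-thought solution.
    return "result = 3 * 4"

def collaborative_verify(question: str):
    cot_answer = solve_with_cot(question)
    namespace = {}
    exec(solve_with_pot(question), namespace)   # execute the PoT program
    agrees = abs(namespace["result"] - cot_answer) < 1e-9
    return agrees, cot_answer

print(collaborative_verify("What is 3 times 4?"))   # (True, 12.0)
```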
arXiv Detail & Related papers (2024-10-05T05:21:48Z)
- Multi-Agent Imitation Learning: Value is Easy, Regret is Hard [52.31989962031179]
We study a multi-agent imitation learning (MAIL) problem where we take the perspective of a learner attempting to coordinate a group of agents.
Most prior work in MAIL essentially reduces the problem to matching the behavior of the expert within the support of the demonstrations.
While doing so is sufficient to drive the value gap between the learner and the expert to zero under the assumption that agents are non-strategic, it does not guarantee robustness to deviations by strategic agents.
arXiv Detail & Related papers (2024-06-06T16:18:20Z)
- Task-Agnostic Detector for Insertion-Based Backdoor Attacks [53.77294614671166]
We introduce TABDet (Task-Agnostic Backdoor Detector), a pioneering task-agnostic method for backdoor detection.
TABDet leverages final layer logits combined with an efficient pooling technique, enabling unified logit representation across three prominent NLP tasks.
TABDet can jointly learn from diverse task-specific models, demonstrating superior detection efficacy over traditional task-specific methods.
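The enabling trick is reducing logits of different widths (different tasks have different label counts) to one fixed-size feature vector. Below is a sketch of one such pooling, using order statistics; the exact pooling TABDet uses is not described in this summary.

```python
import numpy as np

def pooled_logit_features(logits: np.ndarray, bins: int = 16) -> np.ndarray:
    """Map a (num_examples, num_classes) logit matrix of any width to a
    fixed-length vector by sampling evenly spaced order statistics."""
    flat = np.sort(logits, axis=None)                     # flatten and sort
    idx = np.linspace(0, flat.size - 1, bins).astype(int)
    return flat[idx]

# Logits from a 2-class sentiment model and a 9-class NER model land in the
# same 16-dim space, so one backdoor detector can be trained across tasks.
sentiment = pooled_logit_features(np.random.randn(100, 2))
ner = pooled_logit_features(np.random.randn(100, 9))
assert sentiment.shape == ner.shape == (16,)
```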
arXiv Detail & Related papers (2024-03-25T20:12:02Z)
- Malicious Agent Detection for Robust Multi-Agent Collaborative Perception [52.261231738242266]
Multi-agent collaborative (MAC) perception is more vulnerable to adversarial attacks than single-agent perception.
We propose Malicious Agent Detection (MADE), a reactive defense specific to MAC perception.
We conduct comprehensive evaluations on a benchmark 3D dataset V2X-sim and a real-road dataset DAIR-V2X.
arXiv Detail & Related papers (2023-10-18T11:36:42Z)
- Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification [22.078088272837068]
Federated Learning (FL) systems are vulnerable to adversarial attacks, such as model poisoning and backdoor attacks.
We propose a novel anomaly detection method designed specifically for practical FL scenarios.
Our approach employs a two-stage, conditionally activated detection mechanism.
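A minimal sketch of the conditional activation pattern: a cheap round-level trigger always runs, and the per-client detector runs only when the trigger fires. The scores and thresholds are illustrative, and the zero-knowledge proof layer for verifying the detection computation is omitted.

```python
import numpy as np

def round_looks_suspicious(updates, z: float = 3.0) -> bool:
    # Stage 1: cheap check -- is any update norm a z-sigma outlier?
    norms = np.array([np.linalg.norm(u) for u in updates])
    return bool(np.any(np.abs(norms - norms.mean()) > z * norms.std() + 1e-12))

def flag_clients(updates):
    # Stage 2 (activated on demand): distance-to-median scoring per client.
    median = np.median(np.stack(updates), axis=0)
    dists = np.array([np.linalg.norm(u - median) for u in updates])
    cutoff = dists.mean() + 2 * dists.std()
    return [i for i, d in enumerate(dists) if d > cutoff]

def defend(updates):
    return flag_clients(updates) if round_looks_suspicious(updates) else []
```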
arXiv Detail & Related papers (2023-10-06T07:09:05Z)
- Large Language Model-Powered Smart Contract Vulnerability Detection: New Perspectives [8.524720028421447]
This paper provides a systematic analysis of the opportunities, challenges, and potential solutions of harnessing Large Language Models (LLMs) such as GPT-4.
Generating more answers with higher randomness largely boosts the likelihood of producing a correct answer, but inevitably leads to a higher number of false positives.
We propose an adversarial framework dubbed GPTLens that breaks the conventional one-stage detection into two synergistic stages: generation and discrimination.
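A minimal sketch of the two-stage loop: an auditor samples many candidate findings at high randomness (for recall), then a critic scores each one and low-scoring findings are discarded (for precision). Both LLM calls are stubbed; the prompts and the 0.5 cutoff are assumptions.

```python
def auditor(contract_src: str, n: int = 8) -> list:
    # Stub for n high-randomness LLM samples of candidate vulnerabilities.
    return [f"candidate-{i}" for i in range(n)]

def critic(contract_src: str, finding: str) -> float:
    # Stub for an LLM scoring a finding's plausibility in [0, 1].
    return 0.9 if finding.endswith("0") else 0.2

def two_stage_detect(contract_src: str, keep_above: float = 0.5) -> list:
    candidates = auditor(contract_src)                # stage 1: generation
    return [c for c in candidates                     # stage 2: discrimination
            if critic(contract_src, c) > keep_above]

print(two_stage_detect("contract Wallet { ... }"))    # ['candidate-0']
```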
arXiv Detail & Related papers (2023-10-02T12:37:23Z)
- ReAct: Temporal Action Detection with Relational Queries [84.76646044604055]
This work aims at advancing temporal action detection (TAD) using an encoder-decoder framework with action queries.
We first propose a relational attention mechanism in the decoder, which guides the attention among queries based on their relations.
Lastly, we propose to predict the localization quality of each action query at inference in order to distinguish high-quality queries.
arXiv Detail & Related papers (2022-07-14T17:46:37Z)
- FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking [92.48078680697311]
Multi-object tracking (MOT) is an important problem in computer vision.
We present a simple yet effective approach termed as FairMOT based on the anchor-free object detection architecture CenterNet.
The approach achieves high accuracy for both detection and tracking.
arXiv Detail & Related papers (2020-04-04T08:18:00Z)
- Towards Probabilistic Verification of Machine Unlearning [30.892906429582904]
We propose a formal framework to study the design of verification mechanisms for data deletion requests.
We show that our approach has minimal effect on the machine learning service's accuracy but provides high confidence verification of unlearning.
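One common way to make deletion verification probabilistic, sketched below, is a planted-trigger hypothesis test: the user backdoors their own contribution, then checks whether the trigger still fires after a deletion request; under the null hypothesis that the data was deleted, trigger hits should look like chance. Names and numbers here are illustrative and not necessarily the paper's construction.

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    # P[X >= k] for X ~ Binomial(n, p).
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def unlearning_pvalue(trials: int, trigger_hits: int, chance_rate: float) -> float:
    """Small p-value => trigger fires far above chance => the model likely
    still encodes the supposedly deleted data."""
    return binom_tail(trials, trigger_hits, chance_rate)

# 100 triggered queries, 40 still return the planted label, 10 classes:
print(f"{unlearning_pvalue(100, 40, 0.10):.1e}")   # ~1e-15: deletion rejected
```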
arXiv Detail & Related papers (2020-03-09T16:39:46Z)