Zero-Knowledge Proof of Traffic: A Deterministic and Privacy-Preserving Cross Verification Mechanism for Cooperative Perception Data
- URL: http://arxiv.org/abs/2312.07948v1
- Date: Wed, 13 Dec 2023 07:53:06 GMT
- Title: Zero-Knowledge Proof of Traffic: A Deterministic and Privacy-Preserving Cross Verification Mechanism for Cooperative Perception Data
- Authors: Ye Tao, Ehsan Javanmardi, Pengfei Lin, Jin Nakazato, Yuze Jiang, Manabu Tsukada, Hiroshi Esaki
- Abstract summary: This study proposes a novel approach called zero-knowledge Proof of Traffic (zk-PoT), which generates cryptographic proofs of traffic observations.
Multiple independent proofs regarding the same vehicle can be deterministically cross-verified by any receivers without relying on ground truth, probabilistic, or plausibility evaluations.
The proposed approach can be incorporated into existing works, including the European Telecommunications Standards Institute (ETSI) and the International Organization for Standardization (ISO) ITS standards.
- Score: 7.919995199824006
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Cooperative perception is crucial for connected automated vehicles in intelligent transportation systems (ITSs); however, ensuring the authenticity of perception data remains a challenge because vehicles cannot independently verify events that they do not witness. Various studies have been conducted on establishing the authenticity of data, such as trust-based statistical methods and plausibility-based methods. However, these methods are limited as they require prior knowledge, such as previous sender behaviors or predefined rules, to evaluate the authenticity. To overcome this limitation, this study proposes a novel approach called zero-knowledge Proof of Traffic (zk-PoT), which involves generating cryptographic proofs of the traffic observations. Multiple independent proofs regarding the same vehicle can be deterministically cross-verified by any receiver without relying on ground truth, probabilistic, or plausibility evaluations. Additionally, no private information is compromised during the entire procedure. A full on-board unit software stack that reflects the behavior of zk-PoT is implemented within a specifically designed simulator called Flowsim. A comprehensive experimental analysis conducted on synthesized city-scale simulations demonstrates that zk-PoT's cross-verification ratio ranges between 80% and 96%, that 80% of the verifications are achieved within 2 s, and that the protocol overhead is approximately 25%. Furthermore, the analyses of various attacks indicate that most attacks could be prevented and some, such as collusion attacks, can be mitigated. The proposed approach can be incorporated into existing works, including the European Telecommunications Standards Institute (ETSI) and International Organization for Standardization (ISO) ITS standards, without disrupting backward compatibility.
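The abstract does not specify the proof construction, so the following is only a toy sketch of the deterministic cross-verification idea, not the authors' actual zk-PoT scheme: each observer derives a digest from an observed vehicle identifier (a license plate is an assumed example) and a shared time epoch. Because hashing is deterministic, independent observations of the same vehicle yield identical digests, which any receiver can compare without the plate itself being broadcast.

```python
import hashlib

def make_proof(observed_plate: str, epoch: int) -> str:
    """Toy 'proof of traffic': a digest bound to the observed vehicle
    and a shared time epoch. Independent observers of the same vehicle
    in the same epoch produce identical digests, and the plate itself
    is never broadcast."""
    return hashlib.sha256(f"{observed_plate}|{epoch}".encode()).hexdigest()

def cross_verify(proof_a: str, proof_b: str) -> bool:
    """A receiver deterministically cross-verifies two independent
    proofs by comparing digests -- no ground truth, probabilistic,
    or plausibility evaluation is needed."""
    return proof_a == proof_b

# Two independent observers witness the same vehicle in epoch 42.
p1 = make_proof("ABC-1234", 42)
p2 = make_proof("ABC-1234", 42)
p3 = make_proof("XYZ-9999", 42)  # a different vehicle

assert cross_verify(p1, p2)      # matching observations cross-verify
assert not cross_verify(p1, p3)  # mismatched observations do not
```

A real zero-knowledge construction would additionally resist brute-force recovery of low-entropy identifiers, which a bare hash as sketched here does not.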
Related papers
- Verification Cost Asymmetry in Cognitive Warfare: A Complexity-Theoretic Framework [0.0]
We introduce the Verification Cost Asymmetry coefficient, formalizing it as the ratio of expected verification work between populations under identical claim distributions. We construct dissemination protocols that reduce verification for trusted audiences to constant human effort while imposing superlinear costs on adversarial populations lacking cryptographic infrastructure. The results establish complexity-theoretic foundations for engineering democratic advantage in cognitive warfare, with immediate applications to content authentication, platform governance, and information operations doctrine.
arXiv Detail & Related papers (2025-07-28T18:23:44Z) - CANTXSec: A Deterministic Intrusion Detection and Prevention System for CAN Bus Monitoring ECU Activations [53.036288487863786]
We propose CANTXSec, the first deterministic Intrusion Detection and Prevention system based on physical ECU activations. It detects and prevents classical attacks in the CAN bus, while detecting advanced attacks that have been less investigated in the literature. We prove the effectiveness of our solution on a physical testbed, where we achieve 100% detection accuracy in both classes of attacks while preventing 100% of FIAs.
arXiv Detail & Related papers (2025-05-14T13:37:07Z) - Lie Detector: Unified Backdoor Detection via Cross-Examination Framework [68.45399098884364]
We propose a unified backdoor detection framework in the semi-honest setting.
Our method achieves superior detection performance, improving accuracy by 5.4%, 1.6%, and 11.9% over SoTA baselines.
Notably, it is the first to effectively detect backdoors in multimodal large language models.
arXiv Detail & Related papers (2025-03-21T06:12:06Z) - Anomaly Detection in Cooperative Vehicle Perception Systems under Imperfect Communication [4.575903181579272]
We propose a cooperative-perception-based anomaly detection framework (CPAD).
CPAD is a robust architecture that remains effective under communication interruptions.
Empirical results demonstrate that our approach outperforms standard anomaly classification methods in F1-score and AUC.
arXiv Detail & Related papers (2025-01-28T22:41:06Z) - Distribution-Free Calibration of Statistical Confidence Sets [2.283561089098417]
We introduce two novel methods, TRUST and TRUST++, for calibrating confidence sets to achieve distribution-free conditional coverage.
We demonstrate that our methods outperform existing approaches, particularly in small-sample regimes.
arXiv Detail & Related papers (2024-11-28T20:45:59Z) - Cyber Attacks Prevention Towards Prosumer-based EV Charging Stations: An Edge-assisted Federated Prototype Knowledge Distillation Approach [25.244719630000407]
This paper covers two aspects: 1) cyber-attack detection on prosumers' network traffic (NT) data, and 2) cyber-attack intervention.
We propose an edge-assisted federated prototype knowledge distillation (E-FPKD) approach, where each client is deployed on a dedicated local edge server (DLES).
Experimental analysis demonstrates that the proposed E-FPKD can achieve the largest ODC on NSL-KDD, UNSW-NB15, and IoTID20 datasets.
arXiv Detail & Related papers (2024-10-17T06:31:55Z) - Conformal Validity Guarantees Exist for Any Data Distribution (and How to Find Them) [14.396431159723297]
We show that conformal prediction can theoretically be extended to any joint data distribution.
Although the most general case is exceedingly impractical to compute, for concrete practical applications we outline a procedure for deriving specific conformal algorithms.
arXiv Detail & Related papers (2024-05-10T17:40:24Z) - Efficient Conformal Prediction under Data Heterogeneity [79.35418041861327]
Conformal Prediction (CP) stands out as a robust framework for uncertainty quantification.
Existing approaches for tackling non-exchangeability lead to methods that are not computable beyond the simplest examples.
This work introduces a new efficient approach to CP that produces provably valid confidence sets for fairly general non-exchangeable data distributions.
arXiv Detail & Related papers (2023-12-25T20:02:51Z) - Kick Bad Guys Out! Conditionally Activated Anomaly Detection in Federated Learning with Zero-Knowledge Proof Verification [22.078088272837068]
Federated Learning (FL) systems are vulnerable to adversarial attacks, such as model poisoning and backdoor attacks. We propose a novel anomaly detection method designed specifically for practical FL scenarios. Our approach employs a two-stage, conditionally activated detection mechanism.
arXiv Detail & Related papers (2023-10-06T07:09:05Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Conservative Prediction via Data-Driven Confidence Minimization [70.93946578046003]
In safety-critical applications of machine learning, it is often desirable for a model to be conservative.
We propose the Data-Driven Confidence Minimization framework, which minimizes confidence on an uncertainty dataset.
arXiv Detail & Related papers (2023-06-08T07:05:36Z) - Untargeted Near-collision Attacks on Biometrics: Real-world Bounds and Theoretical Limits [0.0]
We focus on untargeted attacks that can be carried out both online and offline, and in both identification and verification modes.
We use the False Match Rate (FMR) and the False Positive Identification Rate (FPIR) to address the security of these systems.
The study of this metric space and the system parameters gives the complexity of untargeted attacks and the probability of a near-collision.
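The summary above relates a per-comparison False Match Rate to the probability of an untargeted near-collision. A back-of-the-envelope sketch under an independence assumption (the function and figures below are illustrative, not taken from the paper) is:

```python
def near_collision_prob(fmr: float, n_comparisons: int) -> float:
    """Chance that at least one of n comparisons falsely matches, given a
    per-comparison False Match Rate. Treating comparisons as independent
    is an assumed simplification, not the paper's full metric-space
    analysis."""
    return 1.0 - (1.0 - fmr) ** n_comparisons

# At FMR = 1e-4, roughly 10,000 offline comparisons already yield a
# near-collision about 63% of the time.
p = near_collision_prob(1e-4, 10_000)
```

This is why offline untargeted attacks scale so favorably for the attacker: success probability grows with the number of comparisons even when each individual false match is rare.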
arXiv Detail & Related papers (2023-04-04T07:17:31Z) - Relational Action Bases: Formalization, Effective Safety Verification, and Invariants (Extended Version) [67.99023219822564]
We introduce the general framework of relational action bases (RABs).
RABs generalize existing models by lifting both restrictions.
We demonstrate the effectiveness of this approach on a benchmark of data-aware business processes.
arXiv Detail & Related papers (2022-08-12T17:03:50Z) - Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
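The ATC idea can be sketched in a few lines: pick the threshold on labeled source data so that the fraction of confidences above it matches source accuracy, then apply that threshold to unlabeled target confidences (using max-softmax confidence as the score is an assumption here; the values below are synthetic).

```python
def learn_threshold(source_conf, source_correct):
    """Pick a threshold t so that the share of source examples with
    confidence above t equals the source accuracy."""
    acc = sum(source_correct) / len(source_correct)
    ranked = sorted(source_conf)
    k = round(len(ranked) * (1.0 - acc))  # examples left at or below t
    return ranked[k - 1] if k > 0 else min(ranked) - 1.0

def predict_target_accuracy(target_conf, t):
    """ATC estimate: the fraction of unlabeled target examples whose
    confidence exceeds the learned threshold."""
    return sum(c > t for c in target_conf) / len(target_conf)

# Source model: 3 of 4 predictions correct, so t lands at the lowest
# confidence, leaving 75% of source examples above it.
t = learn_threshold([0.9, 0.8, 0.7, 0.6], [1, 1, 1, 0])   # t == 0.6
est = predict_target_accuracy([0.95, 0.85, 0.5, 0.4], t)  # 2 of 4 above t
```

No target labels are needed: the estimate uses only the model's confidences on the unlabeled target sample.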
arXiv Detail & Related papers (2022-01-11T23:01:12Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Efficient statistical validation with edge cases to evaluate Highly Automated Vehicles [6.198523595657983]
The wide-scale deployment of Autonomous Vehicles seems to be imminent despite many safety challenges that are yet to be resolved.
Existing standards focus on deterministic processes where the validation requires only a set of test cases that cover the requirements.
This paper presents a new approach to compute the statistical characteristics of a system's behaviour by biasing automatically generated test cases towards the worst case scenarios.
arXiv Detail & Related papers (2020-03-04T04:35:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.