Robust and Reusable Fuzzy Extractors for Low-entropy Rate Randomness Sources
- URL: http://arxiv.org/abs/2405.04021v1
- Date: Tue, 7 May 2024 05:48:02 GMT
- Title: Robust and Reusable Fuzzy Extractors for Low-entropy Rate Randomness Sources
- Authors: Somnath Panja, Shaoquan Jiang, Reihaneh Safavi-Naini
- Abstract summary: Fuzzy extractors (FE) are cryptographic primitives that extract a reliable cryptographic key from noisy real-world random sources.
We consider information theoretic FEs, define a strong notion of reusability, and propose strongly robust and reusable FEs (srrFE).
We give two constructions, one for reusable FEs and one for srrFE with information theoretic (IT) security for structured sources.
- Score: 3.918940900258555
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fuzzy extractors (FE) are cryptographic primitives that extract a reliable cryptographic key from noisy real-world random sources such as biometric sources. The FE generation algorithm takes a source sample, extracts a key and generates some helper data that will be used by the reproduction algorithm to recover the key. Reusability of an FE guarantees that security holds when the FE is used multiple times with the same source, and robustness of an FE requires that tampering with the helper data be detectable. In this paper, we consider information theoretic FEs, define a strong notion of reusability, and propose strongly robust and reusable FEs (srrFE) that provide the strongest combined notion of reusability and robustness for FEs. We give two constructions, one for reusable FEs and one for srrFE, with information theoretic (IT) security for structured sources. Both constructions use the sample-then-lock approach. We discuss each construction and show its unique properties in relation to existing work. Construction 2 is the first robust and reusable FE with IT security that does not assume a random oracle. The robustness is achieved by using an IT-secure MAC that is secure against key-shift attacks, which can be of independent interest.
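The sample-then-lock approach named in the abstract can be illustrated with a minimal sketch. The sketch below uses SHA-256 as a computational stand-in for the digital lockers (the paper's constructions achieve information-theoretic security and do not rely on a hash function), and every parameter value (`SUBSET`, `LOCKERS`, key and nonce sizes) is an illustrative assumption, not a parameter from the paper.

```python
import hashlib
import secrets

K = 16        # extracted key length in bytes
PAD = 16      # zero-padding bytes used to detect a successful unlock
SUBSET = 12   # source bits sampled per locker (tuned to the source's entropy rate)
LOCKERS = 64  # number of lockers; more lockers raise the reproduction probability

def _pad_stream(nonce, bits):
    # One-time pad derived from a sampled substring; a hash stands in for a locker.
    return hashlib.sha256(nonce + bytes(bits)).digest()[:K + PAD]

def gen(w):
    """Gen: sample random bit positions and lock a fresh key under each sample."""
    key = secrets.token_bytes(K)
    plain = key + bytes(PAD)                       # key || 0^PAD
    helper = []
    for _ in range(LOCKERS):
        pos = [secrets.randbelow(len(w)) for _ in range(SUBSET)]
        nonce = secrets.token_bytes(16)
        pad = _pad_stream(nonce, [w[p] for p in pos])
        cipher = bytes(a ^ b for a, b in zip(plain, pad))
        helper.append((pos, nonce, cipher))        # helper data is public
    return key, helper

def rep(w_noisy, helper):
    """Rep: try every locker; an all-zero padding signals a correct unlock."""
    for pos, nonce, cipher in helper:
        pad = _pad_stream(nonce, [w_noisy[p] for p in pos])
        plain = bytes(a ^ b for a, b in zip(cipher, pad))
        if plain[K:] == bytes(PAD):
            return plain[:K]
    return None                                    # no locker opened
```

Reproduction succeeds whenever at least one locker's sampled positions avoid every noisy bit, which is overwhelmingly likely for mildly noisy sources under these parameters; reusability comes from each Gen call drawing fresh randomness for its samples and lockers.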
Related papers
- ReliabilityRAG: Effective and Provably Robust Defense for RAG-based Web-Search [69.60882125603133]
We present ReliabilityRAG, a framework for adversarial robustness that explicitly leverages reliability information of retrieved documents. Our work is a significant step towards more effective, provably robust defenses against retrieved corpus corruption in RAG.
arXiv Detail & Related papers (2025-09-27T22:36:42Z) - Wrangling Entropy: Next-Generation Multi-Factor Key Derivation, Credential Hashing, and Credential Generation Functions [47.715495058757824]
We present a novel cryptanalytic technique designed to reveal pernicious leaks of entropy across multiple invocations of a cryptographic key derivation or hash function. We show that it can be used to correctly identify each of the known vulnerabilities in the original MFKDF construction. We propose a new construction, MFKDF2, a next-generation multi-factor key derivation function that can be proven to be end-to-end secure.
arXiv Detail & Related papers (2025-09-07T02:01:53Z) - Zero-Trust Foundation Models: A New Paradigm for Secure and Collaborative Artificial Intelligence for Internet of Things [61.43014629640404]
Zero-Trust Foundation Models (ZTFMs) embed zero-trust security principles into the lifecycle of foundation models (FMs) for Internet of Things (IoT) systems. ZTFMs can enable secure, privacy-preserving AI across distributed, heterogeneous, and potentially adversarial IoT environments.
arXiv Detail & Related papers (2025-05-26T06:44:31Z) - Theoretical Insights in Model Inversion Robustness and Conditional Entropy Maximization for Collaborative Inference Systems [89.35169042718739]
Collaborative inference enables end users to leverage powerful deep learning models without exposing sensitive raw data to cloud servers.
Recent studies have revealed that these intermediate features may not sufficiently preserve privacy, as information can be leaked and raw data can be reconstructed via model inversion attacks (MIAs).
This work first theoretically proves that the conditional entropy of inputs given intermediate features provides a guaranteed lower bound on the reconstruction mean square error (MSE) under any MIA.
Then, we derive a differentiable and solvable measure for bounding this conditional entropy based on Gaussian mixture estimation, and propose a conditional entropy maximization algorithm to enhance inversion robustness.
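The guarantee summarized above resembles the classical entropy-power (Shannon) lower bound, under which any estimator of an input X from features Z incurs MSE at least exp(2 h(X|Z)) / (2πe); the paper's exact statement may differ. A minimal numeric check of the scalar bound in the Gaussian case, where it is tight:

```python
import math

# Scalar entropy-power lower bound: for any estimator Xhat(Z),
#   E[(X - Xhat(Z))^2] >= exp(2 * h(X|Z)) / (2 * pi * e).
# In the Gaussian case X | Z ~ N(mu(Z), sigma2), the bound is achieved
# by the conditional-mean estimator, so it should equal sigma2 exactly.
sigma2 = 0.7
h_cond = 0.5 * math.log(2 * math.pi * math.e * sigma2)   # h(X|Z) for N(., sigma2)
mse_bound = math.exp(2 * h_cond) / (2 * math.pi * math.e)
print(mse_bound)  # equals sigma2 = 0.7
```

Raising h(X|Z), as the proposed algorithm does, therefore directly raises the provable floor on any model inversion attack's reconstruction error.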
arXiv Detail & Related papers (2025-03-01T07:15:21Z) - TrustRAG: Enhancing Robustness and Trustworthiness in RAG [31.231916859341865]
TrustRAG is a framework that systematically filters compromised and irrelevant contents before they are retrieved for generation.
TrustRAG delivers substantial improvements in retrieval accuracy, efficiency, and attack resistance compared to existing approaches.
arXiv Detail & Related papers (2025-01-01T15:57:34Z) - Evaluating Evidential Reliability In Pattern Recognition Based On Intuitionistic Fuzzy Sets [9.542461785588925]
We propose an algorithm for quantifying the reliability of evidence sources, called the Fuzzy Reliability Index (FRI).
The FRI algorithm is based on decision quantification rules derived from IFS, defining the contribution of different BPAs to correct decisions and deriving the evidential reliability from these contributions.
The proposed method effectively enhances the rationality of reliability estimation for evidence sources, making it particularly suitable for classification decision problems in complex scenarios.
arXiv Detail & Related papers (2024-10-30T08:05:26Z) - Statistical Test for Auto Feature Engineering by Selective Inference [12.703556860454565]
Auto Feature Engineering (AFE) plays a crucial role in developing practical machine learning pipelines.
We propose a new statistical test for generated features by AFE algorithms based on a framework called selective inference.
The proposed test can quantify the statistical significance of the generated features in the form of $p$-values, enabling theoretically guaranteed control of the risk of false findings.
arXiv Detail & Related papers (2024-10-13T12:26:51Z) - Rigorous Probabilistic Guarantees for Robust Counterfactual Explanations [80.86128012438834]
We show for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete.
We propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees.
arXiv Detail & Related papers (2024-07-10T09:13:11Z) - Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to the Aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z) - Exploring Incompatible Knowledge Transfer in Few-shot Image Generation [107.81232567861117]
Few-shot image generation learns to generate diverse and high-fidelity images from a target domain using a few reference samples.
Existing FSIG methods select, preserve and transfer prior knowledge from a source generator to learn the target generator.
We propose knowledge truncation, which is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.
arXiv Detail & Related papers (2023-04-15T14:57:15Z) - FederatedTrust: A Solution for Trustworthy Federated Learning [3.202927443898192]
The rapid expansion of the Internet of Things (IoT) has presented challenges for centralized Machine and Deep Learning (ML/DL) methods.
To address concerns regarding data privacy, collaborative and privacy-preserving ML/DL techniques like Federated Learning (FL) have emerged.
arXiv Detail & Related papers (2023-02-20T09:02:24Z) - Reliable Federated Disentangling Network for Non-IID Domain Feature [62.73267904147804]
In this paper, we propose a novel reliable federated disentangling network, termed RFedDis.
To the best of our knowledge, our proposed RFedDis is the first work to develop an FL approach based on evidential uncertainty combined with feature disentangling.
Our proposed RFedDis provides outstanding performance with a high degree of reliability as compared to other state-of-the-art FL approaches.
arXiv Detail & Related papers (2023-01-30T11:46:34Z) - Secure Neuroimaging Analysis using Federated Learning with Homomorphic Encryption [14.269757725951882]
Federated learning (FL) enables distributed computation of machine learning models over disparate, remote data sources.
Recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site.
We propose a framework for secure FL using fully-homomorphic encryption (FHE).
arXiv Detail & Related papers (2021-08-07T12:15:52Z) - Quality of Service Guarantees for Physical Unclonable Functions [90.99207266853986]
Noisy physical unclonable function (PUF) outputs facilitate reliable, secure, and private key agreement.
We introduce a quality of service parameter to control the percentage of PUF outputs for which a target reliability level can be guaranteed.
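One way to read such a quality-of-service guarantee (the code, code parameters, and thresholds below are illustrative assumptions, not the paper's): if an error-correcting code fixes up to t bit flips in an n-bit PUF response, a PUF with per-bit error probability p fails only when more than t bits flip, so the QoS parameter corresponds to the fraction of manufactured PUFs whose p keeps this binomial tail below a target.

```python
import math

def block_error(n, t, p):
    """P(more than t errors among n PUF response bits), bits flipping i.i.d. with prob p."""
    return 1.0 - sum(math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))

# Illustrative numbers: a code correcting t = 13 errors in an n = 127-bit response.
quiet_puf = block_error(127, 13, 0.01)   # low-noise PUF: tail far below a 1e-6 target
noisy_puf = block_error(127, 13, 0.15)   # high-noise PUF: the target cannot be met
```

Sweeping p over the population's noise distribution then yields the percentage of PUF outputs for which the target reliability level can be guaranteed.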
arXiv Detail & Related papers (2021-07-12T18:26:08Z) - GAN-MDF: A Method for Multi-fidelity Data Fusion in Digital Twins [82.71367011801242]
Internet of Things (IoT) collects real-time data of physical systems, such as smart factories, intelligent robots and healthcare systems.
High-fidelity (HF) responses describe the system of interest accurately but are computed costly.
Low-fidelity (LF) responses have a low computational cost but may not meet the required accuracy.
We propose a novel generative adversarial network for MDF in digital twins (GAN-MDF).
arXiv Detail & Related papers (2021-06-24T06:40:35Z) - Detecting Security Fixes in Open-Source Repositories using Static Code Analyzers [8.716427214870459]
We study the extent to which the output of off-the-shelf static code analyzers can be used as a source of features to represent commits in Machine Learning (ML) applications.
We investigate how such features can be used to construct embeddings and train ML models to automatically identify source code commits that contain vulnerability fixes.
We find that the combination of our method with commit2vec represents a tangible improvement over the state of the art in the automatic identification of commits that fix vulnerabilities.
arXiv Detail & Related papers (2021-05-07T15:57:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.