Adversary Resilient Learned Bloom Filters
- URL: http://arxiv.org/abs/2409.06556v4
- Date: Thu, 09 Jan 2025 14:56:37 GMT
- Title: Adversary Resilient Learned Bloom Filters
- Authors: Allison Bishop, Hayder Tirmazi
- Abstract summary: We define a strong adversarial model for the Learned Bloom Filter. Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter.
- Score: 0.4910937238451484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Learned Bloom Filter is a recently proposed data structure that combines the Bloom Filter with a Learning Model while preserving the Bloom Filter's one-sided error guarantees. Creating an adversary-resilient construction of the Learned Bloom Filter with provable guarantees is an open problem. We define a strong adversarial model for the Learned Bloom Filter. Our adversarial model extends an adversarial model designed by prior work for the Classical (i.e. not "Learned") Bloom Filter and considers computationally bounded adversaries that run in probabilistic polynomial time (PPT). Using our model, we construct an adversary-resilient variant of the Learned Bloom Filter called the Downtown Bodega Filter. We show that if pseudo-random permutations exist, then an Adversary Resilient Learned Bloom Filter can be constructed with $2\lambda$ extra bits of memory and at most one extra pseudo-random permutation in the critical path. We construct a hybrid adversarial model for the case where a fraction of the query workload is chosen by an adversary, and we show realistic scenarios where the Downtown Bodega Filter gives better performance guarantees compared to alternative approaches in this hybrid model.
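As a rough illustration of the shape of such a construction (the paper's full details are not given in this abstract), the sketch below implements a learned Bloom filter whose backup-filter lookups pass through a secret keyed transformation, mirroring the "one extra pseudo-random primitive in the critical path" idea. HMAC-SHA256 (a PRF) stands in for the pseudo-random permutation, and the `model`, `threshold`, and class names are illustrative assumptions, not the paper's actual Downtown Bodega Filter:

```python
import hashlib
import hmac
import math
import os

class BloomFilter:
    """Classical Bloom filter: m bits, k hash functions, one-sided error."""
    def __init__(self, n_items: int, fp_rate: float):
        self.m = max(8, math.ceil(-n_items * math.log(fp_rate) / math.log(2) ** 2))
        self.k = max(1, round(self.m / n_items * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item: bytes) -> None:
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def query(self, item: bytes) -> bool:
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

class KeyedLearnedBloomFilter:
    """Learned Bloom filter whose backup lookups are keyed. The 32-byte
    secret key plays the role of the extra bits of secret state; an
    adversary without it cannot precompute colliding queries."""
    def __init__(self, positives, model, threshold, fp_rate=0.01):
        self.key = os.urandom(32)   # secret, never revealed to queriers
        self.model = model          # score function: bytes -> [0, 1]
        self.threshold = threshold
        missed = [x for x in positives if model(x) < threshold]
        self.backup = BloomFilter(max(1, len(missed)), fp_rate)
        for x in missed:
            self.backup.add(self._mask(x))

    def _mask(self, item: bytes) -> bytes:
        # The one extra keyed primitive in the query critical path.
        return hmac.new(self.key, item, hashlib.sha256).digest()

    def query(self, item: bytes) -> bool:
        if self.model(item) >= self.threshold:
            return True             # model accepts (possible false positive)
        return self.backup.query(self._mask(item))
```

Every positive is either accepted by the model or inserted (masked) into the backup filter, so the one-sided error guarantee is preserved.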
Related papers
- Efficient Full-Stack Private Federated Deep Learning with Post-Quantum Security [17.45950557331482]
Federated learning (FL) enables collaborative model training while preserving user data privacy by keeping data local. Despite these advantages, FL remains vulnerable to privacy attacks on user updates and model parameters during training and deployment. We introduce Beskar, a novel framework that provides post-quantum secure aggregation.
arXiv Detail & Related papers (2025-05-09T03:20:48Z) - CP-NCBF: A Conformal Prediction-based Approach to Synthesize Verified Neural Control Barrier Functions [2.092779643426281]
Control Barrier Functions (CBFs) are a practical approach for designing safety-critical controllers. Recent efforts have explored learning-based methods, such as neural CBFs, to address this issue. We propose a novel framework that leverages split-conformal prediction to generate formally verified neural CBFs.
arXiv Detail & Related papers (2025-03-18T10:01:06Z) - Cascaded Learned Bloom Filter for Optimal Model-Filter Size Balance and Fast Rejection [12.555117983678624]
We propose the Cascaded Learned Bloom Filter (CLBF) to address these issues.
Our dynamic programming-based optimization automatically selects configurations that achieve an optimal balance between the model and filter sizes.
Experiments on real-world datasets show that CLBF reduces memory usage by up to 24% and decreases reject time by up to 14 times compared to state-of-the-art learned Bloom filters.
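The abstract does not spell out the cascade, but its "fast rejection" idea can be sketched as follows (reusing the BloomFilter class from the sketch above): each stage pairs a model with a filter holding the positives that stage's model under-scores, so a filter miss at any stage is a safe early rejection. The staged thresholds here are illustrative placeholders, not the paper's dynamic-programming-chosen configuration:

```python
def build_cascade(positives, stage_models, fp_rate=0.01):
    """stage_models: list of (score_fn, threshold) pairs, coarse to fine.
    Each stage's filter stores every positive its model scores below the
    threshold, which is what makes early rejection safe (no false negatives)."""
    stages = []
    for score_fn, threshold in stage_models:
        missed = [x for x in positives if score_fn(x) < threshold]
        bf = BloomFilter(max(1, len(missed)), fp_rate)
        for x in missed:
            bf.add(x)
        stages.append((score_fn, threshold, bf))
    return stages

def cascaded_query(item, stages):
    for score_fn, threshold, backup in stages:
        if score_fn(item) >= threshold:
            return True      # stage model accepts (may be a false positive)
        if not backup.query(item):
            return False     # fast rejection: later stages never run
    return True              # last filter said "maybe": report membership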
arXiv Detail & Related papers (2025-02-06T01:05:41Z) - DPBloomfilter: Securing Bloom Filters with Differential Privacy [17.002793355495136]
The DPBloomfilter integrates a classical differential privacy mechanism, specifically the Randomized Response technique, into the Bloom filter.
It offers robust privacy guarantees under the same running complexity as the standard Bloom filter.
This is the first work to provide differential privacy guarantees for the Bloom filter for membership query problems.
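The abstract names randomized response as the mechanism; one standard way to apply it to a Bloom filter (the exact calibration in the paper may differ) is to flip each bit of the built filter independently with probability 1/(1 + e^epsilon), as sketched below:

```python
import math
import random

def randomized_response(bits: bytearray, epsilon: float) -> bytearray:
    """Flip every bit independently with probability 1 / (1 + e^epsilon).
    Smaller epsilon -> more flipping -> stronger privacy, noisier answers."""
    flip_p = 1.0 / (1.0 + math.exp(epsilon))
    noisy = bytearray(len(bits))
    for i in range(8 * len(bits)):
        bit = (bits[i // 8] >> (i % 8)) & 1
        if random.random() < flip_p:
            bit ^= 1
        noisy[i // 8] |= bit << (i % 8)
    return noisy

# e.g. bf.bits = randomized_response(bf.bits, epsilon=2.0) before publishing
```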
arXiv Detail & Related papers (2025-02-02T06:47:50Z) - A Privacy Model for Classical & Learned Bloom Filters [0.0]
The Classical Bloom Filter (CBF) is a class of Probabilistic Data Structures (PDS).
The Learned Bloom Filter (LBF) is a recently proposed class of PDS that combines the Classical Bloom Filter with a Learning Model.
This paper conducts a rigorous differential privacy-based analysis for the Bloom Filter.
arXiv Detail & Related papers (2025-01-27T03:35:25Z) - Learning for Layered Safety-Critical Control with Predictive Control Barrier Functions [26.692688699287213]
Safety filters leveraging control barrier functions (CBFs) are highly effective for enforcing safe behavior on complex systems. Gaps between the reduced-order model (RoM) and the full-order model (FoM) can result in safety violations. This paper introduces predictive CBFs to address this gap by leveraging rollouts of the FoM to define a predictive robustness term.
arXiv Detail & Related papers (2024-12-05T23:05:25Z) - Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z) - Bi-Factorial Preference Optimization: Balancing Safety-Helpfulness in Language Models [94.39278422567955]
Fine-tuning large language models (LLMs) on human preferences has proven successful in enhancing their capabilities. However, ensuring the safety of LLMs during fine-tuning remains a critical concern. We propose a supervised learning framework called Bi-Factorial Preference Optimization (BFPO) to address this issue.
arXiv Detail & Related papers (2024-08-27T17:31:21Z) - Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample misclassified by the model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
arXiv Detail & Related papers (2024-02-07T09:39:29Z) - Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z) - Generative Slate Recommendation with Reinforcement Learning [49.75985313698214]
Reinforcement learning algorithms can be used to optimize user engagement in recommender systems.
However, RL approaches are intractable in the slate recommendation scenario.
In that setting, an action corresponds to a slate that may contain any combination of items.
In this work we propose to encode slates in a continuous, low-dimensional latent space learned by a variational auto-encoder.
We are able to (i) relax assumptions required by previous work, and (ii) improve the quality of the action selection by modeling full slates.
arXiv Detail & Related papers (2023-01-20T15:28:09Z) - A Critical Analysis of Classifier Selection in Learned Bloom Filters [0.3359875577705538]
"Complexity" of the data used to build the filter might heavily impact on its performance.
We propose a novel methodology, supported by software, for designing, analyzing and implementing Learned Bloom Filters.
Experiments show that the proposed methodology and the supporting software are valid and useful.
arXiv Detail & Related papers (2022-11-28T17:17:18Z) - Compressing (Multidimensional) Learned Bloom Filters [7.6058140480517356]
A Bloom filter reports either that an element is definitely not in the underlying set, or that it is included, subject to a certain false-positive rate.
Deep learning models are used to solve this membership testing problem.
We show that the benefits of learned Bloom filters are apparent only when considering a vast amount of data.
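That one-sided guarantee is easy to verify empirically. The snippet below (reusing the BloomFilter class from the first sketch, with made-up keys) shows that members are never missed, while non-members are accepted at roughly the configured rate:

```python
bf = BloomFilter(n_items=1000, fp_rate=0.01)
members = [f"user-{i}".encode() for i in range(1000)]
for x in members:
    bf.add(x)

assert all(bf.query(x) for x in members)           # never a false negative
probes = [f"other-{i}".encode() for i in range(10_000)]
fp_rate = sum(bf.query(x) for x in probes) / len(probes)
print(f"observed false-positive rate: {fp_rate:.4f}")  # close to 0.01
```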
arXiv Detail & Related papers (2022-08-05T07:54:48Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Proteus: A Self-Designing Range Filter [29.998915706012642]
Proteus is a novel self-designing approximate range filter.
It unifies the probabilistic and deterministic design spaces of state-of-the-art range filters.
Proteus is able to improve end-to-end performance by as much as 5.3x over more brittle state-of-the-art methods.
arXiv Detail & Related papers (2022-06-30T22:43:34Z) - Adversarial Robustness through the Lens of Convolutional Filters [2.0305676256390934]
We investigate 3x3 convolution filters that form in adversarially-trained models.
Filters are extracted from 71 public models on the $\ell_\infty$ RobustBench CIFAR-10/100 and ImageNet1k leaderboards.
arXiv Detail & Related papers (2022-04-05T20:29:16Z) - Gaussian Control Barrier Functions : A Non-Parametric Paradigm to Safety [7.921648699199647]
We propose a non-parametric approach for online synthesis of CBFs using Gaussian Processes (GPs).
GPs have favorable properties, in addition to being non-parametric, such as analytical tractability and robust uncertainty estimation.
We validate our approach experimentally on a quadrotor by demonstrating safe control for fixed but arbitrary safe sets.
arXiv Detail & Related papers (2022-03-29T12:21:28Z) - Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct theoretical analysis on network complexity and an efficient convolution scheme is introduced.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to that of the original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z) - Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty [77.18483084440182]
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
arXiv Detail & Related papers (2021-06-13T23:08:49Z) - Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
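A plausible reading of that formulation (a sketch under assumptions, since the abstract gives no equations) is output = lowpass(image) + coeff * (guide - lowpass(guide)): the guide's high-frequency residual is transferred, scaled by the single coefficient, which the paper estimates but which is simply passed in here:

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int) -> np.ndarray:
    """Naive box low-pass filter with edge padding (O(k^2) per pixel)."""
    k = 2 * radius + 1
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_guided_filter(image, guide, coeff, radius=4):
    """Unsharp-mask view of guided filtering: low-pass the input, then add
    back the guide's high-frequency detail scaled by one coefficient."""
    return box_blur(image, radius) + coeff * (guide - box_blur(guide, radius))
```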
arXiv Detail & Related papers (2021-06-02T19:15:34Z) - Safe Exploration in Model-based Reinforcement Learning using Control Barrier Functions [1.005130974691351]
We develop a novel class of CBFs (LCBFs) that retain the beneficial properties of CBFs for developing minimally-invasive safe control policies.
We show how these LCBFs can be used to augment a learning-based control policy so as to guarantee safety and then leverage this approach to develop a safe exploration framework.
arXiv Detail & Related papers (2021-04-16T15:29:58Z) - Adversarial Robustness by Design through Analog Computing and Synthetic Gradients [80.60080084042666]
We propose a new defense mechanism against adversarial attacks inspired by an optical co-processor.
In the white-box setting, our defense works by obfuscating the parameters of the random projection.
We find the combination of a random projection and binarization in the optical system also improves robustness against various types of black-box attacks.
arXiv Detail & Related papers (2021-01-06T16:15:29Z) - Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters [64.46270549587004]
Convolutional neural networks (CNNs) have been successfully used in a range of tasks.
CNNs are often viewed as "black boxes" and lack interpretability.
We propose a novel strategy to train interpretable CNNs by encouraging class-specific filters.
arXiv Detail & Related papers (2020-07-16T09:12:26Z) - Partitioned Learned Bloom Filter [31.748077944821315]
We show how to frame the problem of optimal model utilization as an optimization problem.
We derive algorithms that can achieve near-optimal performance in many cases.
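The optimization itself is the paper's contribution; the sketch below only shows the partitioned layout it optimizes over (reusing the BloomFilter class from the first sketch): score space is cut into regions, and each region gets its own backup filter with its own false-positive budget. The boundaries and per-region rates are placeholder inputs here, where the paper would choose them near-optimally:

```python
import bisect

class PartitionedLBF:
    """Route each key by its model score into one of len(boundaries) + 1
    regions, each with a dedicated backup Bloom filter."""
    def __init__(self, positives, model, boundaries, fp_rates):
        assert len(fp_rates) == len(boundaries) + 1
        self.model, self.boundaries = model, boundaries
        buckets = [[] for _ in fp_rates]
        for x in positives:
            buckets[self._region(model(x))].append(x)
        self.filters = []
        for keys, rate in zip(buckets, fp_rates):
            bf = BloomFilter(max(1, len(keys)), rate)
            for x in keys:
                bf.add(x)
            self.filters.append(bf)

    def _region(self, score: float) -> int:
        return bisect.bisect_right(self.boundaries, score)

    def query(self, item: bytes) -> bool:
        return self.filters[self._region(self.model(item))].query(item)
```

High-score regions can tolerate a larger false-positive rate and hence smaller filters, which is the memory saving the partitioning buys.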
arXiv Detail & Related papers (2020-06-05T00:05:32Z) - Selective Inference for Latent Block Models [50.83356836818667]
This study provides a selective inference method for latent block models.
We construct a statistical test on a set of row and column cluster memberships of a latent block model.
The proposed exact and approximated tests work effectively, compared to the naive test that did not take the selective bias into account.
arXiv Detail & Related papers (2020-05-27T10:44:19Z) - Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
A reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
RL-CBF-CLF-QP addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.