Fast Private Adaptive Query Answering for Large Data Domains
- URL: http://arxiv.org/abs/2602.05674v1
- Date: Thu, 05 Feb 2026 13:57:56 GMT
- Title: Fast Private Adaptive Query Answering for Large Data Domains
- Authors: Miguel Fuentes, Brett Mullins, Yingtai Xiao, Daniel Kifer, Cameron Musco, Daniel Sheldon
- Abstract summary: We introduce new techniques to integrate residual queries into state-of-the-art adaptive mechanisms such as AIM. Together these contributions reduce error, improve speed, and simplify residual query operations. We integrate these innovations into a new mechanism (AIM+GReM), which improves AIM by using fast residual-based reconstruction instead of a graphical model approach.
- Score: 24.608957804631462
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Privately releasing marginals of a tabular dataset is a foundational problem in differential privacy. However, state-of-the-art mechanisms suffer from a computational bottleneck when marginal estimates are reconstructed from noisy measurements. Recently, residual queries were introduced and shown to lead to highly efficient reconstruction in the batch query answering setting. We introduce new techniques to integrate residual queries into state-of-the-art adaptive mechanisms such as AIM. Our contributions include a novel conceptual framework for residual queries using multi-dimensional arrays, lazy updating strategies, and adaptive optimization of the per-round privacy budget allocation. Together these contributions reduce error, improve speed, and simplify residual query operations. We integrate these innovations into a new mechanism (AIM+GReM), which improves AIM by using fast residual-based reconstruction instead of a graphical model approach. Our mechanism is orders of magnitude faster than the original framework and demonstrates competitive error and greatly improved scalability.
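The abstract frames residual queries through multi-dimensional arrays: a dataset becomes a contingency tensor, a marginal is a sum over the dropped axes, and a residual removes all lower-order information from that marginal. A minimal sketch of this view, assuming one common construction in which the residual is obtained by mean-centering the marginal along each retained axis (the paper's actual operators and names may differ; `marginal`, `residual`, and the toy binary data here are illustrative only):

```python
import numpy as np

# Toy contingency tensor for three binary attributes.
rng = np.random.default_rng(0)
records = rng.integers(0, 2, size=(100, 3))
table = np.zeros((2, 2, 2))
for row in records:
    table[tuple(row)] += 1

def marginal(table, keep):
    """Marginal over the attributes in `keep`: sum out every other axis."""
    drop = tuple(ax for ax in range(table.ndim) if ax not in keep)
    return table.sum(axis=drop)

def residual(table, keep):
    """One simple residual construction: mean-center the marginal along
    each retained axis, which zeroes out all lower-order marginal content."""
    m = marginal(table, keep)
    for ax in range(m.ndim):
        m = m - m.mean(axis=ax, keepdims=True)
    return m

m01 = marginal(table, keep=(0, 1))   # 2x2 marginal over attributes 0 and 1
r01 = residual(table, keep=(0, 1))   # its residual: sums to zero on every axis
```

Because the residual sums to zero along each axis, residuals for different attribute subsets carry disjoint information, which is what makes reconstruction from them cheap relative to solving a joint consistency problem.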
Related papers
- MaRI: Accelerating Ranking Model Inference via Structural Re-parameterization in Large Scale Recommendation System [24.4139949756995]
We propose MaRI, a novel Matrix Re-Parameterized Inference framework. It serves as a complementary approach to existing techniques while accelerating ranking model inference without any accuracy loss. MaRI is motivated by the observation that user-side computation is redundant in feature fusion matrix multiplication.
arXiv Detail & Related papers (2026-02-26T15:19:43Z) - Revisiting the Privacy Risks of Split Inference: A GAN-Based Data Reconstruction Attack via Progressive Feature Optimization [49.32786615205064]
Split Inference (SI) partitions computation between edge devices and the cloud to reduce latency and protect user privacy. Recent advances in Data Reconstruction Attacks (DRAs) reveal that intermediate features exchanged in SI can be exploited to recover sensitive input data. Existing DRAs are typically effective only on shallow models and fail to fully leverage semantic priors. We propose a novel GAN-based DRA framework with Progressive Feature Optimization (PFO), which decomposes the generator into hierarchical blocks and incrementally refines intermediate representations to enhance the semantic fidelity of reconstructed images.
arXiv Detail & Related papers (2025-08-28T10:00:39Z) - RMoA: Optimizing Mixture-of-Agents through Diversity Maximization and Residual Compensation [6.364685086217188]
We propose Residual Mixture-of-Agents (RMoA), which integrates residual connections to optimize efficiency and reliability. RMoA achieves state-of-the-art performance on benchmarks spanning alignment, mathematical reasoning, code generation, and multitask understanding.
arXiv Detail & Related papers (2025-05-30T10:23:11Z) - Towards Generalizable Trajectory Prediction Using Dual-Level Representation Learning And Adaptive Prompting [107.4034346788744]
Existing vehicle trajectory prediction models struggle with generalizability, prediction uncertainties, and handling complex interactions. We propose Perceiver with Register queries (PerReg+), a novel trajectory prediction framework that introduces: (1) Dual-Level Representation Learning via Self-Distillation (SD) and Masked Reconstruction (MR), capturing global context and fine-grained details; (2) Enhanced Multimodality using register-based queries and pretraining, eliminating the need for clustering and suppression; and (3) Adaptive Prompt Tuning during fine-tuning, freezing the main architecture and optimizing a small number of prompts for efficient adaptation.
arXiv Detail & Related papers (2025-01-08T20:11:09Z) - The Efficiency vs. Accuracy Trade-off: Optimizing RAG-Enhanced LLM Recommender Systems Using Multi-Head Early Exit [46.37267466656765]
This paper presents an optimization framework that combines Retrieval-Augmented Generation (RAG) with an innovative multi-head early exit architecture. Our experiments demonstrate that this architecture decreases inference time without sacrificing the accuracy needed for reliable recommendation delivery.
arXiv Detail & Related papers (2025-01-04T03:26:46Z) - HAFLQ: Heterogeneous Adaptive Federated LoRA Fine-tuned LLM with Quantization [55.972018549438964]
Federated fine-tuning of pre-trained Large Language Models (LLMs) enables task-specific adaptation across diverse datasets while preserving privacy. We propose HAFLQ (Heterogeneous Adaptive Federated Low-Rank Adaptation Fine-tuned LLM with Quantization), a novel framework for efficient and scalable fine-tuning of LLMs in heterogeneous environments. Experimental results on the text classification task demonstrate that HAFLQ reduces memory usage by 31%, lowers communication cost by 49%, improves accuracy by 50%, and achieves faster convergence compared to the baseline method.
arXiv Detail & Related papers (2024-11-10T19:59:54Z) - Efficient and Private Marginal Reconstruction with Local Non-Negativity [28.968601257521644]
We introduce a principled and efficient postprocessing method, ReM, for reconstructing answers to marginal queries. An extension, GReM-LNN, reconstructs marginals under Gaussian noise while satisfying consistency and non-negativity. We demonstrate the utility of ReM and GReM-LNN by applying them to improve existing private query answering mechanisms.
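The non-negativity constraint above matters because Gaussian noise can push estimated marginal cells below zero. GReM-LNN enforces it via a principled optimization; the toy stand-in below only illustrates the problem it solves, using a naive clip-and-rescale projection (the function name and rescaling rule are this sketch's assumptions, not the paper's method):

```python
import numpy as np

def clip_and_rescale(noisy_marginal, total):
    """Naive non-negativity repair: clip negative cells to zero, then
    rescale so the marginal sums to the known total. A real method such
    as GReM-LNN would instead solve a constrained estimation problem."""
    x = np.clip(noisy_marginal, 0.0, None)
    s = x.sum()
    if s == 0.0:
        # Degenerate case: fall back to a uniform marginal.
        return np.full_like(x, total / x.size)
    return x * (total / s)

noisy = np.array([5.2, -1.3, 3.1, 0.0])   # noisy cells, one negative
fixed = clip_and_rescale(noisy, total=8.0)
```

Even this crude projection is known to reduce error on average, since truncating impossible negative values can only move estimates toward the truth; the gap between it and a principled reconstruction is what postprocessing methods like ReM target.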
arXiv Detail & Related papers (2024-10-01T21:39:28Z) - Temporal Feature Matters: A Framework for Diffusion Model Quantization [105.3033493564844]
Diffusion models rely on the time-step for multi-round denoising. We introduce a novel quantization framework that includes three strategies. This framework preserves most of the temporal information and ensures high-quality end-to-end generation.
arXiv Detail & Related papers (2024-07-28T17:46:15Z) - Efficient Micro-Structured Weight Unification and Pruning for Neural
Network Compression [56.83861738731913]
Deep Neural Network (DNN) models are essential for practical applications, especially on resource-limited devices.
Previous unstructured or structured weight pruning methods rarely deliver real inference acceleration.
We propose a generalized weight unification framework at a hardware-compatible micro-structured level to achieve a high degree of compression and acceleration.
arXiv Detail & Related papers (2021-06-15T17:22:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.