A Parallel Region-Adaptive Differential Privacy Framework for Image Pixelization
- URL: http://arxiv.org/abs/2511.04261v1
- Date: Thu, 06 Nov 2025 10:51:20 GMT
- Title: A Parallel Region-Adaptive Differential Privacy Framework for Image Pixelization
- Authors: Ming Liu,
- Abstract summary: Differentially private pixelization offers mathematically guaranteed protection for visual data through noise addition. We propose a novel parallel, region-adaptive pixelization framework that combines the theoretical rigor of differential privacy with practical efficiency. Our method adaptively adjusts grid sizes and noise scales based on regional complexity, leveraging GPU parallelism to achieve significant runtime acceleration. This validates its suitability for real-time privacy-critical applications such as elderly care, smart home monitoring, driver behavior analysis, and crowd behavior monitoring.
- Score: 4.738949927143789
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread deployment of high-resolution visual sensing systems, coupled with the rise of foundation models, has amplified privacy risks in video-based applications. Differentially private pixelization offers mathematically guaranteed protection for visual data through grid-based noise addition, but challenges remain in preserving task-relevant fidelity, achieving scalability, and enabling efficient real-time deployment. To address this, we propose a novel parallel, region-adaptive pixelization framework that combines the theoretical rigor of differential privacy with practical efficiency. Our method adaptively adjusts grid sizes and noise scales based on regional complexity, leveraging GPU parallelism to achieve significant runtime acceleration compared to the classical baseline. A lightweight storage scheme is introduced by retaining only essential noisy statistics, significantly reducing space overhead. Formal privacy analysis is provided under the Laplace mechanism and parallel composition theorem. Extensive experiments on the PETS, Venice-2, and PPM-100 datasets demonstrate favorable privacy-utility trade-offs and significant runtime/storage reductions. A face re-identification attack experiment on CelebA further confirms the method's effectiveness in preventing identity inference. This validates its suitability for real-time privacy-critical applications such as elderly care, smart home monitoring, driver behavior analysis, and crowd behavior monitoring.
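The grid-based noise addition the abstract builds on can be sketched roughly as follows: each block of the image is replaced by its mean pixel value plus Laplace noise. The cell size, privacy budget `eps`, the per-individual pixel bound `m`, and the sensitivity bound `255*m/cell**2` are illustrative assumptions modeled on the classical pixelization baseline, not the paper's region-adaptive configuration. Because grid cells are disjoint, the parallel composition theorem keeps the overall budget at `eps`.

```python
import numpy as np

def dp_pixelize(img, cell=8, eps=1.0, m=16, rng=None):
    """Grid-based differentially private pixelization (Laplace mechanism).

    Each cell x cell block is replaced by its noisy mean. Assuming one
    individual can affect at most m pixels, one block mean shifts by at
    most 255*m / cell**2, which we use as the L1 sensitivity. Blocks are
    disjoint, so parallel composition gives an overall eps guarantee.
    Parameters are illustrative, not the paper's exact settings.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    out = np.empty_like(img, dtype=np.float64)
    sensitivity = 255.0 * m / (cell * cell)
    for y in range(0, h, cell):
        for x in range(0, w, cell):
            block = img[y:y + cell, x:x + cell]
            noisy_mean = block.mean() + rng.laplace(scale=sensitivity / eps)
            out[y:y + cell, x:x + cell] = noisy_mean
    return np.clip(out, 0, 255).astype(np.uint8)
```

The lightweight storage scheme mentioned in the abstract would correspond to keeping only the per-block noisy means rather than the full pixelized image.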
Related papers
- CityGuard: Graph-Aware Private Descriptors for Bias-Resilient Identity Search Across Urban Cameras [16.147944008359957]
CityGuard is a topology-aware transformer for privacy-preserving identity retrieval in decentralized surveillance. A dispersion-adaptive metric learner adjusts instance-level margins according to feature spread, increasing intra-class compactness. Private embedding maps are coupled with compact approximate indexes to support secure and cost-efficient deployment.
arXiv Detail & Related papers (2026-02-20T08:00:17Z) - Event-based Visual Deformation Measurement [76.25283405575108]
Visual Deformation Measurement aims to recover dense deformation fields by tracking surface motion from camera observations. Traditional image-based methods rely on minimal inter-frame motion to constrain the correspondence search space. We propose an event-frame fusion framework that exploits events for temporally dense motion cues and frames for spatially dense, precise estimation.
arXiv Detail & Related papers (2026-02-16T01:04:48Z) - ImprovDML: Improved Trade-off in Private Byzantine-Resilient Distributed Machine Learning [22.85986751447643]
A common strategy involves integrating Byzantine-resilient aggregation rules with differential privacy mechanisms. We propose ImprovDML, which achieves high model accuracy while simultaneously ensuring privacy preservation. We demonstrate that it enables an improved trade-off between model accuracy and differential privacy.
arXiv Detail & Related papers (2025-06-18T06:53:52Z) - Trade-offs in Privacy-Preserving Eye Tracking through Iris Obfuscation: A Benchmarking Study [44.44776028287441]
We benchmark blurring, noising, downsampling, the rubber sheet model, and iris style transfer to obfuscate user identity. Our experiments show that canonical image processing methods like blurring and noising have only a marginal impact on deep learning-based tasks. While downsampling, the rubber sheet model, and iris style transfer are all effective at hiding user identifiers, iris style transfer, at a higher computation cost, outperforms the others in both utility tasks.
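One of the canonical obfuscation baselines mentioned above, downsampling, can be sketched as a downsample-then-upsample pass; the function name and the factor are illustrative choices, not the study's exact protocol.

```python
import numpy as np

def downsample_obfuscate(img, factor=4):
    """Obfuscate by keeping every `factor`-th pixel, then upsampling back
    to the original size with nearest-neighbor repetition. Fine identity
    cues are discarded while coarse structure survives."""
    small = img[::factor, ::factor]
    up = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return up[:img.shape[0], :img.shape[1]]
```

Blurring and noising baselines would follow the same shape: a single lossy transform applied per frame before any downstream task.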
arXiv Detail & Related papers (2025-04-14T14:29:38Z) - Linear-Time User-Level DP-SCO via Robust Statistics [55.350093142673316]
User-level differentially private stochastic convex optimization (DP-SCO) has garnered significant attention due to the importance of safeguarding user privacy in machine learning applications. Current methods, such as those based on differentially private stochastic gradient descent (DP-SGD), often struggle with high noise accumulation and suboptimal utility. We introduce a novel linear-time algorithm that leverages robust statistics, specifically the median and trimmed mean, to overcome these challenges.
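The robust aggregators named above bound each user's influence on the aggregate, which is the property such user-level DP approaches exploit. A standalone trimmed-mean helper might look like this; it is illustrative only, not the paper's algorithm or its privacy calibration.

```python
import numpy as np

def trimmed_mean(values, trim_frac=0.1):
    """Drop the lowest and highest trim_frac fraction of values, then
    average the rest; fall back to the median when too few values remain.
    A single outlying user moves the result far less than with a plain
    mean."""
    v = np.sort(np.asarray(values, dtype=float))
    k = int(len(v) * trim_frac)
    if len(v) > 2 * k:
        return v[k:len(v) - k].mean()
    return float(np.median(v))
```

In a user-level setting, `values` would be per-user statistics (e.g. per-user average gradients), so trimming caps any one user's contribution before noise is added.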
arXiv Detail & Related papers (2025-02-13T02:05:45Z) - Privacy-Preserving Diffusion Model Using Homomorphic Encryption [5.282062491549009]
We introduce a privacy-preserving stable diffusion framework leveraging homomorphic encryption, called HE-Diffusion.
We propose a novel min-distortion method that enables efficient partial image encryption.
We successfully implement HE-based privacy-preserving stable diffusion inference.
arXiv Detail & Related papers (2024-03-09T04:56:57Z) - TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
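The combination of a ternary compressor with majority voting can be sketched as below. This is a generic stochastic ternary quantizer plus a coordinate-wise vote, illustrative of the mechanism described above; it is not TernaryVote's exact compressor or its f-DP noise calibration.

```python
import numpy as np

def ternary_compress(grad, rng=None):
    """Stochastically map each gradient coordinate to {-1, 0, +1}, with
    the probability of a nonzero vote proportional to its magnitude
    (scaled by the max). The randomness also contributes to privacy."""
    rng = np.random.default_rng() if rng is None else rng
    scale = np.max(np.abs(grad))
    if scale == 0:
        return np.zeros_like(grad, dtype=np.int8)
    p = np.abs(grad) / scale
    nonzero = rng.random(grad.shape) < p
    return (np.sign(grad) * nonzero).astype(np.int8)

def majority_vote(votes):
    """Coordinate-wise majority vote over workers' ternary messages.
    Any single Byzantine worker can shift each coordinate's tally by at
    most 1, which is the source of the resilience."""
    return np.sign(np.sum(votes, axis=0)).astype(np.int8)
```

Each worker would send `ternary_compress(grad)` (3 states per coordinate, hence the communication saving), and the server would step in the direction of `majority_vote(...)`.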
arXiv Detail & Related papers (2024-02-16T16:41:14Z) - Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Since the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z) - Efficient Logistic Regression with Local Differential Privacy [0.0]
Internet of Things devices are expanding rapidly and generating huge amount of data.
There is an increasing need to explore data collected from these devices.
Collaborative learning provides a strategic solution for the Internet of Things settings but also raises public concern over data privacy.
arXiv Detail & Related papers (2022-02-05T22:44:03Z) - An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.