PriMask: Cascadable and Collusion-Resilient Data Masking for Mobile
Cloud Inference
- URL: http://arxiv.org/abs/2211.06716v1
- Date: Sat, 12 Nov 2022 17:54:13 GMT
- Title: PriMask: Cascadable and Collusion-Resilient Data Masking for Mobile
Cloud Inference
- Authors: Linshan Jiang, Qun Song, Rui Tan, Mo Li
- Abstract summary: A mobile device uses a secret small-scale neural network called MaskNet to mask the data before transmission.
PriMask significantly weakens the cloud's capability to recover the data or extract certain private attributes.
We apply PriMask to three mobile sensing applications with diverse modalities and complexities.
- Score: 8.699639153183723
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mobile cloud offloading is indispensable for inference tasks based on
large-scale deep models. However, transmitting privacy-rich inference data to
the cloud incurs concerns. This paper presents the design of a system called
PriMask, in which the mobile device uses a secret small-scale neural network
called MaskNet to mask the data before transmission. PriMask significantly
weakens the cloud's capability to recover the data or extract certain private
attributes. The MaskNet is cascadable in that the mobile can opt in to or
out of its use seamlessly without any modifications to the cloud's inference
service. Moreover, the mobiles use different MaskNets, such that the collusion
between the cloud and some mobiles does not weaken the protection for other
mobiles. We devise a split adversarial learning method to train a neural
network that generates a new MaskNet quickly (within two seconds) at run time.
We apply PriMask to three mobile sensing applications with diverse modalities
and complexities, i.e., human activity recognition, urban environment
crowdsensing, and driver behavior recognition. Results show PriMask's
effectiveness in all three applications.
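The cascading property described in the abstract can be sketched in a few lines: the MaskNet maps the input to a masked tensor of the same shape, so the cloud's inference service accepts masked and raw data through the same call, and each device holds its own secret MaskNet. This is a toy illustration only, not the paper's actual architecture; `make_masknet`, `cloud_model`, and all dimensions are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_masknet(input_dim, hidden_dim=16, seed=None):
    """Generate a per-device MaskNet: a small MLP whose output has the
    same shape as its input, so it can be prepended to the cloud model.
    (Hypothetical stand-in for the paper's generated MaskNets.)"""
    r = np.random.default_rng(seed)
    w1 = r.normal(scale=0.5, size=(input_dim, hidden_dim))
    w2 = r.normal(scale=0.5, size=(hidden_dim, input_dim))
    def masknet(x):
        h = np.tanh(x @ w1)
        return np.tanh(h @ w2)      # masked data, same shape as x
    return masknet

def cloud_model(x):
    """Placeholder for the cloud's unchanged inference service."""
    logits = x @ rng.normal(size=(8, 3))  # toy classifier, 8 features -> 3 classes
    return logits.argmax(axis=-1)

x = rng.normal(size=(4, 8))         # a batch of raw sensor features

# Opt out: send raw data to the cloud.
y_raw = cloud_model(x)

# Opt in: each mobile uses its own secret MaskNet; the cloud-side
# call is exactly the same, with no modification to the service.
mask_a = make_masknet(8, seed=1)    # device A's MaskNet
mask_b = make_masknet(8, seed=2)    # device B's MaskNet (different weights)
y_a = cloud_model(mask_a(x))
y_b = cloud_model(mask_b(x))

assert mask_a(x).shape == x.shape            # cascadable: shapes preserved
assert y_raw.shape == y_a.shape == y_b.shape
assert not np.allclose(mask_a(x), mask_b(x)) # per-device masks differ
```

Because every device's MaskNet has different weights, learning device A's mask (e.g., through collusion with the cloud) does not directly expose device B's masking, which is the intuition behind the collusion-resilience claim.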
Related papers
- MaDi: Learning to Mask Distractions for Generalization in Visual Deep
Reinforcement Learning [40.7452827298478]
We introduce MaDi, a novel algorithm that learns to mask distractions using only the reward signal.
In MaDi, the conventional actor-critic structure of deep reinforcement learning agents is complemented by a small third sibling, the Masker.
Our algorithm improves the agent's focus with useful masks, while its efficient Masker network only adds 0.2% more parameters to the original structure.
arXiv Detail & Related papers (2023-12-23T20:11:05Z)
- Mobile-Cloud Inference for Collaborative Intelligence [3.04585143845864]
There is an increasing need for faster execution and lower energy consumption for deep learning model inference.
Historically, the models run on mobile devices have been smaller and simpler in comparison to large state-of-the-art research models, which can only run on the cloud.
Cloud-only inference has drawbacks such as increased network bandwidth consumption and higher latency.
There is an alternative approach: shared mobile-cloud inference.
arXiv Detail & Related papers (2023-06-24T14:22:53Z)
- Towards Improved Input Masking for Convolutional Neural Networks [66.99060157800403]
We propose a new masking method for CNNs we call layer masking.
We show that our method is able to eliminate or minimize the influence of the mask shape or color on the output of the model.
We also demonstrate how the shape of the mask may leak information about the class, thus affecting estimates of model reliance on class-relevant features.
arXiv Detail & Related papers (2022-11-26T19:31:49Z)
- Mask or Non-Mask? Robust Face Mask Detector via Triplet-Consistency Representation Learning [23.062034116854875]
In the absence of vaccines or medicines to stop COVID-19, one of the effective methods to slow the spread of the coronavirus is to wear a face mask.
To mandate the use of face masks or coverings in public areas, additional human resources are required, which is tedious and attention-intensive.
We propose a face mask detection framework that uses the context attention module to enable the effective attention of the feed-forward convolution neural network.
arXiv Detail & Related papers (2021-10-01T16:44:06Z)
- Contrastive Context-Aware Learning for 3D High-Fidelity Mask Face Presentation Attack Detection [103.7264459186552]
Face presentation attack detection (PAD) is essential to secure face recognition systems.
Most existing 3D mask PAD benchmarks suffer from several drawbacks.
We introduce a large-scale High-Fidelity Mask dataset to bridge the gap to real-world applications.
arXiv Detail & Related papers (2021-04-13T12:48:38Z)
- Facial Masks and Soft-Biometrics: Leveraging Face Recognition CNNs for Age and Gender Prediction on Mobile Ocular Images [53.913598771836924]
We address the use of selfie ocular images captured with smartphones to estimate age and gender.
We adapt two existing lightweight CNNs proposed in the context of the ImageNet Challenge.
Some networks are further pre-trained for face recognition, for which very large training databases are available.
arXiv Detail & Related papers (2021-03-31T01:48:29Z)
- Mask Attention Networks: Rethinking and Strengthen Transformer [70.95528238937861]
Transformer is an attention-based neural network that consists of two sublayers: the Self-Attention Network (SAN) and the Feed-Forward Network (FFN).
arXiv Detail & Related papers (2021-03-25T04:07:44Z)
- BinaryCoP: Binary Neural Network-based COVID-19 Face-Mask Wear and Positioning Predictor on Edge Devices [63.56630165340053]
Face masks offer an effective solution in healthcare for bi-directional protection against air-borne diseases.
CNNs offer an excellent solution for face recognition and classification of correct mask wearing and positioning.
CNNs can be used at entrances to corporate buildings, airports, shopping areas, and other indoor locations, to mitigate the spread of the virus.
arXiv Detail & Related papers (2021-02-06T00:14:06Z)
- Shared Mobile-Cloud Inference for Collaborative Intelligence [35.103437828235826]
We present a shared mobile-cloud inference approach for neural model inference.
The strategy can improve inference latency, energy consumption, and network bandwidth usage.
Further performance gain can be achieved by compressing the feature tensor before its transmission.
arXiv Detail & Related papers (2020-02-01T07:12:01Z)
- Ternary Feature Masks: zero-forgetting for task-incremental learning [68.34518408920661]
We propose a zero-forgetting approach to continual learning in the task-aware regime.
By using ternary masks we can upgrade a model to new tasks, reusing knowledge from previous tasks while not forgetting anything about them.
Our method outperforms current state-of-the-art while reducing memory overhead in comparison to weight-based approaches.
arXiv Detail & Related papers (2020-01-23T18:08:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.