Multi-label Learning from Privacy-Label
- URL: http://arxiv.org/abs/2312.13312v1
- Date: Wed, 20 Dec 2023 09:09:56 GMT
- Title: Multi-label Learning from Privacy-Label
- Authors: Zhongnian Li, Haotian Ren, Tongfeng Sun, Zhichen Li
- Abstract summary: We propose a novel setting named Multi-Label Learning from Privacy-Label (MLLPL).
During the labeling phase, each privacy-label is randomly combined with a non-privacy label to form a Privacy-Label Unit (PLU).
If any label within a PLU is positive, the unit is labeled as positive; otherwise, it is labeled negative, as shown in Figure 1.
- Score: 6.403667773024114
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-label Learning (MLL) often involves assigning multiple relevant
labels to each instance, which can leak sensitive information (such as smoking,
diseases, etc.) about the instances. However, existing MLL methods fail to
protect this sensitive information. In this paper, we propose a novel setting
named Multi-Label Learning from Privacy-Label (MLLPL), which conceals labels
via Privacy-Label Units (CLPLU). Specifically, during the labeling phase, each
privacy-label is randomly combined with a non-privacy label to form a
Privacy-Label Unit (PLU). If any label within a PLU is positive, the unit is
labeled as positive; otherwise, it is labeled negative, as shown in Figure 1.
PLU ensures that only non-privacy labels appear in the label set, while the
privacy-labels remain concealed. Moreover, we propose a Privacy-Label Unit Loss
(PLUL) to learn the optimal classifier by minimizing the empirical risk over
PLUs. Experimental results on multiple benchmark datasets demonstrate the
effectiveness and superiority of the proposed method.
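The PLU construction described in the abstract is concrete enough to sketch. The snippet below is a minimal illustration of the labeling step only, assuming a binary label matrix; the function and variable names (`build_plu_labels`, `privacy_idx`, `nonprivacy_idx`) are hypothetical and not from the paper, and the paper's PLUL loss is not reproduced because its exact form is not given in the abstract.

```python
import numpy as np

def build_plu_labels(Y, privacy_idx, nonprivacy_idx, rng=None):
    """Pair each privacy label with a random non-privacy label and release
    only the resulting PLU labels plus the unpaired non-privacy labels.

    Y: (n_samples, n_labels) binary multi-label matrix.
    Assumes len(nonprivacy_idx) >= len(privacy_idx).
    """
    rng = np.random.default_rng() if rng is None else rng

    # Randomly assign one distinct non-privacy partner to each privacy label.
    partners = rng.choice(nonprivacy_idx, size=len(privacy_idx), replace=False)

    # A PLU is positive if any label inside the unit is positive (logical OR),
    # so the privacy label itself never appears in the released label set.
    plu_labels = np.logical_or(Y[:, privacy_idx], Y[:, partners]).astype(int)

    # Non-privacy labels not absorbed into a PLU are released unchanged.
    remaining = [j for j in nonprivacy_idx if j not in set(partners)]
    return plu_labels, Y[:, remaining], partners

```
A classifier would then be trained against the released PLU labels and the remaining non-privacy labels, e.g. by minimizing an empirical risk defined on the PLUs, which is the role the paper assigns to PLUL.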
Related papers
- Mixed Blessing: Class-Wise Embedding guided Instance-Dependent Partial Label Learning [53.64180787439527]
In partial label learning (PLL), every sample is associated with a candidate label set comprising the ground-truth label and several noisy labels.
For the first time, we create class-wise embeddings for each sample, which allow us to explore the relationship of instance-dependent noisy labels.
To reduce the high label ambiguity, we introduce the concept of class prototypes containing global feature information.
arXiv Detail & Related papers (2024-12-06T13:25:39Z) - Learning from Concealed Labels [5.235218636685312]
We propose a novel setting to protect privacy of each instance, namely learning from concealed labels for multi-class classification.
Concealed labels keep sensitive labels from appearing in the label set during the label collection stage: sensitive data are annotated with a concealed-label set consisting of a 'none' option and some randomly sampled insensitive labels.
arXiv Detail & Related papers (2024-12-03T08:00:19Z) - Differential Privacy in Continual Learning: Which Labels to Update? [14.721537886922864]
Continual learning conflicts with strict privacy required for sensitive training data. We highlight that failing to account for privacy leakage through the set of labels a model can output can break the privacy of otherwise valid DP algorithms.
arXiv Detail & Related papers (2024-11-07T13:08:06Z) - Exploiting Conjugate Label Information for Multi-Instance Partial-Label Learning [61.00359941983515]
Multi-instance partial-label learning (MIPL) addresses scenarios where each training sample is represented as a multi-instance bag associated with a candidate label set containing one true label and several false positives.
ELIMIPL exploits the conjugate label information to improve the disambiguation performance.
arXiv Detail & Related papers (2024-08-26T15:49:31Z) - BadLabel: A Robust Perspective on Evaluating and Enhancing Label-noise
Learning [113.8799653759137]
We introduce a novel label noise type called BadLabel, which can degrade the performance of existing LNL algorithms by a large margin.
BadLabel is crafted based on the label-flipping attack against standard classification.
We propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels again distinguishable.
arXiv Detail & Related papers (2023-05-28T06:26:23Z) - Imprecise Label Learning: A Unified Framework for Learning with Various Imprecise Label Configurations [91.67511167969934]
Imprecise label learning (ILL) is a framework that unifies learning with various imprecise label configurations.
We demonstrate that ILL can seamlessly adapt to partial label learning, semi-supervised learning, noisy label learning, and, more importantly, a mixture of these settings.
arXiv Detail & Related papers (2023-05-22T04:50:28Z) - Complementary Classifier Induced Partial Label Learning [54.61668156386079]
In partial label learning (PLL), each training sample is associated with a set of candidate labels, among which only one is valid.
In disambiguation, the existing works usually do not fully investigate the effectiveness of the non-candidate label set.
In this paper, we use the non-candidate labels to induce a complementary classifier, which naturally forms an adversarial relationship against the traditional classifier.
arXiv Detail & Related papers (2023-05-17T02:13:23Z) - Pushing One Pair of Labels Apart Each Time in Multi-Label Learning: From
Single Positive to Full Labels [29.11589378265006]
In Multi-Label Learning (MLL), it is extremely challenging to accurately annotate every appearing object due to expensive costs and limited knowledge.
Existing Multi-Label Learning methods treat unknown labels as negatives, which introduces false negatives as noisy labels.
We propose a more practical and cheaper alternative: Single Positive Multi-Label Learning (SPMLL), where only one positive label needs to be provided per sample.
arXiv Detail & Related papers (2023-02-28T16:08:12Z) - One Positive Label is Sufficient: Single-Positive Multi-Label Learning
with Label Enhancement [71.9401831465908]
We investigate single-positive multi-label learning (SPMLL) where each example is annotated with only one relevant label.
A novel method, Single-positive MultI-label learning with Label Enhancement, is proposed.
Experiments on benchmark datasets validate the effectiveness of the proposed method.
arXiv Detail & Related papers (2022-06-01T14:26:30Z) - Acknowledging the Unknown for Multi-label Learning with Single Positive
Labels [65.5889334964149]
Traditionally, all unannotated labels are assumed to be negative in single positive multi-label learning (SPML).
We propose an entropy-maximization (EM) loss that maximizes the entropy of the predicted probabilities for all unannotated labels (a minimal sketch of this idea appears after this list).
Considering the positive-negative label imbalance of unannotated labels, we propose asymmetric pseudo-labeling (APL) with asymmetric-tolerance strategies and a self-paced procedure to provide more precise supervision.
arXiv Detail & Related papers (2022-03-30T11:43:59Z) - Does Label Differential Privacy Prevent Label Inference Attacks? [26.87328379562665]
Label differential privacy (label-DP) is a popular framework for training private ML models on datasets with public features and sensitive private labels.
Despite its rigorous privacy guarantee, it has been observed that in practice label-DP does not preclude label inference attacks (LIAs).
arXiv Detail & Related papers (2022-02-25T20:57:29Z)
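The entropy-maximization idea summarized above for "Acknowledging the Unknown for Multi-label Learning with Single Positive Labels" can be sketched briefly. This is an illustration of the general idea under the assumption of a standard sigmoid multi-label classifier; the tensor names (`logits`, `annotated_mask`) and the normalization are hypothetical, not the authors' implementation.

```python
import torch

def em_loss_unannotated(logits, annotated_mask, eps=1e-8):
    """Entropy-maximization term for unannotated labels (sketch).

    logits:         (batch, n_labels) raw classifier scores.
    annotated_mask: (batch, n_labels) bool, True where a label was annotated.
    """
    p = torch.sigmoid(logits)
    # Binary entropy of each predicted label probability.
    entropy = -(p * torch.log(p + eps) + (1 - p) * torch.log(1 - p + eps))
    # Maximizing entropy on unannotated labels = minimizing its negative mean.
    unannotated = (~annotated_mask).float()
    return -(entropy * unannotated).sum() / unannotated.sum().clamp(min=1.0)
```
Predictions near 0.5 maximize this binary entropy, so the term discourages the model from committing to a negative (or positive) prediction for labels that were simply never annotated.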